

# Importing and exporting SQL Server databases using native backup and restore
<a name="SQLServer.Procedural.Importing"></a>

Amazon RDS supports native backup and restore for Microsoft SQL Server databases using full backup files (.bak files). When you use RDS, you access files stored in Amazon S3 rather than using the local file system on the database server.

For example, you can create a full backup from your local server, store it on S3, and then restore it onto an existing Amazon RDS DB instance. You can also make backups from RDS, store them on S3, and then restore them wherever you want.

Native backup and restore is available in all AWS Regions for Single-AZ and Multi-AZ DB instances, including Multi-AZ DB instances with read replicas. Native backup and restore is available for all editions of Microsoft SQL Server supported on Amazon RDS.

The following diagram shows the supported scenarios.

![\[Native Backup and Restore Architecture\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/SQL-bak-file.png)


Using native .bak files is usually the fastest way to back up and restore databases. Native backup and restore has many additional advantages. For example, you can do the following:
+ Migrate databases to or from Amazon RDS.
+ Move databases between RDS for SQL Server DB instances.
+ Migrate data, schemas, stored procedures, triggers, and other database code inside .bak files.
+ Back up and restore single databases instead of entire DB instances.
+ Create copies of databases for development, testing, training, and demonstrations.
+ Store and transfer backup files with Amazon S3, for an added layer of protection for disaster recovery.
+ Create native backups of databases that have Transparent Data Encryption (TDE) turned on, and restore those backups to on-premises databases. For more information, see [Support for Transparent Data Encryption in SQL Server](Appendix.SQLServer.Options.TDE.md).
+ Restore native backups of on-premises databases that have TDE turned on to RDS for SQL Server DB instances. For more information, see [Support for Transparent Data Encryption in SQL Server](Appendix.SQLServer.Options.TDE.md).

**Contents**
+ [Limitations and recommendations](#SQLServer.Procedural.Importing.Native.Limitations)
+ [Setting up for native backup and restore](SQLServer.Procedural.Importing.Native.Enabling.md)
  + [Manually creating an IAM role for native backup and restore](SQLServer.Procedural.Importing.Native.Enabling.md#SQLServer.Procedural.Importing.Native.Enabling.IAM)
+ [Using native backup and restore](SQLServer.Procedural.Importing.Native.Using.md)
  + [Backing up a database](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Using.Backup)
    + [Usage](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Backup.Syntax)
    + [Examples](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Backup.Examples)
  + [Restoring a database](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Using.Restore)
    + [Usage](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Restore.Syntax)
    + [Examples](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Restore.Examples)
  + [Restoring a log](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Restore.Log)
    + [Usage](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Restore.Log.Syntax)
    + [Examples](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Restore.Log.Examples)
  + [Finishing a database restore](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Finish.Restore)
    + [Usage](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Finish.Restore.Syntax)
  + [Working with partially restored databases](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Partially.Restored)
    + [Dropping a partially restored database](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Drop.Partially.Restored)
    + [Snapshot restore and point-in-time recovery behavior for partially restored databases](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Snapshot.Restore)
  + [Canceling a task](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Using.Cancel)
    + [Usage](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Cancel.Syntax)
  + [Tracking the status of tasks](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Tracking)
    + [Usage](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Tracking.Syntax)
    + [Examples](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Tracking.Examples)
    + [Response](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Tracking.Response)
+ [Compressing backup files](SQLServer.Procedural.Importing.Native.Compression.md)
+ [Troubleshooting](SQLServer.Procedural.Importing.Native.Troubleshooting.md)
+ [Importing and exporting SQL Server data using other methods](SQLServer.Procedural.Importing.Snapshots.md)
  + [Importing data into RDS for SQL Server by using a snapshot](SQLServer.Procedural.Importing.Snapshots.md#SQLServer.Procedural.Importing.Procedure)
    + [Import the data](SQLServer.Procedural.Importing.Snapshots.md#ImportData.SQLServer.Import)
      + [Generate and Publish Scripts Wizard](SQLServer.Procedural.Importing.Snapshots.md#ImportData.SQLServer.MgmtStudio.ScriptWizard)
      + [Import and Export Wizard](SQLServer.Procedural.Importing.Snapshots.md#ImportData.SQLServer.MgmtStudio.ImportExportWizard)
      + [Bulk copy](SQLServer.Procedural.Importing.Snapshots.md#ImportData.SQLServer.MgmtStudio.BulkCopy)
  + [Exporting data from RDS for SQL Server](SQLServer.Procedural.Importing.Snapshots.md#SQLServer.Procedural.Exporting)
    + [SQL Server Import and Export Wizard](SQLServer.Procedural.Importing.Snapshots.md#SQLServer.Procedural.Exporting.SSIEW)
    + [SQL Server Generate and Publish Scripts Wizard and bcp utility](SQLServer.Procedural.Importing.Snapshots.md#SQLServer.Procedural.Exporting.SSGPSW)
+ [Using BCP utility from Linux to import and export data](SQLServer.Procedural.Importing.BCP.Linux.md)
  + [Prerequisites](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Prerequisites)
  + [Installing SQL Server command-line tools on Linux](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Installing)
  + [Exporting data from RDS for SQL Server](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Exporting)
    + [Basic export syntax](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Exporting.Basic)
    + [Export example](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Exporting.Example)
  + [Importing data to RDS for SQL Server](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Importing)
    + [Basic import syntax](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Importing.Basic)
    + [Import example](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Importing.Example)
  + [Common BCP options](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Options)
  + [Best practices and considerations](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.BestPractices)
  + [Troubleshooting common issues](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Troubleshooting)

## Limitations and recommendations
<a name="SQLServer.Procedural.Importing.Native.Limitations"></a>

The following are some limitations to using native backup and restore: 
+ You can't back up to, or restore from, an Amazon S3 bucket in a different AWS Region from your Amazon RDS DB instance.
+ You can't restore a database with the same name as an existing database. Database names must be unique on each DB instance.
+ We strongly recommend that you don't restore backups from one time zone to a different time zone. If you do, you must audit your queries and applications for the effects of the time zone change.
+ RDS for Microsoft SQL Server has a size limit of 5 TB per file. For native backups of larger databases, you can use multifile backup.
+ The maximum database size that can be backed up to S3 depends on the available memory, CPU, I/O, and network resources on the DB instance. The larger the database, the more memory the backup agent consumes.
+ You can't back up to or restore from more than 10 backup files at the same time.
+ A differential backup is based on the last full backup. For differential backups to work, you can't take a snapshot between the last full backup and the differential backup. If you want a differential backup, but a manual or automated snapshot exists, then do another full backup before proceeding with the differential backup.
+ Differential and log restores aren't supported for databases with files that have their file_guid (unique identifier) set to `NULL`.
+ You can run up to two backup or restore tasks at the same time.
+ You can't perform native log backups from SQL Server on Amazon RDS.
+ RDS supports native restores of databases up to 64 TiB. Native restores of databases on SQL Server Express Edition are limited to 10 GB.
+ You can't do a native backup during the maintenance window, or any time Amazon RDS is in the process of taking a snapshot of the database. If a native backup task overlaps with the RDS daily backup window, the native backup task is canceled.
+ On Multi-AZ DB instances, you can only natively restore databases that are backed up in the full recovery model.
+ Calling the RDS procedures for native backup and restore within a transaction isn't supported.
+ Use a symmetric encryption AWS KMS key to encrypt your backups. Amazon RDS doesn't support asymmetric KMS keys. For more information, see [Creating symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html#create-symmetric-cmk) in the *AWS Key Management Service Developer Guide*.
+ Native backup files are encrypted with the specified KMS key using the "Encryption-Only" crypto mode. The same crypto mode is used when you restore encrypted backup files.
+ You can't restore a database that contains a FILESTREAM file group.
+ Amazon S3 server-side encryption with AWS KMS (SSE-KMS) is supported through your S3 bucket's default encryption configuration when you pass `@enable_bucket_default_encryption=1` to the backup stored procedure. By default, the restore supports the S3 object's server-side encryption.

  When you provide a KMS key to a stored procedure, any native backup and restores are encrypted and decrypted on the client-side with the KMS key. AWS stores the backups in the S3 bucket with SSE-S3 when `@enable_bucket_default_encryption=0` or with your S3 bucket's configured default encryption key when `@enable_bucket_default_encryption=1`.
+ When using S3 access points, the access point can't be configured to use an RDS internal VPC.
+ For the highest performance, we recommend using directory buckets, or access points for directory buckets, if they're available in your AWS Region.

If your database can be offline while the backup file is created, copied, and restored, we recommend that you use native backup and restore to migrate it to RDS. If your on-premises database can't be offline, we recommend that you use AWS Database Migration Service to migrate your database to Amazon RDS. For more information, see [What is AWS Database Migration Service?](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html)

Native backup and restore isn't intended to replace the data recovery capabilities of the cross-region snapshot copy feature. We recommend that you use snapshot copy to copy your database snapshot to another AWS Region for cross-region disaster recovery in Amazon RDS. For more information, see [Copying a DB snapshot for Amazon RDS](USER_CopySnapshot.md).
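Because at most two backup or restore tasks can run at the same time, it can help to check for in-progress tasks before starting a new one. The following sketch uses the `rds_task_status` stored procedure (covered later under Tracking the status of tasks); the database name `mydatabase` is a placeholder.

```
-- List the status of all backup and restore tasks on the DB instance
exec msdb.dbo.rds_task_status;

-- Check tasks for a specific database only
exec msdb.dbo.rds_task_status @db_name='mydatabase';
```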

# Setting up for native backup and restore
<a name="SQLServer.Procedural.Importing.Native.Enabling"></a>

To set up for native backup and restore, you need three components:

1. An Amazon S3 bucket to store your backup files.

   You must have an S3 bucket to store your backup files, and you must upload any backups that you want to migrate to RDS to that bucket. If you already have an Amazon S3 bucket, you can use it. If you don't, you can [create a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingaBucket.html). Alternatively, you can have a new bucket created for you when you add the `SQLSERVER_BACKUP_RESTORE` option by using the AWS Management Console.

   For information on using Amazon S3, see the [Amazon Simple Storage Service User Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/).

1. An AWS Identity and Access Management (IAM) role to access the bucket.

   If you already have an IAM role, you can use that. You can choose to have a new IAM role created for you when you add the `SQLSERVER_BACKUP_RESTORE` option by using the AWS Management Console. Alternatively, you can create a new one manually.

   If you want to create a new IAM role manually, take the approach discussed in the next section. Do the same if you want to attach trust relationships and permissions policies to an existing IAM role.

1. The `SQLSERVER_BACKUP_RESTORE` option added to an option group on your DB instance.

   To enable native backup and restore on your DB instance, you add the `SQLSERVER_BACKUP_RESTORE` option to an option group on your DB instance. For more information and instructions, see [Support for native backup and restore in SQL Server](Appendix.SQLServer.Options.BackupRestore.md).
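After the option is active, the RDS stored procedures for native backup and restore become available in the `msdb` database. As a quick sanity check, assuming you can already connect to the instance, you can list them; this query is an informal sketch, not an official verification step.

```
-- List the RDS-provided stored procedures in msdb
select name
from msdb.sys.procedures
where name like 'rds[_]%'
order by name;
```

If procedures such as `rds_backup_database` appear in the results, the option has been applied.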

## Manually creating an IAM role for native backup and restore
<a name="SQLServer.Procedural.Importing.Native.Enabling.IAM"></a>

If you want to manually create a new IAM role to use with native backup and restore, you can do so. In this case, you create a role to delegate permissions from the Amazon RDS service to your Amazon S3 bucket. When you create an IAM role, you attach a trust relationship and a permissions policy. The trust relationship allows RDS to assume this role. The permissions policy defines the actions this role can perform. For more information about creating the role, see [Creating a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html).

For the native backup and restore feature, use trust relationships and permissions policies similar to the examples in this section. In the following example, we use the service principal name `rds.amazonaws.com` as an alias for all service accounts. In the other examples, we specify an Amazon Resource Name (ARN) to identify another account, user, or role that we're granting access to in the trust policy.

We recommend using the [`aws:SourceArn`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn) and [`aws:SourceAccount`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceaccount) global condition context keys in resource-based trust relationships to limit the service's permissions to a specific resource. This is the most effective way to protect against the [confused deputy problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html).

You might use both global condition context keys and have the `aws:SourceArn` value contain the account ID. In this case, the `aws:SourceAccount` value and the account in the `aws:SourceArn` value must use the same account ID when used in the same statement.
+ Use `aws:SourceArn` if you want cross-service access for a single resource.
+ Use `aws:SourceAccount` if you want to allow any resource in that account to be associated with the cross-service use.

In the trust relationship, make sure to use the `aws:SourceArn` global condition context key with the full ARN of the resources accessing the role. For native backup and restore, make sure to include both the DB option group and the DB instances, as shown in the following example.

**Example of trust relationship with global condition context key for native backup and restore**

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "rds.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:SourceArn": [
                        "arn:aws:rds:Region:123456789012:db:db_instance_identifier",
                        "arn:aws:rds:Region:123456789012:og:option_group_name"
                    ],
                    "aws:SourceAccount": "123456789012"
                }
            }
        }
    ]
}
```

The following example uses an ARN to specify a resource. For more information on using ARNs, see [Amazon Resource Names (ARNs)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).

**Example of permissions policy for native backup and restore without encryption support**

```
{
    "Version": "2012-10-17",
    "Statement":
    [
        {
        "Effect": "Allow",
        "Action":
            [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        {
        "Effect": "Allow",
        "Action":
            [
                "s3:GetObjectAttributes",
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}
```

**Example permissions policy for native backup and restore with encryption support**  
If you want to encrypt your backup files, include an encryption key in your permissions policy. For more information about encryption keys, see [Getting started](https://docs.aws.amazon.com/kms/latest/developerguide/getting-started.html) in the *AWS Key Management Service Developer Guide*.  
You must use a symmetric encryption KMS key to encrypt your backups. Amazon RDS doesn't support asymmetric KMS keys. For more information, see [Creating symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html#create-symmetric-cmk) in the *AWS Key Management Service Developer Guide*.  
The IAM role must also be a key user and key administrator for the KMS key; that is, it must be specified in the key policy. For more information, see [Creating symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html#create-symmetric-cmk) in the *AWS Key Management Service Developer Guide*.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessToKey",
      "Effect": "Allow",
      "Action": [
        "kms:DescribeKey",
        "kms:GenerateDataKey",
        "kms:Encrypt",
        "kms:Decrypt"
      ],
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/key-id"
    },
    {
      "Sid": "AllowAccessToS3",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
    },
    {
      "Sid": "GetS3Info",
      "Effect": "Allow",
      "Action": [
        "s3:GetObjectAttributes",
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
    }
  ]
}
```

**Example permissions policy for native backup and restore using access points without encryption support**  
The actions required to use S3 access points are the same as for S3 buckets. The resource path is updated to match the S3 access point ARN pattern.  
Access points must be configured with **Network origin: Internet**, because RDS doesn't expose its internal VPC to access points. Even so, S3 traffic from RDS instances doesn't traverse the public internet; it's routed through private VPCs.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:us-east-1:111122223333:accesspoint/amzn-s3-demo-ap",
                "arn:aws:s3:::underlying-bucket"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectAttributes",
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": [
                "arn:aws:s3:us-east-1:111122223333:accesspoint/amzn-s3-demo-ap/*",
                "arn:aws:s3:::underlying-bucket/*"
            ]
        }
    ]
}
```

**Example permissions policy for native backup and restore using access points for directory buckets without encryption support**  
Directory buckets use a different, [session-based authorization mechanism](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-authenticating-authorizing.html) than general purpose buckets, so the only permission required for native backup and restore is the bucket-level `s3express:CreateSession` permission. To configure object-level access, you must use [access points for directory buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-points-directory-buckets-policies.html).  
Access points must be configured with **Network origin: Internet**, because RDS doesn't expose its internal VPC to access points. Even so, S3 traffic from RDS instances doesn't traverse the public internet; it's routed through private VPCs.

```
{
    "Version": "2012-10-17",
    "Statement":
    [
        {
        "Effect": "Allow",
        "Action": "s3express:CreateSession",
        "Resource": 
            [
                "arn:aws:s3express:us-east-1:111122223333:accesspoint/amzn-s3-demo-accesspoint--use1-az6--xa-s3",
                "arn:aws:s3express:us-east-1:111122223333:bucket/amzn-s3-demo-bucket--use1-az6--x-s3"
            ]
        }
    ]
}
```

# Using native backup and restore
<a name="SQLServer.Procedural.Importing.Native.Using"></a>

After you have enabled and configured native backup and restore, you can start using it. First, you connect to your Microsoft SQL Server database, and then you call an Amazon RDS stored procedure to do the work. For instructions on connecting to your database, see [Connecting to your Microsoft SQL Server DB instance](USER_ConnectToMicrosoftSQLServerInstance.md). 

Some of the stored procedures require that you provide an Amazon Resource Name (ARN) for your Amazon S3 bucket and file. The format for the ARN is `arn:aws:s3:::bucket_name/file_name.extension`. Amazon S3 doesn't require an account number or AWS Region in ARNs.

If you also provide an optional KMS key, the format for the ARN of the key is `arn:aws:kms:region:account-id:key/key-id`. For more information, see [Amazon Resource Names (ARNs) and AWS service namespaces](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). You must use a symmetric encryption KMS key to encrypt your backups. Amazon RDS doesn't support asymmetric KMS keys. For more information, see [Creating symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html#create-symmetric-cmk) in the *AWS Key Management Service Developer Guide*.

**Note**  
Whether or not you use a KMS key, the native backup and restore tasks enable server-side Advanced Encryption Standard (AES) 256-bit encryption through SSE-S3 by default for files uploaded to S3. Passing in `@enable_bucket_default_encryption=1` to the backup stored procedure uses your S3 bucket's configured default encryption key.
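To illustrate how the pieces fit together, the following sketch backs up a database and then restores it under a new name, using the `rds_backup_database` and `rds_restore_database` stored procedures covered in the topics that follow. The bucket, file, and database names are placeholders.

```
-- Back up the source database to S3
exec msdb.dbo.rds_backup_database
    @source_db_name='mydatabase',
    @s3_arn_to_backup_to='arn:aws:s3:::mybucket/mydatabase.bak';

-- Restore the backup as a new database on the target instance
exec msdb.dbo.rds_restore_database
    @restore_db_name='mydatabase_copy',
    @s3_arn_to_restore_from='arn:aws:s3:::mybucket/mydatabase.bak';
```

Each procedure returns information about the task, including a task ID that you can use to track its status or cancel it.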

For instructions on how to call each stored procedure, see the following topics:
+ [Backing up a database](#SQLServer.Procedural.Importing.Native.Using.Backup)
+ [Restoring a database](#SQLServer.Procedural.Importing.Native.Using.Restore)
+ [Restoring a log](#SQLServer.Procedural.Importing.Native.Restore.Log)
+ [Finishing a database restore](#SQLServer.Procedural.Importing.Native.Finish.Restore)
+ [Working with partially restored databases](#SQLServer.Procedural.Importing.Native.Partially.Restored)
+ [Canceling a task](#SQLServer.Procedural.Importing.Native.Using.Cancel)
+ [Tracking the status of tasks](#SQLServer.Procedural.Importing.Native.Tracking)

## Backing up a database
<a name="SQLServer.Procedural.Importing.Native.Using.Backup"></a>

To back up your database, use the `rds_backup_database` stored procedure.

**Note**  
You can't back up a database during the maintenance window, or while Amazon RDS is taking a snapshot. 

### Usage
<a name="SQLServer.Procedural.Importing.Native.Backup.Syntax"></a>

```
exec msdb.dbo.rds_backup_database
    @source_db_name='database_name',
    @s3_arn_to_backup_to='arn:aws:s3:::bucket_name/file_name.extension',
    [@kms_master_key_arn='arn:aws:kms:region:account-id:key/key-id'],
    [@overwrite_s3_backup_file=0|1],
    [@block_size=512|1024|2048|4096|8192|16384|32768|65536],
    [@max_transfer_size=n],
    [@buffer_count=n],
    [@type='DIFFERENTIAL|FULL'],
    [@number_of_files=n],
    [@enable_bucket_default_encryption=0|1];
```

The following parameters are required:
+ `@source_db_name` – The name of the database to back up.
+ `@s3_arn_to_backup_to` – The ARN indicating the Amazon S3 bucket, access point, directory bucket, or access point for directory bucket to use for the backup, plus the name of the backup file.

  The file can have any extension, but `.bak` is typically used. Note that access point ARNs must use the format `arn:aws:s3:us-east-1:111122223333:accesspoint/access-point-name/object/key`.

The following parameters are optional:
+ `@kms_master_key_arn` – The ARN for the symmetric encryption KMS key to use to encrypt the item.
  + You can't use the default encryption key. If you use the default key, the database won't be backed up.
  + If you don't specify a KMS key identifier, the backup file won't be encrypted. For more information, see [Encrypting Amazon RDS resources](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html).
  + When you specify a KMS key, client-side encryption is used.
  + Amazon RDS doesn't support asymmetric KMS keys. For more information, see [Creating symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html#create-symmetric-cmk) in the *AWS Key Management Service Developer Guide*.
+ `@overwrite_s3_backup_file` – A value that indicates whether to overwrite an existing backup file.
  + `0` – Doesn't overwrite an existing file. This value is the default.

    Setting `@overwrite_s3_backup_file` to 0 returns an error if the file already exists.
  + `1` – Overwrites an existing file that has the specified name, even if it isn't a backup file.
+ `@type` – The type of backup.
  + `DIFFERENTIAL` – Makes a differential backup.
  + `FULL` – Makes a full backup. This value is the default.

  A differential backup is based on the last full backup. For differential backups to work, you can't take a snapshot between the last full backup and the differential backup. If you want a differential backup, but a snapshot exists, then do another full backup before proceeding with the differential backup.

  You can look for the last full backup or snapshot using the following example SQL query:

  ```
  select top 1
  database_name
  , 	backup_start_date
  , 	backup_finish_date
  from    msdb.dbo.backupset
  where   database_name='mydatabase'
  and     type = 'D'
  order by backup_start_date desc;
  ```
+ `@number_of_files` – The number of files into which the backup will be divided (chunked). The maximum number is 10.
  + Multifile backup is supported for both full and differential backups.
  + If you enter a value of 1 or omit the parameter, a single backup file is created.

  Provide the prefix that the files have in common, then suffix that with an asterisk (`*`). The asterisk can be anywhere in the *file_name* part of the S3 ARN. The asterisk is replaced by a series of alphanumeric strings in the generated files, starting with `1-of-number_of_files`.

  For example, if the file names in the S3 ARN are `backup*.bak` and you set `@number_of_files=4`, the backup files generated are `backup1-of-4.bak`, `backup2-of-4.bak`, `backup3-of-4.bak`, and `backup4-of-4.bak`.
  + If any of the file names already exists, and `@overwrite_s3_backup_file` is 0, an error is returned.
  + Multifile backups can have only one asterisk in the *file_name* part of the S3 ARN.
  + Single-file backups can have any number of asterisks in the *file_name* part of the S3 ARN. Asterisks aren't removed from the generated file name.
+ `@block_size` – The physical block size (in bytes) for backup operations. Valid values are 512, 1024, 2048, 4096, 8192, 16384, 32768, and 65536.
+ `@max_transfer_size` – The maximum amount of data (in bytes) transferred per I/O operation during the backup process. Valid values are multiples of 65536 bytes (64 KB), up to 4194304 bytes (4 MB).
+ `@buffer_count` – The total number of I/O buffers to be used for the backup process.
+ `@enable_bucket_default_encryption` – A value that indicates whether to use the S3 bucket's default encryption configuration for server-side encryption in S3. Directory buckets always use the bucket's default encryption configuration regardless of this setting.
  + `0` – Server-side encryption uses Advanced Encryption Standard (AES) 256-bit encryption through SSE-S3.
  + `1` – Server-side encryption uses your S3 bucket’s configured [default encryption](https://docs.aws.amazon.com//AmazonS3/latest/userguide/bucket-encryption.html). 

### Examples
<a name="SQLServer.Procedural.Importing.Native.Backup.Examples"></a>

**Example of differential backup**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup1.bak',
@overwrite_s3_backup_file=1,
@type='DIFFERENTIAL';
```

**Example of full backup with client-side encryption**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup1.bak',
@kms_master_key_arn='arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE',
@overwrite_s3_backup_file=1,
@type='FULL';
```

**Example of multifile backup**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@number_of_files=4;
```

**Example of multifile differential backup**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@type='DIFFERENTIAL',
@number_of_files=4;
```

**Example of multifile backup with encryption**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@kms_master_key_arn='arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE',
@number_of_files=4;
```

**Example of multifile backup with S3 overwrite**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@overwrite_s3_backup_file=1,
@number_of_files=4;
```

**Example of backup with block size**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@block_size=512;
```

**Example of multifile backup with `@max_transfer_size` and `@buffer_count`**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@number_of_files=4,
@max_transfer_size=4194304,
@buffer_count=10;
```

**Example of single-file backup with the `@number_of_files` parameter**  
This example generates a backup file named `backup*.bak`.  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@number_of_files=1;
```

**Example of full backup with server-side encryption**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@overwrite_s3_backup_file=1,
@type='FULL',
@enable_bucket_default_encryption=1;
```

**Example of full backup using an access point**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:us-east-1:111122223333:accesspoint/my-access-point/object/backup1.bak',
@overwrite_s3_backup_file=1,
@type='FULL';
```

**Example of full backup using an access point for a directory bucket**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3express:us-east-1:123456789012:accesspoint/my-access-point--use1-az6--xa-s3/object/backup1.bak',
@overwrite_s3_backup_file=1,
@type='FULL';
```

## Restoring a database
<a name="SQLServer.Procedural.Importing.Native.Using.Restore"></a>

To restore your database, call the `rds_restore_database` stored procedure. Amazon RDS creates an initial snapshot of the database after the restore task is complete and the database is open.

### Usage
<a name="SQLServer.Procedural.Importing.Native.Restore.Syntax"></a>

```
exec msdb.dbo.rds_restore_database
	@restore_db_name='database_name',
	@s3_arn_to_restore_from='arn:aws:s3:::bucket_name/file_name.extension',
	@with_norecovery=0|1,
	[@keep_cdc=0|1],
	[@data_file_volume='D:|H:|I:|J:'],
	[@log_file_volume='D:|H:|I:|J:'],
	[@kms_master_key_arn='arn:aws:kms:region:account-id:key/key-id'],
	[@block_size=512|1024|2048|4096|8192|16384|32768|65536],
	[@max_transfer_size=n],
	[@buffer_count=n],
	[@type='DIFFERENTIAL|FULL'];
```

The following parameters are required:
+ `@restore_db_name` – The name of the database to restore. Database names are unique. You can't restore a database with the same name as an existing database.
+ `@s3_arn_to_restore_from` – The ARN indicating the Amazon S3 prefix and names of the backup files used to restore the database.
  + For a single-file backup, provide the entire file name.
  + For a multifile backup, provide the prefix that the files have in common, then suffix that with an asterisk (`*`).
    + If using a directory bucket, the ARN must end with `/*` due to [differences for directory buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-differences.html).
  + If `@s3_arn_to_restore_from` is empty, the following error message is returned: S3 ARN prefix cannot be empty.

The following parameter is required for differential restores, but optional for full restores:
+ `@with_norecovery` – The recovery clause to use for the restore operation.
  + Set it to `0` to restore with RECOVERY. In this case, the database is online after the restore.
  + Set it to `1` to restore with NORECOVERY. In this case, the database remains in the RESTORING state after restore task completion. With this approach, you can do later differential restores.
  + For `DIFFERENTIAL` restores, specify `0` or `1`.
  + For `FULL` restores, this value defaults to `0`.

The following parameters are optional:
+ `@keep_cdc` – Indicates whether to retain Change Data Capture (CDC) configuration on the restored database. Set to `1` to enable KEEP\_CDC, `0` to disable. The default value is `0`.
+ `@data_file_volume` – Specifies the drive letter for database data files. The default value is `D:`.
+ `@log_file_volume` – Specifies the drive letter for database log files. The default value is `D:`.
+ `@kms_master_key_arn` – If you encrypted the backup file, the KMS key to use to decrypt the file.

  When you specify a KMS key, client-side encryption is used.
+ `@type` – The type of restore. Valid types are `DIFFERENTIAL` and `FULL`. The default value is `FULL`.
+ `@block_size` – The physical block size (in bytes) for the restore. Valid values are 512, 1024, 2048, 4096, 8192, 16384, 32768, and 65536.
+ `@max_transfer_size` – The maximum amount of data (in bytes) transferred per I/O operation during the restore. Valid values are multiples of 65536 bytes (64 KB), up to 4194304 bytes (4 MB).
+ `@buffer_count` – The total number of I/O buffers to be used for the restore.
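The valid-value constraints on `@block_size` and `@max_transfer_size` described above can be checked before submitting a task. This is an illustrative Python sketch, not an AWS API; the helper name is hypothetical, and it assumes 65536 bytes is the smallest valid transfer size (the smallest positive multiple of 64 KB):

```python
# Illustrative check (not an AWS API) of the parameter constraints above.
VALID_BLOCK_SIZES = {512, 1024, 2048, 4096, 8192, 16384, 32768, 65536}

def validate_transfer_params(block_size=None, max_transfer_size=None):
    """Raise ValueError if a parameter falls outside the documented ranges."""
    if block_size is not None and block_size not in VALID_BLOCK_SIZES:
        raise ValueError(f"invalid @block_size: {block_size}")
    if max_transfer_size is not None and (
            max_transfer_size % 65536 != 0
            or not 65536 <= max_transfer_size <= 4194304):
        raise ValueError(f"invalid @max_transfer_size: {max_transfer_size}")
```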

**Note**  
For differential restores, either the database must be in the RESTORING state or a task must already exist that restores with NORECOVERY.  
You can't restore later differential backups while the database is online.  
You can't submit a restore task for a database that already has a pending restore task with RECOVERY.  
Full restores with both NORECOVERY and KEEP\_CDC aren't supported.  
Native restores aren't supported on instances that have cross-Region read replicas.  
For supported configurations, restoring a database on a Multi-AZ instance with read replicas is similar to restoring a database on a Multi-AZ instance. You don't have to take any additional actions to restore a database on a replica.

### Examples
<a name="SQLServer.Procedural.Importing.Native.Restore.Examples"></a>

**Example of single-file restore**  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak';
```

**Example of multifile restore**  
To avoid errors when restoring multiple files, make sure that all the backup files have the same prefix, and that no other files use that prefix.  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup*';
```

**Example of full database restore with RECOVERY**  
The following three examples perform the same task, full restore with RECOVERY.  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak';
```

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='FULL';
```

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='FULL',
@with_norecovery=0;
```

**Example of full database restore with encryption**  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@kms_master_key_arn='arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE';
```

**Example of restore with block size**  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@block_size=512;
```

**Example of multifile restore with `@max_transfer_size` and `@buffer_count`**  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup*',
@max_transfer_size=4194304,
@buffer_count=10;
```

**Example of full database restore with NORECOVERY**  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='FULL',
@with_norecovery=1;
```

**Example of differential restore with NORECOVERY**  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='DIFFERENTIAL',
@with_norecovery=1;
```

**Example of differential restore with RECOVERY**  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='DIFFERENTIAL',
@with_norecovery=0;
```

**Example of full database restore with RECOVERY using an access point**  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:us-east-1:111122223333:accesspoint/my-access-point/object/backup1.bak',
@with_norecovery=0;
```

**Example of full database restore with KEEP\_CDC**  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@keep_cdc=1;
```

## Restoring a log
<a name="SQLServer.Procedural.Importing.Native.Restore.Log"></a>

To restore your log, call the `rds_restore_log` stored procedure.

### Usage
<a name="SQLServer.Procedural.Importing.Native.Restore.Log.Syntax"></a>

```
exec msdb.dbo.rds_restore_log 
	@restore_db_name='database_name',
	@s3_arn_to_restore_from='arn:aws:s3:::bucket_name/log_file_name.extension',
	[@kms_master_key_arn='arn:aws:kms:region:account-id:key/key-id'],
	[@with_norecovery=0|1],
	[@keep_cdc=0|1],
	[@stopat='datetime'],
	[@block_size=512|1024|2048|4096|8192|16384|32768|65536],
	[@max_transfer_size=n],
	[@buffer_count=n];
```

The following parameters are required:
+ `@restore_db_name` – The name of the database whose log to restore.
+ `@s3_arn_to_restore_from` – The ARN indicating the Amazon S3 prefix and name of the log file used to restore the log. The file can have any extension, but `.trn` is usually used.

  If `@s3_arn_to_restore_from` is empty, the following error message is returned: S3 ARN prefix cannot be empty.

The following parameters are optional:
+ `@keep_cdc` – Indicates whether to retain Change Data Capture (CDC) configuration on the restored database. Set to `1` to enable KEEP\_CDC, `0` to disable. The default value is `0`.
+ `@kms_master_key_arn` – If you encrypted the log, the KMS key to use to decrypt the log.
+ `@with_norecovery` – The recovery clause to use for the restore operation. This value defaults to `1`.
  + Set it to `0` to restore with RECOVERY. In this case, the database is online after the restore. You can't restore further log backups while the database is online.
  + Set it to `1` to restore with NORECOVERY. In this case, the database remains in the RESTORING state after restore task completion. With this approach, you can do later log restores.
+ `@stopat` – A value that specifies that the database is restored to its state at the date and time specified (in datetime format). Only transaction log records written before the specified date and time are applied to the database.

  If this parameter isn't specified (it is NULL), the complete log is restored.
+ `@block_size` – The physical block size (in bytes) for the restore. Valid values are 512, 1024, 2048, 4096, 8192, 16384, 32768, and 65536.
+ `@max_transfer_size` – The maximum amount of data (in bytes) transferred per I/O operation during the restore. Valid values are multiples of 65536 bytes (64 KB), up to 4194304 bytes (4 MB).
+ `@buffer_count` – The total number of I/O buffers to be used for the restore.

**Note**  
For log restores, either the database must be in a state of restoring or a task must already exist that restores with NORECOVERY.  
You can't restore log backups while the database is online.  
You can't submit a log restore task on a database that already has a pending restore task with RECOVERY.

### Examples
<a name="SQLServer.Procedural.Importing.Native.Restore.Log.Examples"></a>

**Example of log restore**  

```
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn';
```

**Example of log restore with encryption**  

```
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@kms_master_key_arn='arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE';
```

**Example of log restore with NORECOVERY**  
The following two examples perform the same task, log restore with NORECOVERY.  

```
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@with_norecovery=1;
```

```
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn';
```

**Example of restore with block size**  

```
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@block_size=512;
```

**Example of log restore with RECOVERY**  

```
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@with_norecovery=0;
```

**Example of log restore with STOPAT clause**  

```
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@with_norecovery=0,
@stopat='2019-12-01 03:57:09';
```

**Example of log restore with KEEP\_CDC**  

```
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@keep_cdc=1;
```

## Finishing a database restore
<a name="SQLServer.Procedural.Importing.Native.Finish.Restore"></a>

If the last restore task on the database was performed using `@with_norecovery=1`, the database is now in the RESTORING state. Open this database for normal operation by using the `rds_finish_restore` stored procedure.

### Usage
<a name="SQLServer.Procedural.Importing.Native.Finish.Restore.Syntax"></a>

```
exec msdb.dbo.rds_finish_restore @db_name='database_name';
```

**Note**  
To use this approach, the database must be in the RESTORING state without any pending restore tasks.  
To finish restoring the database, use the master login. Or use the user login that most recently restored the database or log with NORECOVERY.

## Working with partially restored databases
<a name="SQLServer.Procedural.Importing.Native.Partially.Restored"></a>

### Dropping a partially restored database
<a name="SQLServer.Procedural.Importing.Native.Drop.Partially.Restored"></a>

To drop a partially restored database (left in the RESTORING state), use the `rds_drop_database` stored procedure.

```
exec msdb.dbo.rds_drop_database @db_name='database_name';
```

**Note**  
You can't submit a DROP database request for a database that already has a pending restore or finish restore task.  
To drop the database, use the master login. Or use the user login that most recently restored the database or log with NORECOVERY.

### Snapshot restore and point-in-time recovery behavior for partially restored databases
<a name="SQLServer.Procedural.Importing.Native.Snapshot.Restore"></a>

Partially restored databases in the source instance (left in the RESTORING state) are dropped from the target instance during snapshot restore and point-in-time recovery.

## Canceling a task
<a name="SQLServer.Procedural.Importing.Native.Using.Cancel"></a>

To cancel a backup or restore task, call the `rds_cancel_task` stored procedure.

**Note**  
You can't cancel a FINISH\_RESTORE task.

### Usage
<a name="SQLServer.Procedural.Importing.Native.Cancel.Syntax"></a>

```
exec msdb.dbo.rds_cancel_task @task_id=ID_number;
```

The following parameter is required:
+ `@task_id` – The ID of the task to cancel. You can get the task ID by calling `rds_task_status`. 

## Tracking the status of tasks
<a name="SQLServer.Procedural.Importing.Native.Tracking"></a>

To track the status of your backup and restore tasks, call the `rds_task_status` stored procedure. If you don't provide any parameters, the stored procedure returns the status of all tasks. The status for tasks is updated approximately every two minutes. Task history is retained for 36 days.

### Usage
<a name="SQLServer.Procedural.Importing.Native.Tracking.Syntax"></a>

```
exec msdb.dbo.rds_task_status
	[@db_name='database_name'],
	[@task_id=ID_number];
```

The following parameters are optional: 
+ `@db_name` – The name of the database to show the task status for.
+ `@task_id` – The ID of the task to show the task status for.

### Examples
<a name="SQLServer.Procedural.Importing.Native.Tracking.Examples"></a>

**Example of listing the status for a specific task**  

```
exec msdb.dbo.rds_task_status @task_id=5;
```

**Example of listing the status for a specific database and task**  

```
exec msdb.dbo.rds_task_status
@db_name='my_database',
@task_id=5;
```

**Example of listing all tasks and their statuses on a specific database**  

```
exec msdb.dbo.rds_task_status @db_name='my_database';
```

**Example of listing all tasks and their statuses on the current instance**  

```
exec msdb.dbo.rds_task_status;
```
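When scripting backups and restores, it's common to poll `rds_task_status` until the task finishes. The following is an illustrative Python sketch, not an AWS API: `run_query` is a placeholder for however you execute T-SQL against the instance (for example, via pyodbc), and the terminal `lifecycle` values shown are assumptions — check them against the statuses your instance actually reports.

```python
import time

# Illustrative sketch (not an AWS API): poll rds_task_status until the task
# reaches a terminal lifecycle. run_query is a caller-supplied function that
# executes the T-SQL and returns the task's lifecycle value as a string.
def wait_for_task(run_query, task_id, poll_seconds=120):
    # Assumed terminal statuses; verify against your instance's output.
    terminal = {"SUCCESS", "ERROR", "CANCELLED"}
    while True:
        lifecycle = run_query(
            f"exec msdb.dbo.rds_task_status @task_id={task_id};")
        if lifecycle in terminal:
            return lifecycle
        time.sleep(poll_seconds)  # status refreshes roughly every 2 minutes
```

A two-minute poll interval matches how often the task status is updated, so polling faster gains little.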

### Response
<a name="SQLServer.Procedural.Importing.Native.Tracking.Response"></a>

The `rds_task_status` stored procedure returns the following columns.


****  

| Column | Description | 
| --- | --- | 
| `task_id` |  The ID of the task.   | 
| `task_type` |  Task type depending on the input parameters, as follows: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.Native.Using.html) Amazon RDS creates an initial snapshot of the database after it is open on completion of the following restore tasks: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.Native.Using.html)  | 
| `database_name` |  The name of the database that the task is associated with.   | 
| `% complete` |  The progress of the task as a percent value.   | 
| `duration (mins)` |  The amount of time spent on the task, in minutes.   | 
| `lifecycle` |  The status of the task. The possible statuses are the following:  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.Native.Using.html)  | 
| `task_info` |  Additional information about the task.  If an error occurs while backing up or restoring a database, this column contains information about the error. For a list of possible errors, and mitigation strategies, see [Troubleshooting](SQLServer.Procedural.Importing.Native.Troubleshooting.md).   | 
| `last_updated` |  The date and time that the task status was last updated. The status is updated after every 5 percent of progress.  | 
| `created_at` | The date and time that the task was created. | 
| `S3_object_arn` | The ARN indicating the Amazon S3 prefix and the name of the file that is being backed up or restored. | 
| `overwrite_s3_backup_file` |  The value of the `@overwrite_s3_backup_file` parameter specified when calling a backup task. For more information, see [Backing up a database](#SQLServer.Procedural.Importing.Native.Using.Backup).  | 
| `KMS_master_key_arn` | The ARN for the KMS key used for encryption (for backup) and decryption (for restore). | 
| `filepath` | Not applicable to native backup and restore tasks. | 
| `overwrite_file` | Not applicable to native backup and restore tasks. | 

# Compressing backup files
<a name="SQLServer.Procedural.Importing.Native.Compression"></a>

To save space in your Amazon S3 bucket, you can compress your backup files. For more information, see [Backup compression](https://msdn.microsoft.com/en-us/library/bb964719.aspx) in the Microsoft documentation. 

Compressing your backup files is supported for the following database editions: 
+ Microsoft SQL Server Enterprise Edition 
+ Microsoft SQL Server Standard Edition 

To verify the compression option for your backup files, run the following code:

```
exec rdsadmin.dbo.rds_show_configuration 'S3 backup compression';
```

To turn on compression for your backup files, run the following code:

```
exec rdsadmin.dbo.rds_set_configuration 'S3 backup compression', 'true';
```

To turn off compression for your backup files, run the following code: 

```
exec rdsadmin.dbo.rds_set_configuration 'S3 backup compression', 'false';
```

# Troubleshooting
<a name="SQLServer.Procedural.Importing.Native.Troubleshooting"></a>

The following are issues you might encounter when you use native backup and restore.


****  

| Issue | Troubleshooting suggestions | 
| --- | --- | 
|  Database backup/restore option is not enabled yet or is in the process of being enabled. Please try again later.  |  Make sure that you have added the `SQLSERVER_BACKUP_RESTORE` option to the DB option group associated with your DB instance. For more information, see [Adding the native backup and restore option](Appendix.SQLServer.Options.BackupRestore.md#Appendix.SQLServer.Options.BackupRestore.Add).  | 
|  The EXECUTE permission was denied on the object '*rds\_backup\_database*', database 'msdb', schema 'dbo'.  |  Make sure that you are using the master user when executing the stored procedure. If you encounter this error even after being logged in as the master user, it might be due to the admin user permissions being misaligned. To reset the master user, use the AWS Management Console. See [Resetting the db\_owner role membership for master user for Amazon RDS for SQL Server](Appendix.SQLServer.CommonDBATasks.ResetPassword.md).   | 
|  The EXECUTE permission was denied on the object '*rds\_restore\_database*', database 'msdb', schema 'dbo'.  |  Make sure that you are using the master user when executing the stored procedure. If you encounter this error even after being logged in as the master user, it might be due to the admin user permissions being misaligned. To reset the master user, use the AWS Management Console. See [Resetting the db\_owner role membership for master user for Amazon RDS for SQL Server](Appendix.SQLServer.CommonDBATasks.ResetPassword.md).   | 
|  Access Denied  | The backup or restore process can't access the backup file. This is usually caused by issues like the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.Native.Troubleshooting.html)  | 
|  BACKUP DATABASE WITH COMPRESSION isn't supported on <edition\_name> Edition  |  Compressing your backup files is only supported for Microsoft SQL Server Enterprise Edition and Standard Edition. For more information, see [Compressing backup files](SQLServer.Procedural.Importing.Native.Compression.md).   | 
|  Key <ARN> does not exist  |  You attempted to restore an encrypted backup, but didn't provide a valid encryption key. Check your encryption key and retry. For more information, see [Restoring a database](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Using.Restore).   | 
|  Please reissue task with correct type and overwrite property  |  If you attempt to back up your database and provide the name of a file that already exists, but set the overwrite property to false, the save operation fails. To fix this error, either provide the name of a file that doesn't already exist, or set the overwrite property to true. For more information, see [Backing up a database](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Using.Backup). It's also possible that you intended to restore your database, but called the `rds_backup_database` stored procedure accidentally. In that case, call the `rds_restore_database` stored procedure instead. For more information, see [Restoring a database](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Using.Restore). If you intended to restore your database and called the `rds_restore_database` stored procedure, make sure that you provided the name of a valid backup file. For more information, see [Using native backup and restore](SQLServer.Procedural.Importing.Native.Using.md).  | 
|  Please specify a bucket that is in the same region as RDS instance  |  You can't back up to, or restore from, an Amazon S3 bucket in a different AWS Region from your Amazon RDS DB instance. You can use Amazon S3 replication to copy the backup file to the correct AWS Region. For more information, see [Cross-Region replication](https://docs.aws.amazon.com/AmazonS3/latest/userguide/crr.html) in the Amazon S3 documentation.  | 
|  The specified bucket does not exist  | Verify that you have provided the correct ARN for your bucket and file, in the correct format.  For more information, see [Using native backup and restore](SQLServer.Procedural.Importing.Native.Using.md).  | 
|  User <ARN> is not authorized to perform <kms action> on resource <ARN>  |  You requested an encrypted operation, but didn't provide correct AWS KMS permissions. Verify that you have the correct permissions, or add them.  For more information, see [Setting up for native backup and restore](SQLServer.Procedural.Importing.Native.Enabling.md).  | 
|  The Restore task is unable to restore from more than 10 backup file(s). Please reduce the number of files matched and try again.  |  Reduce the number of files that you're trying to restore from. You can make each individual file larger if necessary.  | 
|  Database '*database\_name*' already exists. Two databases that differ only by case or accent are not allowed. Choose a different database name.  |  You can't restore a database with the same name as an existing database. Database names are unique.  | 

# Importing and exporting SQL Server data using other methods
<a name="SQLServer.Procedural.Importing.Snapshots"></a>

Following, you can find information about using snapshots to import your Microsoft SQL Server data to Amazon RDS. You can also find information about using snapshots to export your data from an RDS DB instance running SQL Server. 

If your scenario supports it, it's easier to move data in and out of Amazon RDS by using the native backup and restore functionality. For more information, see [Importing and exporting SQL Server databases using native backup and restore](SQLServer.Procedural.Importing.md). 

**Note**  
Amazon RDS for Microsoft SQL Server doesn't support importing data into the `msdb` database. 

## Importing data into RDS for SQL Server by using a snapshot
<a name="SQLServer.Procedural.Importing.Procedure"></a>

**To import data into a SQL Server DB instance by using a snapshot**

1. Create a DB instance. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).

1. Stop applications from accessing the destination DB instance. 

   If you prevent access to your DB instance while you are importing data, data transfer is faster. Additionally, you don't need to worry about conflicts while data is being loaded if other applications cannot write to the DB instance at the same time. If something goes wrong and you have to roll back to an earlier database snapshot, the only changes that you lose are the imported data. You can import this data again after you resolve the issue. 

   For information about controlling access to your DB instance, see [Controlling access with security groups](Overview.RDSSecurityGroups.md). 

1. Create a snapshot of the target database. 

   If the target database is already populated with data, we recommend that you take a snapshot of the database before you import the data. If something goes wrong with the data import or you want to discard the changes, you can restore the database to its previous state by using the snapshot. For information about database snapshots, see [Creating a DB snapshot for a Single-AZ DB instance for Amazon RDS](USER_CreateSnapshot.md). 
**Note**  
When you take a database snapshot, I/O operations to the database are suspended for a moment (milliseconds) while the backup is in progress. 

1. Disable automated backups on the target database. 

   Disabling automated backups on the target DB instance improves performance while you are importing your data because Amazon RDS doesn't log transactions when automatic backups are disabled. However, there are some things to consider. Automated backups are required to perform a point-in-time recovery. Thus, you can't restore the database to a specific point in time while you are importing data. Additionally, any automated backups that were created on the DB instance are erased unless you choose to retain them. 

   Choosing to retain the automated backups can help protect you against accidental deletion of data. Amazon RDS also saves the database instance properties along with each automated backup to make it easy to recover. Using this option lets you restore a deleted database instance to a specified point in time within the backup retention period even after deleting it. Automated backups are automatically deleted at the end of the specified backup window, just as they are for an active database instance. 

   You can also use previous snapshots to recover the database, and any snapshots that you have taken remain available. For information about automated backups, see [Introduction to backups](USER_WorkingWithAutomatedBackups.md). 

1. Disable foreign key constraints, if applicable. 

    If you need to disable foreign key constraints, you can do so with the following script. 

   ```
   --Disable foreign keys on all tables
   DECLARE @table_name SYSNAME;
   DECLARE @cmd NVARCHAR(MAX);
   DECLARE table_cursor CURSOR FOR SELECT name FROM sys.tables;
   
   OPEN table_cursor;
   FETCH NEXT FROM table_cursor INTO @table_name;
   
   WHILE @@FETCH_STATUS = 0 BEGIN
     SELECT @cmd = 'ALTER TABLE '+QUOTENAME(@table_name)+' NOCHECK CONSTRAINT ALL';
     EXEC (@cmd);
     FETCH NEXT FROM table_cursor INTO @table_name;
   END
   
   CLOSE table_cursor;
   DEALLOCATE table_cursor;
   
   GO
   ```

1. Drop indexes, if applicable. 

1. Disable triggers, if applicable. 

    If you need to disable triggers, you can do so with the following script. 

   ```
   --Disable triggers on all tables
   DECLARE @enable BIT = 0;
   DECLARE @trigger SYSNAME;
   DECLARE @table SYSNAME;
   DECLARE @cmd NVARCHAR(MAX);
   DECLARE trigger_cursor CURSOR FOR
     SELECT trigger_object.name trigger_name, table_object.name table_name
     FROM sysobjects trigger_object
     JOIN sysobjects table_object ON trigger_object.parent_obj = table_object.id
     WHERE trigger_object.type = 'TR';
   
   OPEN trigger_cursor;
   FETCH NEXT FROM trigger_cursor INTO @trigger, @table;
   
   WHILE @@FETCH_STATUS = 0 BEGIN
     IF @enable = 1
       SET @cmd = 'ENABLE ';
     ELSE
       SET @cmd = 'DISABLE ';
   
     SET @cmd = @cmd + 'TRIGGER dbo.'+QUOTENAME(@trigger)+' ON dbo.'+QUOTENAME(@table);
     EXEC (@cmd);
     FETCH NEXT FROM trigger_cursor INTO @trigger, @table;
   END
   
   CLOSE trigger_cursor;
   DEALLOCATE trigger_cursor;
   
   GO
   ```

1. Query the source SQL Server instance for any logins that you want to import to the destination DB instance. 

   SQL Server stores logins and passwords in the `master` database. Because Amazon RDS doesn't grant access to the `master` database, you cannot directly import logins and passwords into your destination DB instance. Instead, you must query the `master` database on the source SQL Server instance to generate a data definition language (DDL) file. This file should include all logins and passwords that you want to add to the destination DB instance. This file also should include role memberships and permissions that you want to transfer. 

   For information about querying the `master` database, see [Transfer logins and passwords between instances of SQL Server](https://learn.microsoft.com/en-us/troubleshoot/sql/database-engine/security/transfer-logins-passwords-between-instances) in the Microsoft Knowledge Base.

   The output of the script is another script that you can run on the destination DB instance. The script in the Knowledge Base article has the following code: 

   ```
   p.type IN 
   ```

   Every place `p.type` appears, use the following code instead: 

   ```
   p.type = 'S' 
   ```
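
   If you saved the generated script to a file, you can make that substitution with a sed one-liner. This is a sketch: the file names are hypothetical, and it assumes the predicate appears in the form `p.type IN (...)`.

   ```shell
   # Create a sample file standing in for the generated login script, then
   # replace its "p.type IN (...)" predicate with "p.type = 'S'".
   # Both file names are hypothetical.
   printf "WHERE p.type IN ('S','G')\n" > create_logins.sql

   sed "s/p\.type IN *([^)]*)/p.type = 'S'/" create_logins.sql > create_logins_rds.sql

   cat create_logins_rds.sql
   ```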

1. Import the data using the method in [Import the data](#ImportData.SQLServer.Import). 

1. Grant applications access to the target DB instance. 

   When your data import is complete, you can grant access to the DB instance to those applications that you blocked during the import. For information about controlling access to your DB instance, see [Controlling access with security groups](Overview.RDSSecurityGroups.md). 

1. Enable automated backups on the target DB instance. 

   For information about automated backups, see [Introduction to backups](USER_WorkingWithAutomatedBackups.md). 

1. Enable foreign key constraints. 

    If you disabled foreign key constraints earlier, you can now enable them with the following script. 

   ```
   --Enable foreign keys on all tables
   DECLARE @table_name SYSNAME;
   DECLARE @cmd NVARCHAR(MAX);
   DECLARE table_cursor CURSOR FOR SELECT name FROM sys.tables;

   OPEN table_cursor;
   FETCH NEXT FROM table_cursor INTO @table_name;

   WHILE @@FETCH_STATUS = 0 BEGIN
      SET @cmd = 'ALTER TABLE '+QUOTENAME(@table_name)+' CHECK CONSTRAINT ALL';
      EXEC (@cmd);
      FETCH NEXT FROM table_cursor INTO @table_name;
   END

   CLOSE table_cursor;
   DEALLOCATE table_cursor;

   GO
   ```

1. Enable indexes, if applicable.

1. Enable triggers, if applicable.

    If you disabled triggers earlier, you can now enable them with the following script. 

   ```
   --Enable triggers on all tables
   DECLARE @enable BIT = 1;
   DECLARE @trigger SYSNAME;
   DECLARE @table SYSNAME;
   DECLARE @cmd NVARCHAR(MAX);
   DECLARE trigger_cursor CURSOR FOR
      SELECT trigger_object.name trigger_name, table_object.name table_name
      FROM sysobjects trigger_object
      JOIN sysobjects table_object ON trigger_object.parent_obj = table_object.id
      WHERE trigger_object.type = 'TR';

   OPEN trigger_cursor;
   FETCH NEXT FROM trigger_cursor INTO @trigger, @table;

   WHILE @@FETCH_STATUS = 0 BEGIN
      IF @enable = 1
         SET @cmd = 'ENABLE';
      ELSE
         SET @cmd = 'DISABLE';

      SET @cmd = @cmd + ' TRIGGER dbo.'+QUOTENAME(@trigger)+' ON dbo.'+QUOTENAME(@table);
      EXEC (@cmd);
      FETCH NEXT FROM trigger_cursor INTO @trigger, @table;
   END

   CLOSE trigger_cursor;
   DEALLOCATE trigger_cursor;

   GO
   ```

### Import the data
<a name="ImportData.SQLServer.Import"></a>

Microsoft SQL Server Management Studio is a graphical SQL Server client that is included in all Microsoft SQL Server editions except the Express Edition. SQL Server Management Studio Express is available from Microsoft as a free download. To find this download, see [the Microsoft website](https://www.microsoft.com/en-us/download). 

**Note**  
SQL Server Management Studio is available only as a Windows-based application.

SQL Server Management Studio includes the following tools, which are useful in importing data to a SQL Server DB instance: 
+ Generate and Publish Scripts Wizard
+ Import and Export Wizard
+ Bulk copy

#### Generate and Publish Scripts Wizard
<a name="ImportData.SQLServer.MgmtStudio.ScriptWizard"></a>

The Generate and Publish Scripts Wizard creates a script that contains the schema of a database, the data itself, or both. You can generate a script for a database in your local SQL Server deployment. You can then run the script to transfer the information that it contains to an Amazon RDS DB instance. 

**Note**  
For databases of 1 GiB or larger, it's more efficient to script only the database schema. You then use the Import and Export Wizard or the bulk copy feature of SQL Server to transfer the data.

For detailed information about the Generate and Publish Scripts Wizard, see the [Microsoft SQL Server documentation](http://msdn.microsoft.com/en-us/library/ms178078%28v=sql.105%29.aspx). 

In the wizard, pay particular attention to the advanced options on the **Set Scripting Options** page to ensure that everything you want your script to include is selected. For example, by default, database triggers are not included in the script.

When the script is generated and saved, you can use SQL Server Management Studio to connect to your DB instance and then run the script.

#### Import and Export Wizard
<a name="ImportData.SQLServer.MgmtStudio.ImportExportWizard"></a>

The Import and Export Wizard creates a special Integration Services package, which you can use to copy data from your local SQL Server database to the destination DB instance. The wizard can filter which tables and even which tuples within a table are copied to the destination DB instance.

**Note**  
The Import and Export Wizard works well for large datasets, but it might not be the fastest way to remotely export data from your local deployment. For an even faster way, consider the SQL Server bulk copy feature.

For detailed information about the Import and Export Wizard, see the [ Microsoft SQL Server documentation](http://msdn.microsoft.com/en-us/library/ms140052%28v=sql.105%29.aspx).

In the wizard, on the **Choose a Destination** page, do the following:
+ For **Server Name**, type the name of the endpoint for your DB instance.
+ For the server authentication mode, choose **Use SQL Server Authentication**.
+ For **User name** and **Password**, type the credentials for the master user that you created for the DB instance.

#### Bulk copy
<a name="ImportData.SQLServer.MgmtStudio.BulkCopy"></a>

The SQL Server bulk copy feature is an efficient means of copying data from a source database to your DB instance. Bulk copy writes the data that you specify to a data file, such as an ASCII file. You can then run bulk copy again to write the contents of the file to the destination DB instance. 

This section uses the **bcp** utility, which is included with all editions of SQL Server. For detailed information about bulk import and export operations, see [the Microsoft SQL Server documentation](http://msdn.microsoft.com/en-us/library/ms187042%28v=sql.105%29.aspx). 

**Note**  
Before you use bulk copy, you must first import your database schema to the destination DB instance. The Generate and Publish Scripts Wizard, described earlier in this topic, is an excellent tool for this purpose. 

The following command connects to the local SQL Server instance. It generates a tab-delimited file of a specified table in the `C:\` root directory of your existing SQL Server deployment. The table is specified by its fully qualified name, and the text file has the same name as the table that is being copied. 

```
bcp dbname.schema_name.table_name out C:\table_name.txt -n -S localhost -U username -P password -b 10000 
```

The preceding code includes the following options:
+ `-n` specifies that the bulk copy uses the native data types of the data to be copied.
+ `-S` specifies the SQL Server instance that the *bcp* utility connects to.
+ `-U` specifies the user name of the account to log in to the SQL Server instance.
+ `-P` specifies the password for the user specified by `-U`.
+ `-b` specifies the number of rows per batch of imported data.

**Note**  
There might be other parameters that are important to your import situation. For example, you might need the `-E` parameter that pertains to identity values. For more information, see the full description of the command line syntax for the **bcp** utility in [the Microsoft SQL Server documentation](http://msdn.microsoft.com/en-us/library/ms162802%28v=sql.105%29.aspx). 

For example, suppose that a database named `store` that uses the default schema, `dbo`, contains a table named `customers`. The user account `admin`, with the password `insecure`, copies 10,000 rows of the `customers` table to a file named `customers.txt`. 

```
bcp store.dbo.customers out C:\customers.txt -n -S localhost -U admin -P insecure -b 10000 
```

After you generate the data file, you can upload the data to your DB instance by using a similar command. Beforehand, create the database and schema on the target DB instance. Then use the `in` argument instead of `out` to specify an input file rather than an output file. Instead of using `localhost` to specify the local SQL Server instance, specify the endpoint of your DB instance. If you use a port other than 1433, specify that too. The user name and password are the master user name and password for your DB instance. The syntax is as follows. 

```
bcp dbname.schema_name.table_name in C:\table_name.txt -n -S endpoint,port -U master_user_name -P master_user_password -b 10000
```

To continue the previous example, suppose that the master user name is `admin`, and the password is `insecure`. The endpoint for the DB instance is `rds.ckz2kqd4qsn1.us-east-1.rds.amazonaws.com`, and you use port 4080. The command is as follows. 

```
bcp store.dbo.customers in C:\customers.txt -n -S rds.ckz2kqd4qsn1.us-east-1.rds.amazonaws.com,4080 -U admin -P insecure -b 10000 
```

**Note**  
As a security best practice, specify a password other than the one shown here.

## Exporting data from RDS for SQL Server
<a name="SQLServer.Procedural.Exporting"></a>

You can choose one of the following options to export data from an RDS for SQL Server DB instance:
+ **Native database backup using a full backup file (.bak)** – Using .bak files to back up databases is heavily optimized and is usually the fastest way to export data. For more information, see [Importing and exporting SQL Server databases using native backup and restore](SQLServer.Procedural.Importing.md). 
+ **SQL Server Import and Export Wizard** – For more information, see [SQL Server Import and Export Wizard](#SQLServer.Procedural.Exporting.SSIEW). 
+ **SQL Server Generate and Publish Scripts Wizard and bcp utility** – For more information, see [SQL Server Generate and Publish Scripts Wizard and bcp utility](#SQLServer.Procedural.Exporting.SSGPSW). 

### SQL Server Import and Export Wizard
<a name="SQLServer.Procedural.Exporting.SSIEW"></a>

You can use the SQL Server Import and Export Wizard to copy one or more tables, views, or queries from your RDS for SQL Server DB instance to another data store. This choice is best if the target data store is not SQL Server. For more information, see [ SQL Server Import and Export Wizard](http://msdn.microsoft.com/en-us/library/ms141209%28v=sql.110%29.aspx) in the SQL Server documentation. 

The SQL Server Import and Export Wizard is available as part of Microsoft SQL Server Management Studio. This graphical SQL Server client is included in all Microsoft SQL Server editions except the Express Edition. SQL Server Management Studio is available only as a Windows-based application. SQL Server Management Studio Express is available from Microsoft as a free download. To find this download, see [the Microsoft website](http://www.microsoft.com/en-us/search/Results.aspx?q=sql%20server%20management%20studio). 

**To use the SQL Server Import and Export Wizard to export data**

1. In SQL Server Management Studio, connect to your RDS for SQL Server DB instance. For details on how to do this, see [Connecting to your Microsoft SQL Server DB instance](USER_ConnectToMicrosoftSQLServerInstance.md). 

1. In **Object Explorer**, expand **Databases**, open the context (right-click) menu for the source database, choose **Tasks**, and then choose **Export Data**. The wizard appears. 

1. On the **Choose a Data Source** page, do the following:

   1. For **Data source**, choose **SQL Server Native Client 11.0**. 

   1. Verify that the **Server name** box shows the endpoint of your RDS for SQL Server DB instance.

   1. Select **Use SQL Server Authentication**. For **User name** and **Password**, type the master user name and password of your DB instance.

   1. Verify that the **Database** box shows the database from which you want to export data.

   1. Choose **Next**.

1. On the **Choose a Destination** page, do the following:

   1. For **Destination**, choose **SQL Server Native Client 11.0**. 
**Note**  
Other target data sources are available. These include .NET Framework data providers, OLE DB providers, SQL Server Native Client providers, ADO.NET providers, Microsoft Office Excel, Microsoft Office Access, and the Flat File source. If you choose to target one of these data sources, skip the remainder of step 4. For details on the connection information to provide next, see [Choose a destination](http://msdn.microsoft.com/en-us/library/ms178430%28v=sql.110%29.aspx) in the SQL Server documentation. 

   1. For **Server name**, type the server name of the target SQL Server DB instance. 

   1. Choose the appropriate authentication type. Type a user name and password if necessary. 

   1. For **Database**, choose the name of the target database, or choose **New** to create a new database to contain the exported data. 

      If you choose **New**, see [Create database](http://msdn.microsoft.com/en-us/library/ms183323%28v=sql.110%29.aspx) in the SQL Server documentation for details on the database information to provide.

   1. Choose **Next**.

1. On the **Table Copy or Query** page, choose **Copy data from one or more tables or views** or **Write a query to specify the data to transfer**. Choose **Next**. 

1. If you chose **Write a query to specify the data to transfer**, you see the **Provide a Source Query** page. Type or paste in a SQL query, and then choose **Parse** to verify it. Once the query validates, choose **Next**. 

1. On the **Select Source Tables and Views** page, do the following:

   1. Select the tables and views that you want to export, or verify that the query you provided is selected.

   1. Choose **Edit Mappings** and specify database and column mapping information. For more information, see [Column mappings](http://msdn.microsoft.com/en-us/library/ms189660%28v=sql.110%29.aspx) in the SQL Server documentation. 

   1. (Optional) To see a preview of data to be exported, select the table, view, or query, and then choose **Preview**.

   1. Choose **Next**.

1. On the **Run Package** page, verify that **Run immediately** is selected. Choose **Next**. 

1. On the **Complete the Wizard** page, verify that the data export details are as you expect. Choose **Finish**. 

1. On the **The execution was successful** page, choose **Close**. 

### SQL Server Generate and Publish Scripts Wizard and bcp utility
<a name="SQLServer.Procedural.Exporting.SSGPSW"></a>

You can use the SQL Server Generate and Publish Scripts Wizard to create scripts for an entire database or just selected objects. You can run these scripts on a target SQL Server DB instance to recreate the scripted objects. You can then use the bcp utility to bulk export the data for the selected objects to the target DB instance. This choice is best if you want to move a whole database (including objects other than tables) or large quantities of data between two SQL Server DB instances. For a full description of the bcp command-line syntax, see [bcp utility](http://msdn.microsoft.com/en-us/library/ms162802%28v=sql.110%29.aspx) in the Microsoft SQL Server documentation. 

The SQL Server Generate and Publish Scripts Wizard is available as part of Microsoft SQL Server Management Studio. This graphical SQL Server client is included in all Microsoft SQL Server editions except the Express Edition. SQL Server Management Studio is available only as a Windows-based application. SQL Server Management Studio Express is available from Microsoft as a [free download](http://www.microsoft.com/en-us/search/Results.aspx?q=sql%20server%20management%20studio). 

**To use the SQL Server Generate and Publish Scripts Wizard and the bcp utility to export data**

1. In SQL Server Management Studio, connect to your RDS for SQL Server DB instance. For details on how to do this, see [Connecting to your Microsoft SQL Server DB instance](USER_ConnectToMicrosoftSQLServerInstance.md). 

1. In **Object Explorer**, expand the **Databases** node and select the database you want to script. 

1. Follow the instructions in [Generate and publish scripts Wizard](http://msdn.microsoft.com/en-us/library/bb895179%28v=sql.110%29.aspx) in the SQL Server documentation to create a script file.

1. In SQL Server Management Studio, connect to your target SQL Server DB instance.

1. With the target SQL Server DB instance selected in **Object Explorer**, on the **File** menu, choose **Open**, choose **File**, and then open the script file. 

1. If you have scripted the entire database, review the CREATE DATABASE statement in the script. Make sure that the database is being created in the location and with the parameters that you want. For more information, see [CREATE DATABASE](http://msdn.microsoft.com/en-us/library/ms176061%28v=sql.110%29.aspx) in the SQL Server documentation. 

1. If you are creating database users in the script, check to see if server logins exist on the target DB instance for those users. If not, create logins for those users; the scripted commands to create the database users fail otherwise. For more information, see [Create a login](http://msdn.microsoft.com/en-us/library/aa337562%28v=sql.110%29.aspx) in the SQL Server documentation.

1. Choose **Execute** on the SQL Editor menu to run the script file and create the database objects. When the script finishes, verify that all database objects exist as expected.

1. Use the bcp utility to export data from the RDS for SQL Server DB instance into files. Open a command prompt and type the following command.

   ```
   bcp database_name.schema_name.table_name out data_file -n -S aws_rds_sql_endpoint -U username -P password
   ```

   The preceding code includes the following options:
   + *table\_name* is the name of one of the tables that you've recreated in the target database and now want to populate with data. 
   + *data\_file* is the full path and name of the data file to be created.
   + `-n` specifies that the bulk copy uses the native data types of the data to be copied.
   + `-S` specifies the SQL Server DB instance to export from.
   + `-U` specifies the user name to use when connecting to the SQL Server DB instance.
   + `-P` specifies the password for the user specified by `-U`.

   The following shows an example command. 

   ```
   bcp world.dbo.city out C:\Users\JohnDoe\city.dat -n -S sql-jdoe.1234abcd.us-west-2.rds.amazonaws.com,1433 -U JohnDoe -P ClearTextPassword
   ```

   Repeat this step until you have data files for all of the tables you want to export. 
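
   Because this step repeats once per table, it lends itself to a small loop. The following sketch writes one **bcp** export command per table into a file that you can review and then run. The endpoint, credentials, and table names continue the earlier example and are placeholders for your own values.

   ```shell
   # Write one bcp export command per table to a reviewable script file.
   # Endpoint, credentials, and table names are placeholders.
   ENDPOINT="sql-jdoe.1234abcd.us-west-2.rds.amazonaws.com,1433"
   TABLES="city country countrylanguage"

   for table in $TABLES; do
     echo "bcp world.dbo.${table} out ${table}.dat -n -S ${ENDPOINT} -U JohnDoe -P ClearTextPassword"
   done > export_commands.sh

   cat export_commands.sh
   ```

   After reviewing `export_commands.sh`, you can run it with `sh export_commands.sh` once the placeholders match your environment.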

1. Prepare your target DB instance for bulk import of data by following the instructions at [Basic guidelines for bulk importing data](http://msdn.microsoft.com/en-us/library/ms189989%28v=sql.110%29.aspx) in the SQL Server documentation. 

1. Decide on a bulk import method to use after considering performance and other concerns discussed in [About bulk import and bulk export operations](http://msdn.microsoft.com/en-us/library/ms187042%28v=sql.105%29.aspx) in the SQL Server documentation. 

1. Bulk import the data from the data files that you created using the bcp utility. To do so, follow the instructions at either [Import and export bulk data by using the bcp utility](http://msdn.microsoft.com/en-us/library/aa337544%28v=sql.110%29.aspx) or [Import bulk data by using BULK INSERT or OPENROWSET(BULK...)](http://msdn.microsoft.com/en-us/library/ms175915%28v=sql.110%29.aspx) in the SQL Server documentation, depending on what you decided in step 11. 

# Using BCP utility from Linux to import and export data
<a name="SQLServer.Procedural.Importing.BCP.Linux"></a>

The BCP (Bulk Copy Program) utility provides an efficient way to transfer large amounts of data between your RDS for SQL Server DB instance and data files. You can use BCP from Linux environments to perform bulk data operations, making it useful for data migration, ETL processes, and regular data transfers.

BCP supports both importing data from files into SQL Server tables and exporting data from SQL Server tables to files. This is particularly effective for transferring structured data in various formats including delimited text files.

## Prerequisites
<a name="SQLServer.Procedural.Importing.BCP.Linux.Prerequisites"></a>

Before using BCP with your RDS for SQL Server DB instance from Linux, ensure you have the following:
+ A Linux environment with network connectivity to your RDS for SQL Server DB instance
+ Microsoft SQL Server command-line tools installed on your Linux system, including:
  + sqlcmd - SQL Server command-line query tool
  + bcp - Bulk Copy Program utility
+ Valid credentials for your RDS for SQL Server DB instance
+ Network access configured through security groups to allow connections on the SQL Server port (typically 1433)
+ Appropriate database permissions for the operations you want to perform
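
A quick connectivity check can confirm the network prerequisites before you run bcp. The sketch below splits a bcp-style `endpoint,port` server string with POSIX parameter expansion; the endpoint is a placeholder, and the actual probe is left commented out because it requires network access.

```shell
# Split a bcp-style "endpoint,port" server string into host and port.
# The endpoint is a placeholder for your own DB instance.
SERVER="mydb.example.us-east-1.rds.amazonaws.com,1433"

case "$SERVER" in
  *,*) host="${SERVER%,*}"; port="${SERVER#*,}" ;;
  *)   host="$SERVER"; port=1433 ;;   # bcp defaults to port 1433
esac

echo "host=${host} port=${port}"

# Uncomment to probe the port (requires bash and network access):
# timeout 5 bash -c "</dev/tcp/${host}/${port}" && echo reachable
```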

## Installing SQL Server command-line tools on Linux
<a name="SQLServer.Procedural.Importing.BCP.Linux.Installing"></a>

To use BCP from Linux, you need to install the Microsoft SQL Server command-line tools. For detailed installation instructions for your specific Linux distribution, see the following Microsoft documentation:
+ [Install sqlcmd and bcp the SQL Server command-line tools on Linux](https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-tools)
+ [bcp utility](https://docs.microsoft.com/en-us/sql/tools/bcp-utility) - Complete reference for the BCP utility

After installation, ensure the tools are available in your PATH by running:

```
bcp -v
sqlcmd -?
```

## Exporting data from RDS for SQL Server
<a name="SQLServer.Procedural.Importing.BCP.Linux.Exporting"></a>

You can use BCP to export data from your RDS for SQL Server DB instance to files on your Linux system. This is useful for creating backups, data analysis, or preparing data for migration.

### Basic export syntax
<a name="SQLServer.Procedural.Importing.BCP.Linux.Exporting.Basic"></a>

The basic syntax for exporting data using BCP is:

```
bcp database.schema.table out output_file -S server_name -U username -P password [options]
```

Where:
+ `database.schema.table` - The fully qualified table name
+ `output_file` - The path and name of the output file
+ `server_name` - Your RDS for SQL Server endpoint
+ `username` - Your database username
+ `password` - Your database password

### Export example
<a name="SQLServer.Procedural.Importing.BCP.Linux.Exporting.Example"></a>

The following example exports data from a table named `customers` in the `sales` database:

```
bcp sales.dbo.customers out /home/user/customers.txt \
    -S mydb.cluster-abc123.us-east-1.rds.amazonaws.com \
    -U admin \
    -P mypassword \
    -c \
    -t "|" \
    -r "\n"
```

This command:
+ Exports data from the `customers` table
+ Saves the output to `/home/user/customers.txt`
+ Uses character format (`-c`)
+ Uses a pipe (`|`) as the field delimiter (`-t "|"`)
+ Uses newline as the row delimiter (`-r "\n"`)
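
After an export, a quick field-count check can catch delimiter problems before you attempt an import. The following sketch uses a made-up sample file in place of a real `customers.txt` export:

```shell
# Verify that every line of a pipe-delimited export has the same field count.
# The sample file and its contents stand in for a real export.
printf '1|Alice|Seattle\n2|Bob|Boston\n3|Carol|Denver\n' > customers.txt

expected_fields=3
bad_lines=$(awk -F'|' -v n="$expected_fields" 'NF != n' customers.txt | wc -l)
row_count=$(wc -l < customers.txt)

echo "rows=${row_count} malformed=${bad_lines}"
```

A nonzero malformed count usually means a field value contains the delimiter itself; in that case, choose a delimiter that can't appear in the data or use a format file.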

## Importing data to RDS for SQL Server
<a name="SQLServer.Procedural.Importing.BCP.Linux.Importing"></a>

You can use BCP to import data from files on your Linux system into your RDS for SQL Server DB instance. This is useful for data migration, loading test data, or regular data updates.

### Basic import syntax
<a name="SQLServer.Procedural.Importing.BCP.Linux.Importing.Basic"></a>

The basic syntax for importing data using BCP is:

```
bcp database.schema.table in input_file -S server_name -U username -P password [options]
```

Where:
+ `database.schema.table` - The fully qualified destination table name
+ `input_file` - The path and name of the input file
+ `server_name` - Your RDS for SQL Server endpoint
+ `username` - Your database username
+ `password` - Your database password

### Import example
<a name="SQLServer.Procedural.Importing.BCP.Linux.Importing.Example"></a>

The following example imports data from a file into a table named `customers`:

```
bcp sales.dbo.customers in /home/user/customers.txt \
    -S mydb.cluster-abc123.us-east-1.rds.amazonaws.com \
    -U admin \
    -P mypassword \
    -c \
    -t "|" \
    -r "\n" \
    -b 1000
```

This command:
+ Imports data into the `customers` table
+ Reads data from `/home/user/customers.txt`
+ Uses character format (`-c`)
+ Uses a pipe (`|`) as the field delimiter (`-t "|"`)
+ Uses newline as the row delimiter (`-r "\n"`)
+ Processes data in batches of 1000 rows (`-b 1000`)

## Common BCP options
<a name="SQLServer.Procedural.Importing.BCP.Linux.Options"></a>

BCP provides numerous options to control data formatting and transfer behavior. The following table describes commonly used options:


| Option | Description | 
| --- | --- | 
| -c | Uses character data type for all columns | 
| -n | Uses native database data types | 
| -t | Specifies the field delimiter (default is tab) | 
| -r | Specifies the row delimiter (default is newline) | 
| -b | Specifies the batch size for bulk operations | 
| -F | Specifies the first row to export or import | 
| -L | Specifies the last row to export or import | 
| -e | Specifies an error file to capture rejected rows | 
| -f | Specifies a format file for data formatting | 
| -q | Uses quoted identifiers for object names | 

## Best practices and considerations
<a name="SQLServer.Procedural.Importing.BCP.Linux.BestPractices"></a>

When using BCP with RDS for SQL Server from Linux, consider the following best practices:
+ **Use batch processing** – For large datasets, use the `-b` option to process data in batches. This improves performance and allows for better error recovery.
+ **Handle errors gracefully** – Use the `-e` option to capture error information and rejected rows in a separate file for analysis.
+ **Choose appropriate data formats** – Use character format (`-c`) for cross-platform compatibility or native format (`-n`) for better performance when both source and destination are SQL Server.
+ **Secure your credentials** – Avoid putting passwords directly in command lines. Consider using environment variables or configuration files with appropriate permissions.
+ **Test with small datasets** – Before processing large amounts of data, test your BCP commands with smaller datasets to verify formatting and connectivity.
+ **Monitor network connectivity** – Ensure stable network connections, especially for large data transfers. Consider using tools like `screen` or `tmux` for long-running operations.
+ **Validate data integrity** – After data transfer, verify row counts and sample data to ensure the operation completed successfully.
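
For the credentials recommendation above, one approach is to keep the password in an environment variable and mask it in anything you log. This is a sketch; the variable names and values are arbitrary examples.

```shell
# Read credentials from environment variables instead of hardcoding them,
# and mask the password when logging the command. Values are examples only.
BCP_USER="admin"
BCP_PASSWORD="example-only-password"
export BCP_USER BCP_PASSWORD

cmd="bcp sales.dbo.customers out /tmp/customers.txt -c -S myendpoint -U ${BCP_USER} -P ${BCP_PASSWORD}"
masked="${cmd%%-P *}-P ********"

echo "$masked"
```

Note that the password still appears in the process list while bcp runs; for stricter handling, omit `-P` and let bcp prompt for the password.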

## Troubleshooting common issues
<a name="SQLServer.Procedural.Importing.BCP.Linux.Troubleshooting"></a>

The following table describes common issues you might encounter when using BCP from Linux and their solutions:


| Issue | Solution | 
| --- | --- | 
| Connection timeout or network errors | Verify your Amazon RDS endpoint, security group settings, and network connectivity. Make sure the SQL Server port (typically 1433) is accessible from your Linux system. | 
| Authentication failures | Verify your username and password. Make sure the database user has appropriate permissions for the operations you're performing. | 
| Data format errors | Check your field and row delimiters. Make sure the data format matches what BCP expects. Use format files for complex data structures. | 
| Permission denied errors | Make sure your database user has INSERT permissions for imports or SELECT permissions for exports on the target tables. | 
| Large file handling issues | Use batch processing with the -b option. Consider splitting large files into smaller chunks for better performance and error recovery. | 
| Character encoding problems | Ensure your data files use compatible character encoding. Use the -c option for character format or specify appropriate code pages. | 
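
For the large-file row in the table above, the standard `split` utility is one way to break an export into fixed-size chunks that you then import one at a time. The sketch below uses generated sample data in place of a real export file:

```shell
# Split a large data file into 1000-line chunks for batched import.
# The generated file stands in for a real multi-gigabyte export.
seq 1 2500 > bigtable.txt

split -l 1000 bigtable.txt chunk_

# Each resulting chunk_* file can be imported with its own "bcp ... in" command.
ls chunk_*
```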