

# Additional features for Microsoft SQL Server on Amazon RDS
<a name="User.SQLServer.AdditionalFeatures"></a>

In the following sections, you can find information about augmenting Amazon RDS instances running the Microsoft SQL Server DB engine.

**Topics**
+ [Using Password Policy for SQL Server logins on RDS for SQL Server](SQLServer.Concepts.General.PasswordPolicy.Using.md)
+ [Integrating an Amazon RDS for SQL Server DB instance with Amazon S3](User.SQLServer.Options.S3-integration.md)
+ [Using Database Mail on Amazon RDS for SQL Server](SQLServer.DBMail.md)
+ [Instance store support for the tempdb database on Amazon RDS for SQL Server](SQLServer.InstanceStore.md)
+ [Using extended events with Amazon RDS for Microsoft SQL Server](SQLServer.ExtendedEvents.md)
+ [Access to transaction log backups with RDS for SQL Server](USER.SQLServer.AddlFeat.TransactionLogAccess.md)

# Using Password Policy for SQL Server logins on RDS for SQL Server
<a name="SQLServer.Concepts.General.PasswordPolicy.Using"></a>

Amazon RDS allows you to set the password policy for your Amazon RDS DB instance running Microsoft SQL Server. Use this to set complexity, length, and lockout requirements for logins that use SQL Server Authentication to authenticate to your DB instance.

## Key terms
<a name="SQLServer.Concepts.General.PasswordPolicy.Using.KT"></a>

**Login**  
In SQL Server, a server-level principal that can authenticate to a database instance is referred to as a **login**. Other database engines might refer to this principal as a *user*. In RDS for SQL Server, a login can authenticate using SQL Server Authentication or Windows Authentication.

**SQL Server login**  
A login that uses a username and password to authenticate using SQL Server Authentication is a SQL Server login. The password policy you configure through DB parameters only applies to SQL Server logins.

**Windows login**  
A login that is based on a Windows principal and authenticates using Windows Authentication is a Windows login. You can configure the password policy for your Windows logins in Active Directory. For more information, see [Working with Active Directory with RDS for SQL Server](User.SQLServer.ActiveDirectoryWindowsAuth.md).

## Enabling and disabling policy for each login
<a name="SQLServer.Concepts.General.PasswordPolicy.EnableDisable"></a>

 Each SQL Server login has flags for `CHECK_POLICY` and `CHECK_EXPIRATION`. By default, new logins are created with `CHECK_POLICY` set to `ON` and `CHECK_EXPIRATION` set to `OFF`. 

If `CHECK_POLICY` is enabled for a login, RDS for SQL Server validates the password against the complexity and minimum length requirements. Lockout policies also apply. An example T-SQL statement to enable `CHECK_POLICY` and `CHECK_EXPIRATION`: 

```
ALTER LOGIN [master_user] WITH CHECK_POLICY = ON, CHECK_EXPIRATION = ON;
```

If `CHECK_EXPIRATION` is enabled, passwords are subject to password age policies. The following T-SQL statement checks whether `CHECK_POLICY` and `CHECK_EXPIRATION` are set:

```
SELECT name, is_policy_checked, is_expiration_checked FROM sys.sql_logins;
```

## Password policy parameters
<a name="SQLServer.Concepts.General.PasswordPolicy.PWDPolicyParams"></a>

All password policy parameters are dynamic and don't require a DB instance reboot to take effect. The following table lists the DB parameters that you can set to modify the password policy for SQL Server logins:



| DB parameter | Description | Allowed Values | Default Value | 
| --- | --- | --- | --- | 
| `rds.password_complexity_enabled` | Password complexity requirements must be satisfied when creating or changing passwords for SQL Server logins. The following constraints must be met: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Concepts.General.PasswordPolicy.Using.html) | 0, 1 | 0 | 
| `rds.password_min_length` | The minimum number of characters required in a password for a SQL Server login. | 0-14 | 0 | 
| `rds.password_min_age` | The minimum number of days a SQL Server login password must be used before the user can change it. Passwords can be changed immediately when set to 0. | 0-998 | 0 | 
| `rds.password_max_age` | The maximum number of days a SQL Server login password can be used, after which the user must change it. Passwords never expire when set to 0. | 0-999 | 42 | 
| `rds.password_lockout_threshold` | The number of consecutive failed login attempts that cause a SQL Server login to become locked out. | 0-999 | 0 | 
| `rds.password_lockout_duration` | The number of minutes a locked-out SQL Server login must wait before being unlocked. | 1-60 | 10 | 
| `rds.password_lockout_reset_counter_after` | The number of minutes that must elapse after a failed login attempt before the failed login attempt counter is reset to 0. | 1-60 | 10 | 

**Note**  
For more information about SQL Server password policy, see [Password Policy](https://learn.microsoft.com/en-us/sql/relational-databases/security/password-policy).  
The password complexity and minimum length policies also apply to DB users in contained databases. For more information, see [Contained Databases](https://learn.microsoft.com/en-us/sql/relational-databases/databases/contained-databases).

The following constraints apply to the password policy parameters:
+ The `rds.password_min_age` parameter must be less than the `rds.password_max_age` parameter, unless `rds.password_max_age` is set to 0.
+ The `rds.password_lockout_reset_counter_after` parameter must be less than or equal to the `rds.password_lockout_duration` parameter.
+ If `rds.password_lockout_threshold` is set to 0, `rds.password_lockout_duration` and `rds.password_lockout_reset_counter_after` do not apply.
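As an illustration only, the following sketch (a hypothetical helper, not an RDS API) checks a proposed set of parameter values against these constraints before you apply them to a DB parameter group:

```python
def validate_password_policy(min_age, max_age, lockout_threshold,
                             lockout_duration, reset_counter_after):
    """Return a list of constraint violations (empty list = valid)."""
    problems = []
    # rds.password_min_age must be less than rds.password_max_age,
    # unless rds.password_max_age is 0 (passwords never expire).
    if max_age != 0 and min_age >= max_age:
        problems.append("min_age must be less than max_age")
    # Lockout parameters only apply when the threshold is nonzero.
    if lockout_threshold != 0 and reset_counter_after > lockout_duration:
        problems.append("reset_counter_after must be <= lockout_duration")
    return problems
```

For example, the default values (`min_age=0`, `max_age=42`, `lockout_threshold=0`, `lockout_duration=10`, `reset_counter_after=10`) produce no violations.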

### Considerations for existing logins
<a name="SQLServer.Concepts.General.PasswordPolicy.ExistingLogins"></a>

After modifying the password policy on an instance, existing passwords for logins are **not** retroactively evaluated against the new password complexity and length requirements. Only new passwords are validated against the new policy. 

SQL Server **does** evaluate existing passwords for age requirements.

Passwords can expire immediately after you modify the password policy. For example, suppose that a login has `CHECK_EXPIRATION` enabled and its password was last changed 100 days ago. If you set the `rds.password_max_age` parameter to 5 days, the password immediately expires, and the user must change the password at the next login attempt.
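The expiration check in this scenario comes down to simple date arithmetic. The following sketch is a hypothetical helper for illustration; the exact boundary behavior is an assumption:

```python
from datetime import date, timedelta

def password_expired(last_changed, today, max_age_days):
    """A password is considered expired when rds.password_max_age is
    nonzero and the password is older than max_age_days days
    (0 means passwords never expire)."""
    if max_age_days == 0:
        return False
    return (today - last_changed).days > max_age_days

# The scenario from the text: changed 100 days ago, max age lowered to 5.
today = date(2024, 1, 1)
print(password_expired(today - timedelta(days=100), today, 5))  # expired
```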

**Note**  
RDS for SQL Server doesn't support password history policies. History policies prevent logins from reusing previously used passwords.

### Considerations for Multi-AZ deployments
<a name="SQLServer.Concepts.General.PasswordPolicy.MAZPasswords"></a>

The failed login attempt counter and lockout state for Multi-AZ instances don't replicate between nodes. If a login is locked out when a Multi-AZ instance fails over, the login might already be unlocked on the new node.

## Password considerations for the master login
<a name="SQLServer.Concepts.General.PasswordPolicy.MasterLogin"></a>

When you create an RDS for SQL Server DB instance, the master user password is not evaluated against the password policy. A new master user password is also not evaluated against the password policy when you perform operations on the master user, specifically when setting `MasterUserPassword` in the `ModifyDBInstance` command. In both cases, you can set a password for the master user that doesn't satisfy your password policy, and the operation still succeeds. If the policy isn't satisfied, RDS attempts to raise an RDS event with the recommendation to set a strong password. Take care to use only strong passwords for the master user. 

RDS attempts to generate the following event messages when the master user password does not meet the password policy requirements:
+ The master user was created, but the password doesn't meet the minimum length requirement of your password policy. Consider using a stronger password.
+ The master user was created, but the password doesn't meet the complexity requirement of your password policy. Consider using a stronger password.
+ The master user password was reset, but the password doesn't meet the minimum length requirement of your password policy. Consider using a stronger password.
+ The master user password was reset, but the password doesn't meet the complexity requirement of your password policy. Consider using a stronger password.

By default, the master user is created with `CHECK_POLICY` and `CHECK_EXPIRATION` set to `OFF`. To apply the password policy to the master user, manually enable these flags for the master user after DB instance creation. After you enable these flags, modify the master user password directly in SQL Server (for example, using T-SQL statements or SQL Server Management Studio) so that the new password is validated against the password policy.

**Note**  
If the master user gets locked out, you can unlock the user by resetting the master user password using the `ModifyDBInstance` command.

### Modifying the master user password
<a name="SQLServer.Concepts.General.PasswordPolicy.MasterLogin.Reset"></a>

You can modify the master user password by using the [ModifyDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) command.

**Note**  
When you reset the master user password, RDS resets various permissions for the master user and the master user might lose certain permissions. Resetting the master user password also unlocks the master user, if it was locked out.

RDS validates the new master user password and attempts to emit an RDS event if the password does not satisfy the policy. RDS sets the password even if it does not satisfy the password policy. 

# Integrating an Amazon RDS for SQL Server DB instance with Amazon S3
<a name="User.SQLServer.Options.S3-integration"></a>

You can transfer files between a DB instance running Amazon RDS for SQL Server and an Amazon S3 bucket. By doing this, you can use Amazon S3 with SQL Server features such as BULK INSERT. For example, you can download .csv, .xml, .txt, and other files from Amazon S3 to the DB instance host and import the data from `D:\S3\` into the database. All files are stored in `D:\S3\` on the DB instance.

The following limitations apply:

**Note**  
Traffic between the RDS host and S3 routes through VPC endpoints in RDS internal VPCs for all SQL Server features that use S3. This traffic doesn't use the RDS instance endpoint ENI. S3 bucket policies can't restrict RDS traffic by networking conditions.
+ Files in the `D:\S3` folder are deleted on the standby replica after a failover on Multi-AZ instances. For more information, see [Multi-AZ limitations for S3 integration](#S3-MAZ).
+ The DB instance and the S3 bucket must be in the same AWS Region.
+ If you run more than one S3 integration task at a time, the tasks run sequentially, not in parallel.
**Note**  
S3 integration tasks share the same queue as native backup and restore tasks. At maximum, you can have only two tasks in progress at any time in this queue. Therefore, two running native backup and restore tasks will block any S3 integration tasks.
+ You must re-enable the S3 integration feature on restored instances. S3 integration isn't propagated from the source instance to the restored instance. Files in `D:\S3` are deleted on a restored instance.
+ Downloading to the DB instance is limited to 100 files. In other words, there can't be more than 100 files in `D:\S3\`.
+ Only files without file extensions or with the following file extensions are supported for download: .abf, .asdatabase, .bcp, .configsettings, .csv, .dat, .deploymentoptions, .deploymenttargets, .fmt, .info, .ispac, .lst, .tbl, .txt, .xml, and .xmla.
+ The S3 bucket must have the same owner as the related AWS Identity and Access Management (IAM) role. Therefore, cross-account S3 integration isn't supported.
+ The S3 bucket can't be open to the public.
+ The file size for uploads from RDS to S3 is limited to 50 GB per file.
+ The file size for downloads from S3 to RDS is limited to the maximum supported by S3.
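For illustration, the following sketch (a hypothetical helper, not part of RDS) applies two of these limitations, the download file-extension allow list and the 100-file limit, to a candidate download:

```python
import os

# Extensions the limitations above list as downloadable
# (extensionless files are also allowed).
ALLOWED_EXTENSIONS = {
    ".abf", ".asdatabase", ".bcp", ".configsettings", ".csv", ".dat",
    ".deploymentoptions", ".deploymenttargets", ".fmt", ".info", ".ispac",
    ".lst", ".tbl", ".txt", ".xml", ".xmla",
}
MAX_FILES = 100  # D:\S3\ can hold at most 100 downloaded files

def can_download(filename, current_file_count):
    """Check a candidate download against the documented limitations."""
    if current_file_count >= MAX_FILES:
        return False
    ext = os.path.splitext(filename)[1].lower()
    return ext == "" or ext in ALLOWED_EXTENSIONS
```

For example, `can_download("bulk_data.csv", 0)` passes, while a `.zip` file or a 101st file is rejected.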

**Topics**
+ [Prerequisites for integrating RDS for SQL Server with S3](Appendix.SQLServer.Options.S3-integration.preparing.md)
+ [Enabling RDS for SQL Server integration with S3](Appendix.SQLServer.Options.S3-integration.enabling.md)
+ [Transferring files between RDS for SQL Server and Amazon S3](Appendix.SQLServer.Options.S3-integration.using.md)
+ [Listing files on the RDS DB instance](Appendix.SQLServer.Options.S3-integration.using.listing-files.md)
+ [Deleting files on the RDS DB instance](Appendix.SQLServer.Options.S3-integration.using.deleting-files.md)
+ [Monitoring the status of a file transfer task](Appendix.SQLServer.Options.S3-integration.using.monitortasks.md)
+ [Canceling a task](Appendix.SQLServer.Options.S3-integration.canceltasks.md)
+ [Multi-AZ limitations for S3 integration](#S3-MAZ)
+ [Disabling RDS for SQL Server integration with S3](Appendix.SQLServer.Options.S3-integration.disabling.md)

For more information on working with files in Amazon S3, see [Getting started with Amazon Simple Storage Service](https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3).

# Prerequisites for integrating RDS for SQL Server with S3
<a name="Appendix.SQLServer.Options.S3-integration.preparing"></a>

Before you begin, find or create the S3 bucket that you want to use. Also, add permissions so that the RDS DB instance can access the S3 bucket. To configure this access, you create both an IAM policy and an IAM role.

## Console
<a name="Appendix.SQLServer.Options.S3-integration.preparing.console"></a>

**To create an IAM policy for access to Amazon S3**

1. In the [IAM Management Console](https://console.aws.amazon.com/iam/home?#home), choose **Policies** in the navigation pane.

1. Create a new policy, and use the **Visual editor** tab for the following steps.

1. For **Service**, enter **S3** and then choose the **S3** service.

1. For **Actions**, choose the following to grant the access that your DB instance requires:
   + `ListAllMyBuckets` – required
   + `ListBucket` – required
   + `GetBucketAcl` – required
   + `GetBucketLocation` – required
   + `GetObject` – required for downloading files from S3 to `D:\S3\`
   + `PutObject` – required for uploading files from `D:\S3\` to S3
   + `ListMultipartUploadParts` – required for uploading files from `D:\S3\` to S3
   + `AbortMultipartUpload` – required for uploading files from `D:\S3\` to S3

1. For **Resources**, the options that display depend on which actions you choose in the previous step. You might see options for **bucket**, **object**, or both. For each of these, add the appropriate Amazon Resource Name (ARN).

   For **bucket**, add the ARN for the bucket that you want to use. For example, if your bucket is named *amzn-s3-demo-bucket*, set the ARN to `arn:aws:s3:::amzn-s3-demo-bucket`.

   For **object**, enter the ARN for the bucket and then choose one of the following:
   + To grant access to all files in the specified bucket, choose **Any** for both **Bucket name** and **Object name**.
   + To grant access to specific files or folders in the bucket, provide ARNs for the specific buckets and objects that you want SQL Server to access. 

1. Follow the instructions in the console until you finish creating the policy.

   The preceding is an abbreviated guide to setting up a policy. For more detailed instructions on creating IAM policies, see [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

**To create an IAM role that uses the IAM policy from the previous procedure**

1. In the [IAM Management Console](https://console.aws.amazon.com/iam/home?#home), choose **Roles** in the navigation pane.

1. Create a new IAM role, and choose the following options as they appear in the console:
   + **AWS service**
   + **RDS**
   + **RDS – Add Role to Database**

   Then choose **Next:Permissions** at the bottom.

1. For **Attach permissions policies**, enter the name of the IAM policy that you previously created. Then choose the policy from the list.

1. Follow the instructions in the console until you finish creating the role.

   The preceding is an abbreviated guide to setting up a role. If you want more detailed instructions on creating roles, see [IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) in the *IAM User Guide*.

## AWS CLI
<a name="Appendix.SQLServer.Options.S3-integration.preparing.CLI"></a>

To grant Amazon RDS access to an Amazon S3 bucket, use the following process:

1. Create an IAM policy that grants Amazon RDS access to an S3 bucket.

1. Create an IAM role that Amazon RDS can assume on your behalf to access your S3 buckets.

   For more information, see [Creating a role to delegate permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.

1. Attach the IAM policy that you created to the IAM role that you created.

**To create the IAM policy**

Include the appropriate actions to grant the access your DB instance requires:
+ `ListAllMyBuckets` – required
+ `ListBucket` – required
+ `GetBucketAcl` – required
+ `GetBucketLocation` – required
+ `GetObject` – required for downloading files from S3 to `D:\S3\`
+ `PutObject` – required for uploading files from `D:\S3\` to S3
+ `ListMultipartUploadParts` – required for uploading files from `D:\S3\` to S3
+ `AbortMultipartUpload` – required for uploading files from `D:\S3\` to S3

1. The following AWS CLI command creates an IAM policy named `rds-s3-integration-policy` with these options. It grants access to a bucket named *amzn-s3-demo-bucket*.  
**Example**  

   For Linux, macOS, or Unix:

   ```
   aws iam create-policy \
   	 --policy-name rds-s3-integration-policy \
   	 --policy-document '{
   	        "Version": "2012-10-17",
   	        "Statement": [
   	            {
   	                "Effect": "Allow",
   	                "Action": "s3:ListAllMyBuckets",
   	                "Resource": "*"
   	            },
   	            {
   	                "Effect": "Allow",
   	                "Action": [
   	                    "s3:ListBucket",
   	                    "s3:GetBucketAcl",
   	                    "s3:GetBucketLocation"
   	                ],
   	                "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
   	            },
   	            {
   	                "Effect": "Allow",
   	                "Action": [
   	                    "s3:GetObject",
   	                    "s3:PutObject",
   	                    "s3:ListMultipartUploadParts",
   	                    "s3:AbortMultipartUpload"
   	                ],
   	                "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/key_prefix/*"
   	            }
   	        ]
   	    }'
   ```

   For Windows:

   Make sure to use the line-continuation character supported by your interface (`^` instead of `\`). Also, in Windows, you must escape all double quotation marks with a `\`. To avoid the need to escape the quotation marks in the JSON, you can save the JSON to a file instead and pass the file in as a parameter. 

   First, create the `policy.json` file with the following permission policy:


   ```
   {
       "Version":"2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "s3:ListAllMyBuckets",
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:ListBucket",
                   "s3:GetBucketAcl",
                   "s3:GetBucketLocation"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:GetObject",
                   "s3:PutObject",
                   "s3:ListMultipartUploadParts",
                   "s3:AbortMultipartUpload"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/key_prefix/*"
           }
       ]
   }
   ```


   Then use the following command to create the policy:

   ```
   aws iam create-policy ^
        --policy-name rds-s3-integration-policy ^
     --policy-document file://file_path/policy.json
   ```

1. After the policy is created, note the Amazon Resource Name (ARN) of the policy. You need the ARN for a later step.

**To create the IAM role**
+ The following AWS CLI command creates the `rds-s3-integration-role` IAM role for this purpose.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws iam create-role \
  	   --role-name rds-s3-integration-role \
  	   --assume-role-policy-document '{
  	     "Version": "2012-10-17",
  	     "Statement": [
  	       {
  	         "Effect": "Allow",
  	         "Principal": {
  	            "Service": "rds.amazonaws.com"
  	          },
  	         "Action": "sts:AssumeRole"
  	       }
  	     ]
  	   }'
  ```

  For Windows:

  Make sure to use the line-continuation character supported by your interface (`^` instead of `\`). Also, in Windows, you must escape all double quotation marks with a `\`. To avoid the need to escape the quotation marks in the JSON, you can save the JSON to a file instead and pass the file in as a parameter. 

  First, create the `assume_role_policy.json` file with the following policy:


  ```
  {
      "Version":"2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Principal": {
                  "Service": [
                      "rds.amazonaws.com"
                  ]
              },
              "Action": "sts:AssumeRole"
          }
      ]
  }
  ```


  Then use the following command to create the IAM role:

  ```
  aws iam create-role ^
       --role-name rds-s3-integration-role ^
       --assume-role-policy-document file://file_path/assume_role_policy.json
  ```  
**Example of using the global condition context key to create the IAM role**  

  We recommend using the [`aws:SourceArn`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn) and [`aws:SourceAccount`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceaccount) global condition context keys in resource-based policies to limit the service's permissions to a specific resource. This is the most effective way to protect against the [confused deputy problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html).

  You might use both global condition context keys and have the `aws:SourceArn` value contain the account ID. In this case, the `aws:SourceAccount` value and the account in the `aws:SourceArn` value must use the same account ID when used in the same policy statement.
  + Use `aws:SourceArn` if you want cross-service access for a single resource.
  + Use `aws:SourceAccount` if you want to allow any resource in that account to be associated with the cross-service use.

  In the policy, make sure to use the `aws:SourceArn` global condition context key with the full Amazon Resource Name (ARN) of the resources accessing the role. For S3 integration, make sure to include the DB instance ARNs, as shown in the following example.

  For Linux, macOS, or Unix:

  ```
  aws iam create-role \
  	   --role-name rds-s3-integration-role \
  	   --assume-role-policy-document '{
  	     "Version": "2012-10-17",
  	     "Statement": [
  	       {
  	         "Effect": "Allow",
  	         "Principal": {
  	            "Service": "rds.amazonaws.com"
  	          },
  	         "Action": "sts:AssumeRole",
                  "Condition": {
                      "StringEquals": {
                          "aws:SourceArn":"arn:aws:rds:Region:my_account_ID:db:db_instance_identifier"
                      }
                  }
  	       }
  	     ]
  	   }'
  ```

  For Windows:

  Add the global condition context key to `assume_role_policy.json`.


  ```
  {
      "Version":"2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Principal": {
                  "Service": [
                      "rds.amazonaws.com"
                  ]
              },
              "Action": "sts:AssumeRole",
              "Condition": {
                  "StringEquals": {
                      "aws:SourceArn":"arn:aws:rds:Region:my_account_ID:db:db_instance_identifier"
                  }
              }
          }
      ]
  }
  ```

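The account-consistency rule for `aws:SourceArn` and `aws:SourceAccount` can be checked mechanically before you submit the trust policy. The following sketch (a hypothetical validator, not part of any AWS SDK) extracts the account ID from a DB instance ARN and compares it with the `aws:SourceAccount` value:

```python
def accounts_consistent(source_arn, source_account):
    """An RDS DB instance ARN has the form
    arn:aws:rds:region:account-id:db:instance-id. When both condition
    keys appear in the same policy statement, the account IDs must match."""
    parts = source_arn.split(":")
    if len(parts) < 6:
        raise ValueError("not a valid ARN")
    return parts[4] == source_account  # field 5 of the ARN is the account ID
```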

**To attach the IAM policy to the IAM role**
+ The following AWS CLI command attaches the policy to the role named `rds-s3-integration-role`. Replace `your-policy-arn` with the policy ARN that you noted in a previous step.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws iam attach-role-policy \
  	   --policy-arn your-policy-arn \
  	   --role-name rds-s3-integration-role
  ```

  For Windows:

  ```
  aws iam attach-role-policy ^
  	   --policy-arn your-policy-arn ^
  	   --role-name rds-s3-integration-role
  ```

# Enabling RDS for SQL Server integration with S3
<a name="Appendix.SQLServer.Options.S3-integration.enabling"></a>

In the following section, you can find how to enable Amazon S3 integration with Amazon RDS for SQL Server. To work with S3 integration, your DB instance must be associated with the IAM role that you created previously. You specify `S3_INTEGRATION` as the feature name when you associate the role.

**Note**  
To add an IAM role to a DB instance, the status of the DB instance must be **available**.

## Console
<a name="Appendix.SQLServer.Options.S3-integration.enabling.console"></a>

**To associate your IAM role with your DB instance**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose the RDS for SQL Server DB instance name to display its details.

1. On the **Connectivity & security** tab, in the **Manage IAM roles** section, choose the IAM role to add for **Add IAM roles to this instance**.

1. For **Feature**, choose **S3_INTEGRATION**.  
![\[Add the S3_INTEGRATION role\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/ora-s3-integration-role.png)

1. Choose **Add role**.

## AWS CLI
<a name="Appendix.SQLServer.Options.S3-integration.enabling.cli"></a>

**To add the IAM role to the RDS for SQL Server DB instance**
+ The following AWS CLI command adds your IAM role to an RDS for SQL Server DB instance named `mydbinstance`.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds add-role-to-db-instance \
  	   --db-instance-identifier mydbinstance \
  	   --feature-name S3_INTEGRATION \
  	   --role-arn your-role-arn
  ```

  For Windows:

  ```
  aws rds add-role-to-db-instance ^
  	   --db-instance-identifier mydbinstance ^
  	   --feature-name S3_INTEGRATION ^
  	   --role-arn your-role-arn
  ```

  Replace `your-role-arn` with the role ARN that you noted in a previous step. `S3_INTEGRATION` must be specified for the `--feature-name` option.

# Transferring files between RDS for SQL Server and Amazon S3
<a name="Appendix.SQLServer.Options.S3-integration.using"></a>

You can use Amazon RDS stored procedures to download and upload files between Amazon S3 and your RDS DB instance. You can also use Amazon RDS stored procedures to list and delete files on the RDS instance.

The files that you download from and upload to S3 are stored in the `D:\S3` folder. This is the only folder that you can use to access your files. You can organize your files into subfolders, which are created for you when you include the destination folder during download.

Some of the stored procedures require that you provide an Amazon Resource Name (ARN) to your S3 bucket and file. The format for your ARN is `arn:aws:s3:::amzn-s3-demo-bucket/file_name`. Amazon S3 doesn't require an account number or AWS Region in ARNs.
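As an illustration of that format, a sketch of building the file ARN that the stored procedures expect (a hypothetical helper, not an RDS API):

```python
def s3_file_arn(bucket, key):
    """Build the arn:aws:s3:::bucket/file_name form used for
    @s3_arn_of_file; S3 ARNs carry no account ID or AWS Region."""
    return f"arn:aws:s3:::{bucket}/{key}"
```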

S3 integration tasks run sequentially and share the same queue as native backup and restore tasks. At maximum, you can have only two tasks in progress at any time in this queue. It can take up to five minutes for the task to begin processing.

## Downloading files from an Amazon S3 bucket to a SQL Server DB instance
<a name="Appendix.SQLServer.Options.S3-integration.using.download"></a>

To download files from an S3 bucket to an RDS for SQL Server DB instance, use the Amazon RDS stored procedure `msdb.dbo.rds_download_from_s3` with the following parameters.


| Parameter name | Data type | Default | Required | Description | 
| --- | --- | --- | --- | --- | 
|  `@s3_arn_of_file`  |  NVARCHAR  |  –  |  Required  |  The S3 ARN of the file to download, for example: `arn:aws:s3:::amzn-s3-demo-bucket/mydata.csv`  | 
|  `@rds_file_path`  |  NVARCHAR  |  –  |  Optional  |  The file path for the RDS instance. If not specified, the file path is `D:\S3\<filename in s3>`. RDS supports absolute paths and relative paths. If you want to create a subfolder, include it in the file path.  | 
|  `@overwrite_file`  |  INT  |  0  |  Optional  | Overwrite the existing file:  0 = Don't overwrite 1 = Overwrite | 

You can download files without a file extension and files with the following file extensions: .bcp, .csv, .dat, .fmt, .info, .lst, .tbl, .txt, and .xml.

**Note**  
Files with the .ispac file extension are supported for download when SQL Server Integration Services is enabled. For more information on enabling SSIS, see [SQL Server Integration Services](Appendix.SQLServer.Options.SSIS.md).  
Files with the following file extensions are supported for download when SQL Server Analysis Services is enabled: .abf, .asdatabase, .configsettings, .deploymentoptions, .deploymenttargets, and .xmla. For more information on enabling SSAS, see [SQL Server Analysis Services](Appendix.SQLServer.Options.SSAS.md).

The following example shows the stored procedure to download files from S3. 

```
exec msdb.dbo.rds_download_from_s3
	    @s3_arn_of_file='arn:aws:s3:::amzn-s3-demo-bucket/bulk_data.csv',
	    @rds_file_path='D:\S3\seed_data\data.csv',
	    @overwrite_file=1;
```

The example `rds_download_from_s3` operation creates a folder named `seed_data` in `D:\S3\`, if the folder doesn't exist yet. Then the example downloads the source file `bulk_data.csv` from S3 to a new file named `data.csv` on the DB instance. If the file previously existed, it's overwritten because the `@overwrite_file` parameter is set to `1`.
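When `@rds_file_path` is omitted, the download destination defaults to `D:\S3\<filename in s3>`. The following sketch illustrates that rule; it is a hypothetical helper, and treating the file name as the last path segment of the ARN is an assumption, not the procedure's actual implementation:

```python
def default_rds_file_path(s3_arn_of_file):
    """Derive the default destination D:\\S3\\<filename in s3>
    from the file's S3 ARN."""
    # arn:aws:s3:::bucket/some/prefix/file.csv -> file.csv
    filename = s3_arn_of_file.rsplit("/", 1)[-1]
    return "D:\\S3\\" + filename
```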

## Uploading files from a SQL Server DB instance to an Amazon S3 bucket
<a name="Appendix.SQLServer.Options.S3-integration.using.upload"></a>

To upload files from an RDS for SQL Server DB instance to an S3 bucket, use the Amazon RDS stored procedure `msdb.dbo.rds_upload_to_s3` with the following parameters.


| Parameter name | Data type | Default | Required | Description | 
| --- | --- | --- | --- | --- | 
|  `@s3_arn_of_file`  |  NVARCHAR  |  –  |  Required  |  The S3 ARN of the file to be created in S3, for example: `arn:aws:s3:::amzn-s3-demo-bucket/mydata.csv`  | 
|  `@rds_file_path`  |  NVARCHAR  |  –  |  Required  | The file path of the file to upload to S3. Absolute and relative paths are supported. | 
|  `@overwrite_file`  |  INT  |  –  |  Optional  |  Overwrite the existing file: `0` = don't overwrite, `1` = overwrite  | 

The following example uploads the file named `data.csv` from the specified location in `D:\S3\seed_data\` to a file `new_data.csv` in the S3 bucket specified by the ARN.

```
exec msdb.dbo.rds_upload_to_s3 
		@rds_file_path='D:\S3\seed_data\data.csv',
		@s3_arn_of_file='arn:aws:s3:::amzn-s3-demo-bucket/new_data.csv',
		@overwrite_file=1;
```

If the file previously existed in S3, it's overwritten because the `@overwrite_file` parameter is set to `1`.

## Listing files on the RDS DB instance
<a name="Appendix.SQLServer.Options.S3-integration.using.listing-files"></a>

To list the files available on the DB instance, use both a stored procedure and a function. First, run the following stored procedure to gather file details from the files in `D:\S3\`. 

```
exec msdb.dbo.rds_gather_file_details;
```

The stored procedure returns the ID of the task. Like other tasks, this stored procedure runs asynchronously. As soon as the status of the task is `SUCCESS`, you can use the task ID in the `rds_fn_list_file_details` function to list the existing files and directories in `D:\S3\`, as shown following.

```
SELECT * FROM msdb.dbo.rds_fn_list_file_details(TASK_ID);
```

The `rds_fn_list_file_details` function returns a table with the following columns.


| Output parameter | Description | 
| --- | --- | 
| filepath | Absolute path of the file (for example, D:\S3\mydata.csv) | 
| size\_in\_bytes | File size (in bytes) | 
| last\_modified\_utc | Last modification date and time in UTC format | 
| is\_directory | Option that indicates whether the item is a directory (true/false) | 

## Deleting files on the RDS DB instance
<a name="Appendix.SQLServer.Options.S3-integration.using.deleting-files"></a>

To delete the files available on the DB instance, use the Amazon RDS stored procedure `msdb.dbo.rds_delete_from_filesystem` with the following parameters. 


| Parameter name | Data type | Default | Required | Description | 
| --- | --- | --- | --- | --- | 
|  `@rds_file_path`  |  NVARCHAR  |  –  |  Required  | The file path of the file to delete. Absolute and relative paths are supported.  | 
|  `@force_delete`  |  INT  | 0 |  Optional  |  To delete a directory, set this flag to `1`. This parameter is ignored when you delete a file.  | 

To delete a directory, the `@rds_file_path` must end with a backslash (`\`) and `@force_delete` must be set to `1`.

The following example deletes the file `D:\S3\delete_me.txt`.

```
exec msdb.dbo.rds_delete_from_filesystem
    @rds_file_path='D:\S3\delete_me.txt';
```

The following example deletes the directory `D:\S3\example_folder\`.

```
exec msdb.dbo.rds_delete_from_filesystem
    @rds_file_path='D:\S3\example_folder\',
    @force_delete=1;
```

## Monitoring the status of a file transfer task
<a name="Appendix.SQLServer.Options.S3-integration.using.monitortasks"></a>

To track the status of your S3 integration task, call the `rds_fn_task_status` function. It takes two parameters. The first parameter should always be `NULL` because it doesn't apply to S3 integration. The second parameter accepts a task ID.

To see a list of all tasks, set the first parameter to `NULL` and the second parameter to `0`, as shown in the following example.

```
SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,0);
```

To get a specific task, set the first parameter to `NULL` and the second parameter to the task ID, as shown in the following example.

```
SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,42);
```

The `rds_fn_task_status` function returns the following information.


|  Output parameter  |  Description  | 
| --- | --- | 
|  `task_id`  |  The ID of the task.  | 
|  `task_type`  |  For S3 integration, tasks can have the following task types: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.Options.S3-integration.using.monitortasks.html)  | 
|  `database_name`  | Not applicable to S3 integration tasks. | 
|  `% complete`  |  The progress of the task as a percentage.  | 
|  `duration(mins)`  |  The amount of time spent on the task, in minutes.  | 
|  `lifecycle`  |  The status of the task. Possible statuses are the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.Options.S3-integration.using.monitortasks.html)  | 
|  `task_info`  |  Additional information about the task. If an error occurs during processing, this column contains information about the error.   | 
|  `last_updated`  |  The date and time that the task status was last updated.   | 
|  `created_at`  |  The date and time that the task was created.  | 
|  `S3_object_arn`  |  The ARN of the S3 object downloaded from or uploaded to.  | 
|  `overwrite_S3_backup_file`  |  Not applicable to S3 integration tasks.  | 
|  `KMS_master_key_arn`  |  Not applicable to S3 integration tasks.  | 
|  `filepath`  |  The file path on the RDS DB instance.  | 
|  `overwrite_file`  |  An option that indicates if an existing file is overwritten.  | 
|  `task_metadata`  |  Not applicable to S3 integration tasks.  | 

## Canceling a task
<a name="Appendix.SQLServer.Options.S3-integration.canceltasks"></a>

To cancel S3 integration tasks, use the `msdb.dbo.rds_cancel_task` stored procedure with the `task_id` parameter. In-progress delete and list tasks can't be canceled. The following example shows a request to cancel a task.

```
exec msdb.dbo.rds_cancel_task @task_id = 1234;
```

To get an overview of all tasks and their task IDs, use the `rds_fn_task_status` function as described in [Monitoring the status of a file transfer task](Appendix.SQLServer.Options.S3-integration.using.monitortasks.md).

## Multi-AZ limitations for S3 integration
<a name="S3-MAZ"></a>

On Multi-AZ instances, files in the `D:\S3` folder are deleted on the standby replica after a failover. A failover can be planned, for example, during DB instance modifications such as changing the instance class or upgrading the engine version. Or a failover can be unplanned, during an outage of the primary.

**Note**  
We don't recommend using the `D:\S3` folder for file storage. The best practice is to upload created files to Amazon S3 to make them durable, and download files when you need to import data.

To determine the last failover time, you can use the `msdb.dbo.rds_failover_time` stored procedure. For more information, see [Determining the last failover time for Amazon RDS for SQL Server](Appendix.SQLServer.CommonDBATasks.LastFailover.md).
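For example, calling the stored procedure returns the error log availability and the most recent failover time:

```
exec msdb.dbo.rds_failover_time;
```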

**Example of no recent failover**  
This example shows the output when there is no recent failover in the error logs. No failover has happened since 2020-04-29 23:59:00.01.  
Therefore, all files downloaded after that time that haven't been deleted using the `rds_delete_from_filesystem` stored procedure are still accessible on the current host. Files downloaded before that time might also be available.  


| errorlog\_available\_from | recent\_failover\_time | 
| --- | --- | 
|  2020-04-29 23:59:00.0100000  |  null  | 

**Example of recent failover**  
This example shows the output when there is a failover in the error logs. The most recent failover was at 2020-05-05 18:57:51.89.  
All files downloaded after that time that haven't been deleted using the `rds_delete_from_filesystem` stored procedure are still accessible on the current host.  


| errorlog\_available\_from | recent\_failover\_time | 
| --- | --- | 
|  2020-04-29 23:59:00.0100000  |  2020-05-05 18:57:51.8900000  | 

## Disabling RDS for SQL Server integration with S3
<a name="Appendix.SQLServer.Options.S3-integration.disabling"></a>

Following, you can find how to disable Amazon S3 integration with Amazon RDS for SQL Server. Files in `D:\S3\` aren't deleted when you disable S3 integration.

**Note**  
To remove an IAM role from a DB instance, the status of the DB instance must be `available`.

## Console
<a name="Appendix.SQLServer.Options.S3-integration.disabling.console"></a>

**To disassociate your IAM role from your DB instance**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose the RDS for SQL Server DB instance name to display its details.

1. On the **Connectivity & security** tab, in the **Manage IAM roles** section, choose the IAM role to remove.

1. Choose **Delete**.

## AWS CLI
<a name="Appendix.SQLServer.Options.S3-integration.disabling.cli"></a>

**To remove the IAM role from the RDS for SQL Server DB instance**
+ The following AWS CLI command removes the IAM role from an RDS for SQL Server DB instance named `mydbinstance`.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds remove-role-from-db-instance \
  	   --db-instance-identifier mydbinstance \
  	   --feature-name S3_INTEGRATION \
  	   --role-arn your-role-arn
  ```

  For Windows:

  ```
  aws rds remove-role-from-db-instance ^
  	   --db-instance-identifier mydbinstance ^
  	   --feature-name S3_INTEGRATION ^
  	   --role-arn your-role-arn
  ```

  Replace `your-role-arn` with the appropriate IAM role ARN for the `--feature-name` option.

# Using Database Mail on Amazon RDS for SQL Server
<a name="SQLServer.DBMail"></a>

You can use Database Mail to send email messages to users from your Amazon RDS for SQL Server DB instance. The messages can contain files and query results. Database Mail includes the following components:
+ **Configuration and security objects** – These objects create profiles and accounts, and are stored in the `msdb` database.
+ **Messaging objects** – These objects include the [sp\_send\_dbmail](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-send-dbmail-transact-sql) stored procedure used to send messages, and data structures that hold information about messages. They're stored in the `msdb` database.
+ **Logging and auditing objects** – Database Mail writes logging information to the `msdb` database and the Microsoft Windows application event log.
+ **Database Mail executable** – `DatabaseMail.exe` reads from a queue in the `msdb` database and sends email messages.

RDS supports Database Mail for all SQL Server versions on the Web, Standard, and Enterprise Editions.

## Limitations
<a name="SQLServer.DBMail.Limitations"></a>

The following limitations apply to using Database Mail on your SQL Server DB instance:
+ Database Mail isn't supported for SQL Server Express Edition.
+ Modifying Database Mail configuration parameters isn't supported. To see the preset (default) values, use the [sysmail\_help\_configure\_sp](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-help-configure-sp-transact-sql) stored procedure.
+ File attachments aren't fully supported. For more information, see [Working with file attachments](#SQLServer.DBMail.Files).
+ The maximum file attachment size is 1 MB.
+ Database Mail requires additional configuration on Multi-AZ DB instances. For more information, see [Considerations for Multi-AZ deployments](#SQLServer.DBMail.MAZ).
+ Configuring SQL Server Agent to send email messages to predefined operators isn't supported.

# Enabling Database Mail
<a name="SQLServer.DBMail.Enable"></a>

Use the following process to enable Database Mail for your DB instance:

1. Create a new parameter group.

1. Modify the parameter group to set the `database mail xps` parameter to 1.

1. Associate the parameter group with the DB instance.

## Creating the parameter group for Database Mail
<a name="DBMail.CreateParamGroup"></a>

Create a parameter group for the `database mail xps` parameter that corresponds to the SQL Server edition and version of your DB instance.

**Note**  
You can also modify an existing parameter group. Follow the procedure in [Modifying the parameter that enables Database Mail](#DBMail.ModifyParamGroup).

### Console
<a name="DBMail.CreateParamGroup.Console"></a>

The following example creates a parameter group for SQL Server Standard Edition 2016.

**To create the parameter group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Parameter groups**.

1. Choose **Create parameter group**.

1. In the **Create parameter group** pane, do the following:

   1. For **Parameter group family**, choose **sqlserver-se-13.0**.

   1. For **Group name**, enter an identifier for the parameter group, such as **dbmail-sqlserver-se-13**.

   1. For **Description**, enter **Database Mail XPs**.

1. Choose **Create**.

### CLI
<a name="DBMail.CreateParamGroup.CLI"></a>

The following example creates a parameter group for SQL Server Standard Edition 2016.

**To create the parameter group**
+ Use one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-db-parameter-group \
      --db-parameter-group-name dbmail-sqlserver-se-13 \
      --db-parameter-group-family "sqlserver-se-13.0" \
      --description "Database Mail XPs"
  ```

  For Windows:

  ```
  aws rds create-db-parameter-group ^
      --db-parameter-group-name dbmail-sqlserver-se-13 ^
      --db-parameter-group-family "sqlserver-se-13.0" ^
      --description "Database Mail XPs"
  ```

## Modifying the parameter that enables Database Mail
<a name="DBMail.ModifyParamGroup"></a>

Modify the `database mail xps` parameter in the parameter group that corresponds to the SQL Server edition and version of your DB instance.

To enable Database Mail, set the `database mail xps` parameter to 1.

### Console
<a name="DBMail.ModifyParamGroup.Console"></a>

The following example modifies the parameter group that you created for SQL Server Standard Edition 2016.

**To modify the parameter group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Parameter groups**.

1. Choose the parameter group, such as **dbmail-sqlserver-se-13**.

1. Under **Parameters**, filter the parameter list for **mail**.

1. Choose **database mail xps**.

1. Choose **Edit parameters**.

1. Enter **1**.

1. Choose **Save changes**.

### CLI
<a name="DBMail.ModifyParamGroup.CLI"></a>

The following example modifies the parameter group that you created for SQL Server Standard Edition 2016.

**To modify the parameter group**
+ Use one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-parameter-group \
      --db-parameter-group-name dbmail-sqlserver-se-13 \
      --parameters "ParameterName='database mail xps',ParameterValue=1,ApplyMethod=immediate"
  ```

  For Windows:

  ```
  aws rds modify-db-parameter-group ^
      --db-parameter-group-name dbmail-sqlserver-se-13 ^
      --parameters "ParameterName='database mail xps',ParameterValue=1,ApplyMethod=immediate"
  ```

## Associating the parameter group with the DB instance
<a name="DBMail.AssocParamGroup"></a>

You can use the AWS Management Console or the AWS CLI to associate the Database Mail parameter group with the DB instance.

### Console
<a name="DBMail.AssocParamGroup.Console"></a>

You can associate the Database Mail parameter group with a new or existing DB instance.
+ For a new DB instance, associate it when you launch the instance. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).
+ For an existing DB instance, associate it by modifying the instance. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

### CLI
<a name="DBMail.AssocParamGroup.CLI"></a>

You can associate the Database Mail parameter group with a new or existing DB instance.

**To create a DB instance with the Database Mail parameter group**
+ Specify the same DB engine type and major version as you used when creating the parameter group.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-db-instance \
      --db-instance-identifier mydbinstance \
      --db-instance-class db.m5.2xlarge \
      --engine sqlserver-se \
      --engine-version 13.00.5426.0.v1 \
      --allocated-storage 100 \
      --manage-master-user-password \
      --master-username admin \
      --storage-type gp2 \
      --license-model li \
      --db-parameter-group-name dbmail-sqlserver-se-13
  ```

  For Windows:

  ```
  aws rds create-db-instance ^
      --db-instance-identifier mydbinstance ^
      --db-instance-class db.m5.2xlarge ^
      --engine sqlserver-se ^
      --engine-version 13.00.5426.0.v1 ^
      --allocated-storage 100 ^
      --manage-master-user-password ^
      --master-username admin ^
      --storage-type gp2 ^
      --license-model li ^
      --db-parameter-group-name dbmail-sqlserver-se-13
  ```

**To modify a DB instance and associate the Database Mail parameter group**
+ Use one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-instance \
      --db-instance-identifier mydbinstance \
      --db-parameter-group-name dbmail-sqlserver-se-13 \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds modify-db-instance ^
      --db-instance-identifier mydbinstance ^
      --db-parameter-group-name dbmail-sqlserver-se-13 ^
      --apply-immediately
  ```
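After the parameter group change has been applied to the DB instance, you can verify that Database Mail is enabled by querying `sys.configurations`:

```
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'Database Mail XPs';
-- value_in_use = 1 indicates that Database Mail is enabled
```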

# Configuring Database Mail
<a name="SQLServer.DBMail.Configure"></a>

You perform the following tasks to configure Database Mail:

1. Create the Database Mail profile.

1. Create the Database Mail account.

1. Add the Database Mail account to the Database Mail profile.

1. Add users to the Database Mail profile.

**Note**  
To configure Database Mail, make sure that you have `execute` permission on the stored procedures in the `msdb` database.

## Creating the Database Mail profile
<a name="SQLServer.DBMail.Configure.Profile"></a>

To create the Database Mail profile, you use the [sysmail\_add\_profile\_sp](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-add-profile-sp-transact-sql) stored procedure. The following example creates a profile named `Notifications`.

**To create the profile**
+ Use the following SQL statement.

  ```
  USE msdb
  GO
  
  EXECUTE msdb.dbo.sysmail_add_profile_sp  
      @profile_name         = 'Notifications',  
      @description          = 'Profile used for sending outgoing notifications using Amazon SES.';
  GO
  ```

## Creating the Database Mail account
<a name="SQLServer.DBMail.Configure.Account"></a>

To create the Database Mail account, you use the [sysmail\_add\_account\_sp](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-add-account-sp-transact-sql) stored procedure. The following example creates an account named `SES` on an RDS for SQL Server DB instance in a private VPC, using Amazon Simple Email Service.

Using Amazon SES requires the following parameters:
+ `@email_address` – An Amazon SES verified identity. For more information, see [Verified identities in Amazon SES](https://docs.aws.amazon.com/ses/latest/dg/verify-addresses-and-domains.html).
+ `@mailserver_name` – An Amazon SES SMTP endpoint. For more information, see [Connecting to an Amazon SES SMTP endpoint](https://docs.aws.amazon.com/ses/latest/dg/smtp-connect.html).
+ `@username` – An Amazon SES SMTP user name. For more information, see [Obtaining Amazon SES SMTP credentials](https://docs.aws.amazon.com/ses/latest/dg/smtp-credentials.html).

  Don't use an AWS Identity and Access Management user name.
+ `@password` – An Amazon SES SMTP password. For more information, see [Obtaining Amazon SES SMTP credentials](https://docs.aws.amazon.com/ses/latest/dg/smtp-credentials.html).

**To create the account**
+ Use the following SQL statement.

  ```
  USE msdb
  GO
  
  EXECUTE msdb.dbo.sysmail_add_account_sp
      @account_name        = 'SES',
      @description         = 'Mail account for sending outgoing notifications.',
      @email_address       = 'nobody@example.com',
      @display_name        = 'Automated Mailer',
      @mailserver_name     = 'vpce-0a1b2c3d4e5f-01234567.email-smtp.us-west-2.vpce.amazonaws.com',
      @port                = 587,
      @enable_ssl          = 1,
      @username            = 'Smtp_Username',
      @password            = 'Smtp_Password';
  GO
  ```
**Note**  
As a security best practice, specify credentials other than the placeholder values shown here.

## Adding the Database Mail account to the Database Mail profile
<a name="SQLServer.DBMail.Configure.AddAccount"></a>

To add the Database Mail account to the Database Mail profile, you use the [sysmail\_add\_profileaccount\_sp](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-add-profileaccount-sp-transact-sql) stored procedure. The following example adds the `SES` account to the `Notifications` profile.

**To add the account to the profile**
+ Use the following SQL statement.

  ```
  USE msdb
  GO
  
  EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
      @profile_name        = 'Notifications',
      @account_name        = 'SES',
      @sequence_number     = 1;
  GO
  ```

## Adding users to the Database Mail profile
<a name="SQLServer.DBMail.Configure.AddUser"></a>

To grant permission for an `msdb` database principal to use a Database Mail profile, you use the [sysmail\_add\_principalprofile\_sp](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-add-principalprofile-sp-transact-sql) stored procedure. A *principal* is an entity that can request SQL Server resources. The database principal must map to a SQL Server authentication user, a Windows Authentication user, or a Windows Authentication group.

The following example grants public access to the `Notifications` profile.

**To add a user to the profile**
+ Use the following SQL statement.

  ```
  USE msdb
  GO
  
  EXECUTE msdb.dbo.sysmail_add_principalprofile_sp  
      @profile_name       = 'Notifications',  
      @principal_name     = 'public',  
      @is_default         = 1;
  GO
  ```
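To confirm the configuration, you can list the profiles, accounts, and profile-account mappings by using the Microsoft-provided help procedures in `msdb`:

```
USE msdb
GO

EXECUTE msdb.dbo.sysmail_help_profile_sp;
EXECUTE msdb.dbo.sysmail_help_account_sp;
EXECUTE msdb.dbo.sysmail_help_profileaccount_sp;
GO
```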

## Amazon RDS stored procedures and functions for Database Mail
<a name="SQLServer.DBMail.StoredProc"></a>

Microsoft provides [stored procedures](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/database-mail-stored-procedures-transact-sql) for using Database Mail, such as creating, listing, updating, and deleting accounts and profiles. In addition, RDS provides the stored procedures and functions for Database Mail shown in the following table.


| Procedure/Function | Description | 
| --- | --- | 
| rds\_fn\_sysmail\_allitems | Shows sent messages, including those submitted by other users. | 
| rds\_fn\_sysmail\_event\_log | Shows events, including those for messages submitted by other users. | 
| rds\_fn\_sysmail\_mailattachments | Shows attachments, including those to messages submitted by other users. | 
| rds\_sysmail\_control | Starts and stops the mail queue (DatabaseMail.exe process). | 
| rds\_sysmail\_delete\_mailitems\_sp | Deletes email messages sent by all users from the Database Mail internal tables. | 

# Sending email messages using Database Mail
<a name="SQLServer.DBMail.Send"></a>

You use the [sp\_send\_dbmail](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-send-dbmail-transact-sql) stored procedure to send email messages using Database Mail.

## Usage
<a name="SQLServer.DBMail.Send.Usage"></a>

```
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'profile_name',
@recipients = 'recipient1@example.com[; recipient2; ... recipientn]',
@subject = 'subject',
@body = 'message_body',
[@body_format = 'HTML'],
[@file_attachments = 'file_path1; file_path2; ... file_pathn'],
[@query = 'SQL_query'],
[@attach_query_result_as_file = 0|1];
```

The following parameters are required:
+ `@profile_name` – The name of the Database Mail profile from which to send the message.
+ `@recipients` – The semicolon-delimited list of email addresses to which to send the message.
+ `@subject` – The subject of the message.
+ `@body` – The body of the message. You can also use a declared variable as the body.

The following parameters are optional:
+ `@body_format` – This parameter is used with a declared variable to send email in HTML format.
+ `@file_attachments` – The semicolon-delimited list of message attachments. File paths must be absolute paths.
+ `@query` – A SQL query to run. The query results can be attached as a file or included in the body of the message.
+ `@attach_query_result_as_file` – Whether to attach the query result as a file. Set to 0 for no, 1 for yes. The default is 0.

## Examples
<a name="SQLServer.DBMail.Send.Examples"></a>

The following examples demonstrate how to send email messages.

**Example of sending a message to a single recipient**  

```
USE msdb
GO

EXEC msdb.dbo.sp_send_dbmail
     @profile_name       = 'Notifications',
     @recipients         = 'nobody@example.com',
     @subject            = 'Automated DBMail message - 1',
     @body               = 'Database Mail configuration was successful.';
GO
```

**Example of sending a message to multiple recipients**  

```
USE msdb
GO

EXEC msdb.dbo.sp_send_dbmail
     @profile_name       = 'Notifications',
     @recipients         = 'recipient1@example.com;recipient2@example.com',
     @subject            = 'Automated DBMail message - 2',
     @body               = 'This is a message.';
GO
```

**Example of sending a SQL query result as a file attachment**  

```
USE msdb
GO

EXEC msdb.dbo.sp_send_dbmail
     @profile_name       = 'Notifications',
     @recipients         = 'nobody@example.com',
     @subject            = 'Test SQL query',
     @body               = 'This is a SQL query test.',
     @query              = 'SELECT * FROM abc.dbo.test',
     @attach_query_result_as_file = 1;
GO
```

**Example of sending a message in HTML format**  

```
USE msdb
GO

DECLARE @HTML_Body as NVARCHAR(500) = 'Hi, <h4> Heading </h4> </br> See the report. <b> Regards </b>';

EXEC msdb.dbo.sp_send_dbmail
     @profile_name       = 'Notifications',
     @recipients         = 'nobody@example.com',
     @subject            = 'Test HTML message',
     @body               = @HTML_Body,
     @body_format        = 'HTML';
GO
```

**Example of sending a message using a trigger when a specific event occurs in the database**  

```
USE AdventureWorks2017
GO
IF OBJECT_ID ('Production.iProductNotification', 'TR') IS NOT NULL
DROP TRIGGER Production.iProductNotification
GO

CREATE TRIGGER iProductNotification ON Production.Product
   FOR INSERT
   AS
   DECLARE @ProductInformation nvarchar(255);
   SELECT
   @ProductInformation = 'A new product, ' + Name + ', is now available for $' + CAST(StandardCost AS nvarchar(20)) + '!'
   FROM INSERTED i;

EXEC msdb.dbo.sp_send_dbmail
     @profile_name       = 'Notifications',
     @recipients         = 'nobody@example.com',
     @subject            = 'New product information',
     @body               = @ProductInformation;
GO
```

# Viewing messages, logs, and attachments
<a name="SQLServer.DBMail.View"></a>

You use RDS stored procedures to view messages, event logs, and attachments.

**To view all email messages**
+ Use the following SQL query.

  ```
  SELECT * FROM msdb.dbo.rds_fn_sysmail_allitems(); -- sent_status can be 'sent', 'failed', or 'unsent'
  ```
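  For example, assuming the function returns the standard `sysmail_allitems` columns, you can list only the messages that failed to send:

  ```
  SELECT mailitem_id, recipients, subject, send_request_date
  FROM msdb.dbo.rds_fn_sysmail_allitems()
  WHERE sent_status = 'failed';
  ```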

**To view all email event logs**
+ Use the following SQL query.

  ```
  SELECT * FROM msdb.dbo.rds_fn_sysmail_event_log();
  ```

**To view all email attachments**
+ Use the following SQL query.

  ```
  SELECT * FROM msdb.dbo.rds_fn_sysmail_mailattachments();
  ```

# Deleting messages
<a name="SQLServer.DBMail.Delete"></a>

You use the `rds_sysmail_delete_mailitems_sp` stored procedure to delete messages.

**Note**  
RDS automatically deletes mail table items when DBMail history data reaches 1 GB in size, with a retention period of at least 24 hours.  
If you want to keep mail items for a longer period, you can archive them. For more information, see [Create a SQL Server Agent job to archive Database Mail messages and event logs](https://docs.microsoft.com/en-us/sql/relational-databases/database-mail/create-a-sql-server-agent-job-to-archive-database-mail-messages-and-event-logs) in the Microsoft documentation.

**To delete all email messages**
+ Use the following SQL statement.

  ```
  DECLARE @GETDATE datetime
  SET @GETDATE = GETDATE();
  EXECUTE msdb.dbo.rds_sysmail_delete_mailitems_sp @sent_before = @GETDATE;
  GO
  ```
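  A common variation is to keep recent history and delete only older messages. For example, to delete messages sent more than 30 days ago:

  ```
  DECLARE @CutoffDate datetime
  SET @CutoffDate = DATEADD(day, -30, GETDATE());
  EXECUTE msdb.dbo.rds_sysmail_delete_mailitems_sp @sent_before = @CutoffDate;
  GO
  ```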

**To delete all email messages with a particular status**
+ Use the following SQL statement to delete all failed messages.

  ```
  EXECUTE msdb.dbo.rds_sysmail_delete_mailitems_sp @sent_status = 'failed';
  GO
  ```

# Starting and stopping the mail queue
<a name="SQLServer.DBMail.StartStop"></a>

Use the following instructions to start and stop the Database Mail queue:

**Topics**
+ [Starting the mail queue](#SQLServer.DBMail.Start)
+ [Stopping the mail queue](#SQLServer.DBMail.Stop)

## Starting the mail queue
<a name="SQLServer.DBMail.Start"></a>

You use the `rds_sysmail_control` stored procedure to start the Database Mail process.

**Note**  
Enabling Database Mail automatically starts the mail queue.

**To start the mail queue**
+ Use the following SQL statement.

  ```
  EXECUTE msdb.dbo.rds_sysmail_control start;
  GO
  ```

## Stopping the mail queue
<a name="SQLServer.DBMail.Stop"></a>

You use the `rds_sysmail_control` stored procedure to stop the Database Mail process.

**To stop the mail queue**
+ Use the following SQL statement.

  ```
  EXECUTE msdb.dbo.rds_sysmail_control stop;
  GO
  ```

## Working with file attachments
<a name="SQLServer.DBMail.Files"></a>

The following file attachment extensions aren't supported in Database Mail messages from RDS on SQL Server: .ade, .adp, .apk, .appx, .appxbundle, .bat, .bak, .cab, .chm, .cmd, .com, .cpl, .dll, .dmg, .exe, .hta, .inf1, .ins, .isp, .iso, .jar, .job, .js, .jse, .ldf, .lib, .lnk, .mde, .mdf, .msc, .msi, .msix, .msixbundle, .msp, .mst, .nsh, .pif, .ps, .ps1, .psc1, .reg, .rgs, .scr, .sct, .shb, .shs, .svg, .sys, .u3p, .vb, .vbe, .vbs, .vbscript, .vxd, .ws, .wsc, .wsf, and .wsh.

Database Mail uses the Microsoft Windows security context of the current user to control access to files. Users who log in with SQL Server Authentication can't attach files using the `@file_attachments` parameter with the `sp_send_dbmail` stored procedure. Windows doesn't allow SQL Server to provide credentials from a remote computer to another remote computer. Therefore, Database Mail can't attach files from a network share when the command is run from a computer other than the computer running SQL Server.

However, you can use SQL Server Agent jobs to attach files. For more information on SQL Server Agent, see [Using SQL Server Agent for Amazon RDS](Appendix.SQLServer.CommonDBATasks.Agent.md) and [SQL Server Agent](https://docs.microsoft.com/en-us/sql/ssms/agent/sql-server-agent) in the Microsoft documentation.
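The following is a minimal sketch of that approach, assuming the `Notifications` profile from the earlier examples exists; the job name, recipient, and file path are illustrative:

```
USE msdb
GO

-- Create an on-demand job whose step sends the mail under the Agent's security context
EXECUTE msdb.dbo.sp_add_job
    @job_name = 'SendReportMail';

EXECUTE msdb.dbo.sp_add_jobstep
    @job_name  = 'SendReportMail',
    @step_name = 'Send mail with attachment',
    @subsystem = 'TSQL',
    @command   = N'EXEC msdb.dbo.sp_send_dbmail
        @profile_name     = ''Notifications'',
        @recipients       = ''nobody@example.com'',
        @subject          = ''Daily report'',
        @body             = ''See the attached file.'',
        @file_attachments = ''D:\S3\report.csv'';';

EXECUTE msdb.dbo.sp_add_jobserver
    @job_name = 'SendReportMail';

-- Run the job on demand
EXECUTE msdb.dbo.sp_start_job @job_name = 'SendReportMail';
GO
```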

## Considerations for Multi-AZ deployments
<a name="SQLServer.DBMail.MAZ"></a>

When you configure Database Mail on a Multi-AZ DB instance, the configuration isn't automatically propagated to the secondary. We recommend converting the Multi-AZ instance to a Single-AZ instance, configuring Database Mail, and then converting the DB instance back to Multi-AZ. Then both the primary and secondary nodes have the Database Mail configuration.

If you create a read replica from your Multi-AZ instance that has Database Mail configured, the replica inherits the configuration, but without the password to the SMTP server. Update the Database Mail account with the password.

## Removing the SMTP (port 25) restriction
<a name="SQLServer.DBMail.SMTP"></a>

By default, AWS blocks outbound traffic on SMTP (port 25) for RDS for SQL Server DB instances to help prevent spam. You can remove this restriction if needed. For more information, see [How do I remove the restriction on port 25 from my Amazon EC2 instance or Lambda function?](https://repost.aws/knowledge-center/ec2-port-25-throttle).

# Instance store support for the tempdb database on Amazon RDS for SQL Server
<a name="SQLServer.InstanceStore"></a>

An *instance store* provides temporary block-level storage for your DB instance. This storage is located on disks that are physically attached to the host computer. These disks have Non-Volatile Memory Express (NVMe) instance storage that is based on solid-state drives (SSDs). This storage is optimized for low latency, very high random I/O performance, and high sequential read throughput.

By placing `tempdb` data files and `tempdb` log files on the instance store, you can achieve lower read and write latencies compared to standard storage based on Amazon EBS.

**Note**  
SQL Server database files and database log files aren't placed on the instance store.

## Enabling the instance store
<a name="SQLServer.InstanceStore.Enable"></a>

When RDS provisions DB instances with one of the following instance classes, the `tempdb` database is automatically placed onto the instance store:
+ db.m5d
+ db.r5d
+ db.x2iedn

To enable the instance store, do one of the following:
+ Create a SQL Server DB instance using one of these instance types. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).
+ Modify an existing SQL Server DB instance to use one of them. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

The instance store is available in all AWS Regions where one or more of these instance types are supported. For more information on these instance classes, see [DB instance classes](Concepts.DBInstanceClass.md). For more information on the instance classes supported by Amazon RDS for SQL Server, see [DB instance class support for Microsoft SQL Server](SQLServer.Concepts.General.InstanceClasses.md).

## File location and size considerations
<a name="SQLServer.InstanceStore.Files"></a>

On instances without an instance store, RDS stores the `tempdb` data and log files in the `D:\rdsdbdata\DATA` directory. Both files start at 8 MB by default.

On instances with an instance store, RDS stores the `tempdb` data and log files in the `T:\rdsdbdata\DATA` directory.

When `tempdb` has only one data file (`tempdb.mdf`) and one log file (`templog.ldf`), `templog.ldf` starts at 8 MB by default and `tempdb.mdf` starts at 80% or more of the instance's storage capacity. Twenty percent of the storage capacity or 200 GB, whichever is less, is kept free to start. Multiple `tempdb` data files split the 80% disk space evenly, while log files always have an 8-MB initial size.

For example, if you modify your DB instance class from `db.m5.2xlarge` to `db.m5d.2xlarge`, the size of `tempdb` data files increases from 8 MB each to 234 GB in total.

**Note**  
Besides the `tempdb` data and log files on the instance store (`T:\rdsdbdata\DATA`), you can still create extra `tempdb` data and log files on the data volume (`D:\rdsdbdata\DATA`). Those files always have an 8 MB initial size.
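The sizing rule above can be sketched as arithmetic: the data files share the instance-store capacity minus the lesser of 20 percent or 200 GB kept free. The following is a minimal illustration (not an RDS utility); actual on-instance sizes differ slightly because of filesystem formatting overhead:

```python
# Sketch of the initial-size rule described above: tempdb data files share
# roughly 80% of instance-store capacity, with the lesser of 20% of
# capacity or 200 GB kept free to start. Log files always start at 8 MB.
def initial_tempdb_data_size(capacity_gb: float, data_files: int = 1) -> float:
    """Return the approximate initial size, in GB, of each tempdb data file."""
    free_gb = min(0.2 * capacity_gb, 200)
    return (capacity_gb - free_gb) / data_files

# For a hypothetical 300 GB instance store with one data file,
# 60 GB (20%) is kept free, leaving about 240 GB for tempdb.mdf.
```

With four data files on the same hypothetical 300 GB store, each file would start at about 60 GB; on a 2,000 GB store, the 200 GB cap applies instead of the 20 percent rule.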

## Backup considerations
<a name="SQLServer.InstanceStore.Backups"></a>

You might need to retain backups for long periods, incurring costs over time. The `tempdb` data and log blocks can change very often depending on the workload. This can greatly increase the DB snapshot size.

When `tempdb` is on the instance store, snapshots don't include temporary files. This means that snapshot sizes are smaller and consume less of the free backup allocation compared to EBS-only storage.

## Disk full errors
<a name="SQLServer.InstanceStore.DiskFull"></a>

If you use all of the available space in the instance store, you might receive errors such as the following:
+ The transaction log for database 'tempdb' is full due to 'ACTIVE_TRANSACTION'.
+ Could not allocate space for object 'dbo.SORT temporary run storage: 140738941419520' in database 'tempdb' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

You can do one or more of the following when the instance store is full:
+ Adjust your workload or the way you use `tempdb`.
+ Scale up to use a DB instance class with more NVMe storage.
+ Stop using the instance store, and use an instance class with only EBS storage.
+ Use a mixed mode by adding secondary data or log files for `tempdb` on the EBS volume.

## Removing the instance store
<a name="SQLServer.InstanceStore.Disable"></a>

To remove the instance store, modify your SQL Server DB instance to use an instance type that doesn't support instance store, such as db.m5, db.r5, or db.x1e.

**Note**  
When you remove the instance store, the temporary files are moved to the `D:\rdsdbdata\DATA` directory and reduced in size to 8 MB.

# Using extended events with Amazon RDS for Microsoft SQL Server
<a name="SQLServer.ExtendedEvents"></a>

You can use extended events in Microsoft SQL Server to capture debugging and troubleshooting information for Amazon RDS for SQL Server. Extended events replace SQL Trace and SQL Server Profiler, which Microsoft has deprecated. Extended events are similar to profiler traces but with more granular control over the events being traced. Extended events are supported for SQL Server versions 2016 and later on Amazon RDS. For more information, see [Extended events overview](https://docs.microsoft.com/en-us/sql/relational-databases/extended-events/extended-events) in the Microsoft documentation.

Extended events are turned on automatically for users with master user privileges in Amazon RDS for SQL Server.

**Topics**
+ [Limitations and recommendations](#SQLServer.ExtendedEvents.Limits)
+ [Configuring extended events on RDS for SQL Server](#SQLServer.ExtendedEvents.Config)
+ [Considerations for Multi-AZ deployments](#SQLServer.ExtendedEvents.MAZ)
+ [Querying extended event files](#SQLServer.ExtendedEvents.Querying)

## Limitations and recommendations
<a name="SQLServer.ExtendedEvents.Limits"></a>

When using extended events on RDS for SQL Server, the following limitations apply:
+ Extended events are supported only for the Enterprise and Standard Editions.
+ You can't alter default extended event sessions.
+ Make sure to set the session memory partition mode to `NONE`.
+ Session event retention mode can be either `ALLOW_SINGLE_EVENT_LOSS` or `ALLOW_MULTIPLE_EVENT_LOSS`.
+ Event Tracing for Windows (ETW) targets aren't supported.
+ Make sure that file targets are in the `D:\rdsdbdata\log` directory.
+ For pair matching targets, set the `respond_to_memory_pressure` property to `1`.
+ Ring buffer target memory can't be greater than 4 MB.
+ The following actions aren't supported:
  + `debug_break`
  + `create_dump_all_threads`
  + `create_dump_single_threads`
+ The `rpc_completed` event is supported on the following versions and later: 15.0.4083.2, 14.0.3370.1, 13.0.5865.1, 12.0.6433.1, 11.0.7507.2.
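The `rpc_completed` minimums above amount to a per-major-version build check. The following is a minimal sketch of such a check (a hypothetical helper, not an RDS API; later major versions would need their own entries):

```python
# Minimum builds for rpc_completed support, from the list above,
# keyed by major version.
RPC_COMPLETED_MINIMUMS = {
    15: (15, 0, 4083, 2),
    14: (14, 0, 3370, 1),
    13: (13, 0, 5865, 1),
    12: (12, 0, 6433, 1),
    11: (11, 0, 7507, 2),
}

def supports_rpc_completed(version: str) -> bool:
    """Check a dotted SQL Server build string, e.g. '15.0.4083.2'."""
    parts = tuple(int(p) for p in version.split("."))
    minimum = RPC_COMPLETED_MINIMUMS.get(parts[0])
    # Tuple comparison checks build components left to right.
    return minimum is not None and parts >= minimum
```

For example, build `14.0.3300.0` falls below the `14.0.3370.1` minimum, while `13.0.5865.1` meets its minimum exactly.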

## Configuring extended events on RDS for SQL Server
<a name="SQLServer.ExtendedEvents.Config"></a>

On RDS for SQL Server, you can configure the values of certain parameters of extended event sessions. The following table describes the configurable parameters.


| Parameter name | Description | RDS default value | Minimum value | Maximum value | 
| --- | --- | --- | --- | --- | 
| xe\_session\_max\_memory | Specifies the maximum amount of memory to allocate to the session for event buffering. This value corresponds to the max\_memory setting of the event session. | 4 MB | 4 MB | 8 MB | 
| xe\_session\_max\_event\_size | Specifies the maximum memory size allowed for large events. This value corresponds to the max\_event\_size setting of the event session. | 4 MB | 4 MB | 8 MB | 
| xe\_session\_max\_dispatch\_latency | Specifies the amount of time that events are buffered in memory before being dispatched to extended event session targets. This value corresponds to the max\_dispatch\_latency setting of the event session. | 30 seconds | 1 second | 30 seconds | 
| xe\_file\_target\_size | Specifies the maximum size of the file target. This value corresponds to the max\_file\_size setting of the file target. | 100 MB | 10 MB | 1 GB | 
| xe\_file\_retention | Specifies the retention time in days for files generated by the file targets of event sessions. | 7 days | 0 days | 7 days | 

**Note**  
Setting `xe_file_retention` to zero causes .xel files to be removed automatically after the lock on these files is released by SQL Server. The lock is released whenever an .xel file reaches the size limit set in `xe_file_target_size`.

You can use the `rdsadmin.dbo.rds_show_configuration` stored procedure to show the current values of these parameters. For example, use the following SQL statement to view the current setting of `xe_session_max_memory`.

```
exec rdsadmin.dbo.rds_show_configuration 'xe_session_max_memory'
```

You can use the `rdsadmin.dbo.rds_set_configuration` stored procedure to modify these parameters. For example, use the following SQL statement to set `xe_session_max_memory` to 4 MB.

```
exec rdsadmin.dbo.rds_set_configuration 'xe_session_max_memory', 4
```

## Considerations for Multi-AZ deployments
<a name="SQLServer.ExtendedEvents.MAZ"></a>

When you create an extended event session on a primary DB instance, it doesn't propagate to the standby replica. You can fail over and create the extended event session on the new primary DB instance. Or you can remove and then re-add the Multi-AZ configuration to propagate the extended event session to the standby replica. RDS stops all nondefault extended event sessions on the standby replica, so that these sessions don't consume resources on the standby. Because of this, after a standby replica becomes the primary DB instance, make sure to manually start the extended event sessions on the new primary.

**Note**  
This approach applies to both Always On Availability Groups and Database Mirroring.

You can also use a SQL Server Agent job to track the standby replica and start the sessions if the standby becomes the primary. For example, use the following query in your SQL Server Agent job step to restart event sessions on a primary DB instance.

```
BEGIN
    IF (DATABASEPROPERTYEX('rdsadmin','Updateability')='READ_WRITE'
    AND DATABASEPROPERTYEX('rdsadmin','status')='ONLINE'
    AND (DATABASEPROPERTYEX('rdsadmin','Collation') IS NOT NULL OR DATABASEPROPERTYEX('rdsadmin','IsAutoClose')=1)
    )
    BEGIN
        IF NOT EXISTS (SELECT 1 FROM sys.dm_xe_sessions WHERE name='xe1')
            ALTER EVENT SESSION xe1 ON SERVER STATE=START
        IF NOT EXISTS (SELECT 1 FROM sys.dm_xe_sessions WHERE name='xe2')
            ALTER EVENT SESSION xe2 ON SERVER STATE=START
    END
END
```

This query restarts the event sessions `xe1` and `xe2` on a primary DB instance if these sessions are in a stopped state. You can also add a schedule with a convenient interval to this query.

## Querying extended event files
<a name="SQLServer.ExtendedEvents.Querying"></a>

You can use either SQL Server Management Studio or the `sys.fn_xe_file_target_read_file` function to view data from extended events that use file targets. For more information on this function, see [sys.fn\_xe\_file\_target\_read\_file (Transact-SQL)](https://docs.microsoft.com/en-us/sql/relational-databases/system-functions/sys-fn-xe-file-target-read-file-transact-sql) in the Microsoft documentation.

Extended event file targets can only write files to the `D:\rdsdbdata\log` directory on RDS for SQL Server.

As an example, use the following SQL query to list the contents of all files of extended event sessions whose names start with `xe`.

```
SELECT * FROM sys.fn_xe_file_target_read_file('d:\rdsdbdata\log\xe*', null,null,null);
```

# Access to transaction log backups with RDS for SQL Server
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess"></a>

With access to transaction log backups for RDS for SQL Server, you can list the transaction log backup files for a database and copy them to a target Amazon S3 bucket. By copying transaction log backups to an Amazon S3 bucket, you can use them in combination with full and differential database backups to perform point-in-time database restores. You use RDS stored procedures to set up access to transaction log backups, list available transaction log backups, and copy them to your Amazon S3 bucket.

Access to transaction log backups provides the following capabilities and benefits:
+ List and view the metadata of available transaction log backups for a database on an RDS for SQL Server DB instance.
+ Copy available transaction log backups from RDS for SQL Server to a target Amazon S3 bucket.
+ Perform point-in-time restores of databases without the need to restore an entire DB instance. For more information on restoring a DB instance to a point in time, see [Restoring a DB instance to a specified time for Amazon RDS](USER_PIT.md).

## Availability and support
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.Availability"></a>

Access to transaction log backups is supported in all AWS Regions, for all editions and versions of Microsoft SQL Server supported on Amazon RDS.

## Requirements
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.Requirements"></a>

The following requirements must be met before enabling access to transaction log backups: 
+  Automated backups must be enabled on the DB instance and the backup retention must be set to a value of one or more days. For more information on enabling automated backups and configuring a retention policy, see [Enabling automated backups](USER_WorkingWithAutomatedBackups.Enabling.md). 
+ An Amazon S3 bucket must exist in the same account and Region as the source DB instance. Before enabling access to transaction log backups, choose an existing Amazon S3 bucket or [create a new bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingaBucket.html) to use for your transaction log backup files.
+ An Amazon S3 bucket permissions policy must be configured as follows to allow Amazon RDS to copy transaction log files into it:

  1. Set the object ownership property on the bucket to **Bucket owner preferred**.

  1. Add the following policy. There is no policy by default, so edit the bucket permissions to add it.

  

  The following example uses an ARN to specify a resource. We recommend using the `SourceArn` and `SourceAccount` global condition context keys in resource-based trust relationships to limit the service's permissions to a specific resource. For more information on working with ARNs, see [Amazon Resource Names (ARNs)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) and [Amazon Resource Names (ARNs) in Amazon RDS](USER_Tagging.ARN.md).

    
**Example of an Amazon S3 permissions policy for access to transaction log backups**  

------
#### [ JSON ]


  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "Only allow writes to my bucket with bucket owner full control",
              "Effect": "Allow",
              "Principal": {
                  "Service": "backups.rds.amazonaws.com"
              },
              "Action": "s3:PutObject",
              "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/{customer_path}/*",
              "Condition": {
                  "StringEquals": {
                      "s3:x-amz-acl": "bucket-owner-full-control",
                      "aws:sourceAccount": "{customer_account}",
                      "aws:sourceArn": "{db_instance_arn}"
                  }
              }
          }
      ]
  }
  ```

------
+ An AWS Identity and Access Management (IAM) role to access the Amazon S3 bucket. If you already have an IAM role, you can use that. You can choose to have a new IAM role created for you when you add the `SQLSERVER_BACKUP_RESTORE` option by using the AWS Management Console. Alternatively, you can create a new one manually. For more information on creating and configuring an IAM role with `SQLSERVER_BACKUP_RESTORE`, see [Manually creating an IAM role for native backup and restore](SQLServer.Procedural.Importing.Native.Enabling.md#SQLServer.Procedural.Importing.Native.Enabling.IAM).
+ The `SQLSERVER_BACKUP_RESTORE` option must be added to an option group on your DB instance. For more information on adding the `SQLSERVER_BACKUP_RESTORE` option, see [Support for native backup and restore in SQL Server](Appendix.SQLServer.Options.BackupRestore.md).
**Note**  
If your DB instance has storage encryption enabled, the AWS KMS actions and key must be provided in the IAM role used by the native backup and restore option group.

  Optionally, if you intend to use the `rds_restore_log` stored procedure to perform point-in-time database restores, we recommend using the same Amazon S3 path for the native backup and restore option group and access to transaction log backups. This method ensures that when Amazon RDS assumes the role from the option group to perform the restore log functions, it has access to retrieve transaction log backups from the same Amazon S3 path.
+ If the DB instance is encrypted, regardless of encryption type (AWS managed key or customer managed key), you must provide a customer managed KMS key in the IAM role and in the `rds_tlog_backup_copy_to_S3` stored procedure. 
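The bucket policy from the requirements above can be generated programmatically. The following is a minimal sketch (not an official AWS utility) that fills in the policy template; `bucket`, `path`, `account_id`, and `db_instance_arn` are values you supply:

```python
# Build the S3 bucket policy from the template in the requirements above.
# The Sid, principal, action, and condition keys follow the example policy;
# the bucket, path, account ID, and instance ARN are caller-supplied.
def tlog_bucket_policy(bucket: str, path: str,
                       account_id: str, db_instance_arn: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "Only allow writes to my bucket with bucket owner full control",
                "Effect": "Allow",
                "Principal": {"Service": "backups.rds.amazonaws.com"},
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/{path}/*",
                "Condition": {
                    "StringEquals": {
                        "s3:x-amz-acl": "bucket-owner-full-control",
                        "aws:sourceAccount": account_id,
                        "aws:sourceArn": db_instance_arn,
                    }
                },
            }
        ],
    }
```

You could serialize the result with `json.dumps` and attach it to the bucket with your usual tooling; the condition keys tie the permission to one account and one DB instance ARN.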

## Limitations and recommendations
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.Limitations"></a>

Access to transaction log backups has the following limitations and recommendations:
+  You can list and copy up to the last seven days of transaction log backups for any DB instance that has backup retention configured between one and 35 days. 
+  The Amazon S3 bucket used for access to transaction log backups must exist in the same account and Region as the source DB instance. Cross-account and cross-Region copying isn't supported. 
+  Only one Amazon S3 bucket can be configured as a target to copy transaction log backups into. You can choose a new target Amazon S3 bucket with the `rds_tlog_copy_setup` stored procedure. For more information on choosing a new target Amazon S3 bucket, see [Setting up access to transaction log backups](USER.SQLServer.AddlFeat.TransactionLogAccess.Enabling.md).
+  You cannot specify the KMS key when using the `rds_tlog_backup_copy_to_S3` stored procedure if your RDS instance is not enabled for storage encryption. 
+  Multi-account copying is not supported. The IAM role used for copying will only permit write access to Amazon S3 buckets within the owner account of the DB instance. 
+  Only two concurrent tasks of any type may be run on an RDS for SQL Server DB instance. 
+  Only one copy task can run for a single database at a given time. If you want to copy transaction log backups for multiple databases on the DB instance, use a separate copy task for each database. 
+  If you copy a transaction log backup that already exists with the same name in the Amazon S3 bucket, the existing transaction log backup will be overwritten. 
+  You can only run the stored procedures that are provided with access to transaction log backups on the primary DB instance. You can’t run these stored procedures on an RDS for SQL Server read replica or on a secondary instance of a Multi-AZ DB cluster. 
+  If the RDS for SQL Server DB instance is rebooted while the `rds_tlog_backup_copy_to_S3` stored procedure is running, the task will automatically restart from the beginning when the DB instance is back online. Any transaction log backups that had been copied to the Amazon S3 bucket while the task was running before the reboot will be overwritten. 
+ The Microsoft SQL Server system databases and the `RDSAdmin` database cannot be configured for access to transaction log backups.
+  Copying to buckets encrypted by SSE-KMS isn't supported. 

# Setting up access to transaction log backups
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.Enabling"></a>

To set up access to transaction log backups, complete the list of requirements in the [Requirements](USER.SQLServer.AddlFeat.TransactionLogAccess.md#USER.SQLServer.AddlFeat.TransactionLogAccess.Requirements) section, and then run the `rds_tlog_copy_setup` stored procedure. The procedure will enable the access to transaction log backups feature at the DB instance level. You don't need to run it for each individual database on the DB instance. 

**Important**  
The database user must be granted the `db_owner` role within SQL Server on each database to configure and use the access to transaction log backups feature.

**Example usage:**  

```
exec msdb.dbo.rds_tlog_copy_setup
@target_s3_arn='arn:aws:s3:::amzn-s3-demo-bucket/myfolder';
```

The following parameter is required:
+ `@target_s3_arn` – The ARN of the target Amazon S3 bucket to copy transaction log backups files to.
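You might validate the ARN client-side before calling the procedure. The following is a minimal sketch (a hypothetical helper, not part of RDS) that splits an S3 ARN of the form `arn:aws:s3:::bucket_name/key_prefix` into its bucket and folder path:

```python
# Split a target S3 ARN, as passed to rds_tlog_copy_setup, into its
# bucket name and folder path. S3 ARNs omit region and account ID.
def parse_s3_arn(arn: str) -> tuple:
    prefix = "arn:aws:s3:::"
    if not arn.startswith(prefix):
        raise ValueError("not an S3 ARN")
    bucket, _, path = arn[len(prefix):].partition("/")
    return bucket, path
```

For example, `parse_s3_arn("arn:aws:s3:::amzn-s3-demo-bucket/myfolder")` returns `("amzn-s3-demo-bucket", "myfolder")`; an ARN without a folder yields an empty path.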

**Example of setting an Amazon S3 target bucket:**  

```
exec msdb.dbo.rds_tlog_copy_setup @target_s3_arn='arn:aws:s3:::amzn-s3-demo-logging-bucket/mytestdb1';
```

To validate the configuration, call the `rds_show_configuration` stored procedure.

**Example of validating the configuration:**  

```
exec rdsadmin.dbo.rds_show_configuration @name='target_s3_arn_for_tlog_copy';
```

To modify access to transaction log backups to point to a different Amazon S3 bucket, you can view the current Amazon S3 bucket value and re-run the `rds_tlog_copy_setup` stored procedure using a new value for the `@target_s3_arn`.

**Example of viewing the existing Amazon S3 bucket configured for access to transaction log backups**  

```
exec rdsadmin.dbo.rds_show_configuration @name='target_s3_arn_for_tlog_copy';
```

**Example of updating to a new target Amazon S3 bucket**  

```
exec msdb.dbo.rds_tlog_copy_setup @target_s3_arn='arn:aws:s3:::amzn-s3-demo-logging-bucket1/mynewfolder';
```

# Listing available transaction log backups
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.Listing"></a>

With RDS for SQL Server, transaction log backups are automatically enabled for databases that use the full recovery model on DB instances with backup retention set to one or more days. By enabling access to transaction log backups, you make up to seven days of those transaction log backups available to copy into your Amazon S3 bucket.

After you have enabled access to transaction log backups, you can start using it to list and copy available transaction log backup files.

**Listing transaction log backups**

To list all transaction log backups available for an individual database, call the `rds_fn_list_tlog_backup_metadata` function. You can use an `ORDER BY` or a `WHERE` clause when calling the function.

**Example of listing and filtering available transaction log backup files**  

```
SELECT * from msdb.dbo.rds_fn_list_tlog_backup_metadata('mydatabasename');
SELECT * from msdb.dbo.rds_fn_list_tlog_backup_metadata('mydatabasename') WHERE rds_backup_seq_id = 3507;
SELECT * from msdb.dbo.rds_fn_list_tlog_backup_metadata('mydatabasename') WHERE backup_file_time_utc > '2022-09-15 20:44:01' ORDER BY backup_file_time_utc DESC;
```

![\[Output from rds_fn_list_tlog_backup_metadata\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/sql_accesstransactionlogs_func.png)


The `rds_fn_list_tlog_backup_metadata` function returns the following output:



| Column name | Data type | Description | 
| --- | --- | --- | 
| `db_name` | sysname | The database name provided to list the transaction log backups for. | 
| `db_id` | int | The internal database identifier for the input parameter `db_name`. | 
| `family_guid` | uniqueidentifier | The unique ID of the original database at creation. This value remains the same when the database is restored, even to a different database name. | 
| `rds_backup_seq_id` | int | The ID that RDS uses internally to maintain a sequence number for each transaction log backup file. | 
| `backup_file_epoch` | bigint | The epoch time that a transaction backup file was generated. | 
| `backup_file_time_utc` | datetime | The UTC time-converted value for the `backup_file_epoch` value. | 
| `starting_lsn` | numeric(25,0) | The log sequence number of the first or oldest log record of a transaction log backup file. | 
| `ending_lsn` | numeric(25,0) | The log sequence number of the last or next log record of a transaction log backup file. | 
| `is_log_chain_broken` | bit | A boolean value indicating if the log chain is broken between the current transaction log backup file and the previous transaction log backup file. | 
| `file_size_bytes` | bigint | The size of the transactional backup set in bytes. | 
| `Error` | varchar(4000) | Error message if the `rds_fn_list_tlog_backup_metadata` function throws an exception. NULL if no exceptions. | 

# Copying transaction log backups
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.Copying"></a>

To copy a set of available transaction log backups for an individual database to your Amazon S3 bucket, call the `rds_tlog_backup_copy_to_S3` stored procedure. The `rds_tlog_backup_copy_to_S3` stored procedure initiates a new task to copy transaction log backups. 

**Note**  
The `rds_tlog_backup_copy_to_S3` stored procedure copies the transaction log backups without validating the `is_log_chain_broken` attribute. For this reason, manually confirm an unbroken log chain before running the `rds_tlog_backup_copy_to_S3` stored procedure. For more information, see [Validating the transaction log backup log chain](#USER.SQLServer.AddlFeat.TransactionLogAccess.Copying.LogChain).

**Example usage of the `rds_tlog_backup_copy_to_S3` stored procedure**  

```
exec msdb.dbo.rds_tlog_backup_copy_to_S3
    @db_name='mydatabasename',
    [@kms_key_arn='arn:aws:kms:region:account-id:key/key-id'],
    [@backup_file_start_time='2022-09-01 01:00:15'],
    [@backup_file_end_time='2022-09-01 21:30:45'],
    [@starting_lsn=149000000112100001],
    [@ending_lsn=149000000120400001],
    [@rds_backup_starting_seq_id=5],
    [@rds_backup_ending_seq_id=10];
```

The following input parameters are available:



| Parameter | Description | 
| --- | --- | 
| `@db_name` | The name of the database to copy transaction log backups for. | 
| `@kms_key_arn` |  A customer managed KMS key. If you encrypt your DB instance with an AWS managed KMS key, you must create a customer managed key. If you encrypt your DB instance with a customer managed key, you can use the same KMS key ARN. | 
| `@backup_file_start_time` | The UTC timestamp as provided from the `[backup_file_time_utc]` column of the `rds_fn_list_tlog_backup_metadata` function. | 
| `@backup_file_end_time` | The UTC timestamp as provided from the `[backup_file_time_utc]` column of the `rds_fn_list_tlog_backup_metadata` function. | 
| `@starting_lsn` | The log sequence number (LSN) as provided from the `[starting_lsn]` column of the `rds_fn_list_tlog_backup_metadata` function. | 
| `@ending_lsn` | The log sequence number (LSN) as provided from the `[ending_lsn]` column of the `rds_fn_list_tlog_backup_metadata` function. | 
| `@rds_backup_starting_seq_id` | The sequence ID as provided from the `[rds_backup_seq_id]` column of the `rds_fn_list_tlog_backup_metadata` function. | 
| `@rds_backup_ending_seq_id` | The sequence ID as provided from the `[rds_backup_seq_id]` column of the `rds_fn_list_tlog_backup_metadata` function. | 

You can specify a set of either the time, LSN, or sequence ID parameters. Only one set of parameters is required.

You can also specify just a single parameter in any of the sets. For example, by providing a value for only the `backup_file_end_time` parameter, all available transaction log backup files prior to that time within the seven-day limit will be copied to your Amazon S3 bucket. 

Following are the valid input parameter combinations for the `rds_tlog_backup_copy_to_S3` stored procedure.



| Parameters provided | Expected result | 
| --- | --- | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />       @db_name='testdb1',<br />       @backup_file_start_time='2022-08-23 00:00:00',<br />       @backup_file_end_time='2022-08-30 00:00:00';</pre>  | Copies transaction log backups from the last seven days that exist between the provided range of `backup_file_start_time` and `backup_file_end_time`. In this example, the stored procedure will copy transaction log backups that were generated between '2022-08-23 00:00:00' and '2022-08-30 00:00:00'.  | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />           @db_name = 'testdb1',<br />           @backup_file_start_time='2022-08-23 00:00:00';</pre>  | Copies transaction log backups from the last seven days and starting from the provided `backup_file_start_time`. In this example, the stored procedure will copy transaction log backups from '2022-08-23 00:00:00' up to the latest transaction log backup.  | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />          @db_name = 'testdb1',<br />          @backup_file_end_time='2022-08-30 00:00:00';</pre>  | Copies transaction log backups from the last seven days up to the provided `backup_file_end_time`. In this example, the stored procedure will copy transaction log backups from '2022-08-23 00:00:00' up to '2022-08-30 00:00:00'.  | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />         @db_name='testdb1',<br />         @starting_lsn =1490000000040007,<br />         @ending_lsn =  1490000000050009;</pre>  | Copies transaction log backups that are available from the last seven days and are between the provided range of the `starting_lsn` and `ending_lsn`. In this example, the stored procedure will copy transaction log backups from the last seven days with an LSN range between 1490000000040007 and 1490000000050009.   | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />        @db_name='testdb1',<br />        @starting_lsn =1490000000040007;</pre>  |  Copies transaction log backups that are available from the last seven days, beginning from the provided `starting_lsn`. In this example, the stored procedure will copy transaction log backups from LSN 1490000000040007 up to the latest transaction log backup.   | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />        @db_name='testdb1',<br />        @ending_lsn  =1490000000050009;</pre>  |  Copies transaction log backups that are available from the last seven days, up to the provided `ending_lsn`. In this example, the stored procedure will copy transaction log backups beginning from the last seven days up to LSN 1490000000050009.   | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />       @db_name='testdb1',<br />       @rds_backup_starting_seq_id= 2000,<br />       @rds_backup_ending_seq_id= 5000;</pre>  |  Copies transaction log backups that are available from the last seven days and exist between the provided range of `rds_backup_starting_seq_id` and `rds_backup_ending_seq_id`. In this example, the stored procedure will copy transaction log backups from the last seven days within the provided RDS backup sequence ID range, starting from seq\_id 2000 up to seq\_id 5000.   | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />       @db_name='testdb1',<br />       @rds_backup_starting_seq_id= 2000;</pre>  |  Copies transaction log backups that are available from the last seven days, beginning from the provided `rds_backup_starting_seq_id`. In this example, the stored procedure will copy transaction log backups beginning from seq\$1id 2000, up to the latest transaction log backup.   | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />      @db_name='testdb1',<br />      @rds_backup_ending_seq_id= 5000;</pre>  |  Copies transaction log backups that are available from the last seven days, up to the provided `rds_backup_ending_seq_id`. In this example, the stored procedure will copy transaction log backups beginning from the last seven days, up to seq\$1id 5000.   | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />      @db_name='testdb1',<br />      @rds_backup_starting_seq_id= 2000;<br />      @rds_backup_ending_seq_id= 2000;</pre>  |  Copies a single transaction log backup with the provided `rds_backup_starting_seq_id`, if available within the last seven days. In this example, the stored procedure will copy a single transaction log backup that has a seq\$1id of 2000, if it exists within the last seven days.   | 

## Validating the transaction log backup log chain
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.Copying.LogChain"></a>

Databases configured for access to transaction log backups must have automated backup retention enabled. Automated backup retention sets the databases on the DB instance to the `FULL` recovery model. To support point-in-time restore for a database, avoid changing the database recovery model, which can result in a broken log chain. We recommend keeping the database set to the `FULL` recovery model.
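To check which recovery model a database is currently using, you can query the standard `sys.databases` catalog view. In this sketch, `testdb1` is a placeholder database name:

```
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'testdb1';
```

A `recovery_model_desc` value of `FULL` indicates that the database supports an unbroken log chain for point-in-time restore.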

To manually validate the log chain before copying transaction log backups, call the `rds_fn_list_tlog_backup_metadata` function and review the values in the `is_log_chain_broken` column. A value of `1` indicates that the log chain was broken between the current log backup and the previous log backup.
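For example, the following query returns only the backups where a break in the log chain was detected. This is a sketch that assumes the function accepts a `@db_name` parameter, consistent with the other examples in this section; `testdb1` is a placeholder database name:

```
SELECT rds_sequence_id, first_lsn, last_lsn, is_log_chain_broken
FROM msdb.dbo.rds_fn_list_tlog_backup_metadata(@db_name='testdb1')
WHERE is_log_chain_broken = 1;
```

An empty result set indicates that the log chain is intact for the retained backups.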

The following example shows a broken log chain in the output from the `rds_fn_list_tlog_backup_metadata` function.

![\[Output from rds_fn_list_tlog_backup_metadata showing a broken log chain.\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/sql_accesstransactionlogs_logchain_error.png)


In a normal log chain, the log sequence number (LSN) value of `first_lsn` for a given `rds_sequence_id` should match the value of `last_lsn` in the preceding `rds_sequence_id`. In the image, the `rds_sequence_id` of 45 has a `first_lsn` value of 90987, which does not match the `last_lsn` value of 90985 for the preceding `rds_sequence_id` of 44.

For more information about SQL Server transaction log architecture and log sequence numbers, see [Transaction Log Logical Architecture](https://learn.microsoft.com/en-us/sql/relational-databases/sql-server-transaction-log-architecture-and-management-guide?view=sql-server-ver15#Logical_Arch) in the Microsoft SQL Server documentation.

# Amazon S3 bucket folder and file structure
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.S3namingConvention"></a>

Transaction log backups have the following standard structure and naming convention within an Amazon S3 bucket:
+ A new folder is created under the `target_s3_arn` path for each database, with the naming structure `{db_id}.{family_guid}`.
+ Within the folder, transaction log backups have the filename structure `{db_id}.{family_guid}.{rds_backup_seq_id}.{backup_file_epoch}`.
+ You can view the values of `family_guid`, `db_id`, `rds_backup_seq_id`, and `backup_file_epoch` with the `rds_fn_list_tlog_backup_metadata` function.

The following example shows the folder and file structure of a set of transaction log backups within an Amazon S3 bucket.

![\[Amazon S3 bucket structure with access to transaction logs\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/sql_accesstransactionlogs_s3.png)
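Based on this naming convention, a copied backup file for a database with a `db_id` of 5, a `family_guid` of `aa1b2c3d`, an `rds_backup_seq_id` of 32, and a `backup_file_epoch` of 1661824321 (all illustrative placeholder values, not taken from a real instance) would be stored under a key such as:

```
5.aa1b2c3d/5.aa1b2c3d.32.1661824321
```

The first segment is the per-database folder, and the rest is the backup file name.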


# Tracking the status of tasks
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.TrackTaskStatus"></a>

 To track the status of your copy tasks, call the `rds_task_status` stored procedure. If you don't provide any parameters, the stored procedure returns the status of all tasks. 

**Example usage:**  

```
exec msdb.dbo.rds_task_status
  @db_name='database_name',
  @task_id=ID_number;
```

The following parameters are optional:
+ `@db_name` – The name of the database to show the task status for.
+ `@task_id` – The ID of the task to show the task status for.

**Example of listing the status for a specific task ID:**  

```
exec msdb.dbo.rds_task_status @task_id=5;
```

**Example of listing the status for a specific database and task:**  

```
exec msdb.dbo.rds_task_status @db_name='my_database', @task_id=5;
```

**Example of listing all tasks and their status for a specific database:**  

```
exec msdb.dbo.rds_task_status @db_name='my_database';
```

**Example of listing all tasks and their status on the current DB instance:**  

```
exec msdb.dbo.rds_task_status;
```

# Canceling a task
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.CancelTask"></a>

To cancel a running task, call the `rds_cancel_task` stored procedure.

**Example usage:**  

```
exec msdb.dbo.rds_cancel_task @task_id=ID_number;
```

The following parameter is required:
+ `@task_id` – The ID of the task to cancel. You can view the task ID by calling the `rds_task_status` stored procedure.

For more information on viewing and canceling running tasks, see [Importing and exporting SQL Server databases using native backup and restore](SQLServer.Procedural.Importing.md).

# Troubleshooting access to transaction log backups
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.Troubleshooting"></a>

The following are issues you might encounter when you use the stored procedures for access to transaction log backups.



| Stored procedure | Error message | Issue | Troubleshooting suggestions | 
| --- | --- | --- | --- | 
| rds\_tlog\_copy\_setup | Backups are disabled on this DB instance. Enable DB instance backups with a retention of at least "1" and try again. | Automated backups are not enabled for the DB instance. |  DB instance backup retention must be enabled with a retention of at least one day. For more information on enabling automated backups and configuring backup retention, see [Backup retention period](USER_WorkingWithAutomatedBackups.BackupRetention.md).  | 
| rds\_tlog\_copy\_setup | Error running the rds\_tlog\_copy\_setup stored procedure. Reconnect to the RDS endpoint and try again. | An internal error occurred. | Reconnect to the RDS endpoint and run the `rds_tlog_copy_setup` stored procedure again. | 
| rds\_tlog\_copy\_setup | Running the rds\_tlog\_backup\_copy\_setup stored procedure inside a transaction is not supported. Verify that the session has no open transactions and try again.  | The stored procedure was attempted within a transaction using `BEGIN` and `END`. | Avoid using `BEGIN` and `END` when running the `rds_tlog_copy_setup` stored procedure. | 
| rds\_tlog\_copy\_setup | The S3 bucket name for the input parameter `@target_s3_arn` should contain at least one character other than a space.  | An incorrect value was provided for the input parameter `@target_s3_arn`. | Ensure the input parameter `@target_s3_arn` specifies the complete Amazon S3 bucket ARN. | 
| rds\_tlog\_copy\_setup | The `SQLSERVER_BACKUP_RESTORE` option isn't enabled or is in the process of being enabled. Enable the option or try again later.  | The `SQLSERVER_BACKUP_RESTORE` option is not enabled on the DB instance or was just enabled and is pending internal activation. | Enable the `SQLSERVER_BACKUP_RESTORE` option as specified in the Requirements section. Wait a few minutes and run the `rds_tlog_copy_setup` stored procedure again. | 
| rds\_tlog\_copy\_setup | The target S3 arn for the input parameter `@target_s3_arn` can't be empty or null.  | A `NULL` value was provided for the input parameter `@target_s3_arn`, or no value was provided. | Ensure the input parameter `@target_s3_arn` specifies the complete Amazon S3 bucket ARN. | 
| rds\_tlog\_copy\_setup | The target S3 arn for the input parameter `@target_s3_arn` must begin with arn:aws.  | The input parameter `@target_s3_arn` was provided without the `arn:aws` prefix. | Ensure the input parameter `@target_s3_arn` specifies the complete Amazon S3 bucket ARN. | 
| rds\_tlog\_copy\_setup | The target S3 ARN is already set to the provided value.  | The `rds_tlog_copy_setup` stored procedure previously ran and was configured with an Amazon S3 bucket ARN. | To modify the Amazon S3 bucket value for access to transaction log backups, provide a different target S3 ARN. | 
| rds\_tlog\_copy\_setup | Unable to generate credentials for enabling Access to Transaction Log Backups. Confirm the S3 path ARN provided with `rds_tlog_copy_setup`, and try again later.  | There was an unspecified error while generating credentials to enable access to transaction log backups. | Review your setup configuration and try again.  | 
| rds\_tlog\_copy\_setup | You cannot run the rds\_tlog\_copy\_setup stored procedure while there are pending tasks. Wait for the pending tasks to complete and try again.  | Only two tasks may run at any time. There are pending tasks awaiting completion. | View pending tasks and wait for them to complete. For more information on monitoring task status, see [Tracking the status of tasks](USER.SQLServer.AddlFeat.TransactionLogAccess.TrackTaskStatus.md).  | 
| rds\_tlog\_backup\_copy\_to\_S3 | A T-log backup file copy task has already been issued for database: %s with task Id: %d, please try again later.  | Only one copy task may run at any time for a given database. There is a pending copy task awaiting completion. | View pending tasks and wait for them to complete. For more information on monitoring task status, see [Tracking the status of tasks](USER.SQLServer.AddlFeat.TransactionLogAccess.TrackTaskStatus.md).  | 
| rds\_tlog\_backup\_copy\_to\_S3 | At least one of these three parameter sets must be provided. SET-1:(@backup\_file\_start\_time, @backup\_file\_end\_time) \| SET-2:(@starting\_lsn, @ending\_lsn) \| SET-3:(@rds\_backup\_starting\_seq\_id, @rds\_backup\_ending\_seq\_id)  | None of the three parameter sets were provided, or a provided parameter set is missing a required parameter. | You can specify the time, LSN, or sequence ID parameters. Exactly one of these three parameter sets is required. For more information on required parameters, see [Copying transaction log backups](USER.SQLServer.AddlFeat.TransactionLogAccess.Copying.md). | 
| rds\_tlog\_backup\_copy\_to\_S3 | Backups are disabled on your instance. Please enable backups and try again in some time. | Automated backups are not enabled for the DB instance. |  For more information on enabling automated backups and configuring backup retention, see [Backup retention period](USER_WorkingWithAutomatedBackups.BackupRetention.md).  | 
| rds\_tlog\_backup\_copy\_to\_S3 | Cannot find the given database %s.  | The value provided for the input parameter `@db_name` does not match a database name on the DB instance. | Use the correct database name. To list all databases by name, run `SELECT * FROM sys.databases`. | 
| rds\_tlog\_backup\_copy\_to\_S3 | Cannot run the rds\_tlog\_backup\_copy\_to\_S3 stored procedure for SQL Server system databases or the rdsadmin database.  | The value provided for the input parameter `@db_name` matches a SQL Server system database name or the RDSAdmin database. | The following databases can't be used with access to transaction log backups: `master`, `model`, `msdb`, `tempdb`, and `RDSAdmin`.  | 
| rds\_tlog\_backup\_copy\_to\_S3 | Database name for the input parameter @db\_name can't be empty or null.  | The value provided for the input parameter `@db_name` was empty or `NULL`. | Use the correct database name. To list all databases by name, run `SELECT * FROM sys.databases`. | 
| rds\_tlog\_backup\_copy\_to\_S3 | DB instance backup retention period must be set to at least 1 to run the rds\_tlog\_backup\_copy\_setup stored procedure.  | Automated backups are not enabled for the DB instance. | For more information on enabling automated backups and configuring backup retention, see [Backup retention period](USER_WorkingWithAutomatedBackups.BackupRetention.md). | 
| rds\_tlog\_backup\_copy\_to\_S3 | Error running the stored procedure rds\_tlog\_backup\_copy\_to\_S3. Reconnect to the RDS endpoint and try again.  | An internal error occurred. | Reconnect to the RDS endpoint and run the `rds_tlog_backup_copy_to_S3` stored procedure again. | 
| rds\_tlog\_backup\_copy\_to\_S3 | Only one of these three parameter sets can be provided. SET-1:(@backup\_file\_start\_time, @backup\_file\_end\_time) \| SET-2:(@starting\_lsn, @ending\_lsn) \| SET-3:(@rds\_backup\_starting\_seq\_id, @rds\_backup\_ending\_seq\_id)  | Multiple parameter sets were provided. | You can specify the time, LSN, or sequence ID parameters. Exactly one of these three parameter sets is required. For more information on required parameters, see [Copying transaction log backups](USER.SQLServer.AddlFeat.TransactionLogAccess.Copying.md).  | 
| rds\_tlog\_backup\_copy\_to\_S3 | Running the rds\_tlog\_backup\_copy\_to\_S3 stored procedure inside a transaction is not supported. Verify that the session has no open transactions and try again.  | The stored procedure was attempted within a transaction using `BEGIN` and `END`. | Avoid using `BEGIN` and `END` when running the `rds_tlog_backup_copy_to_S3` stored procedure. | 
| rds\_tlog\_backup\_copy\_to\_S3 | The provided parameters fall outside of the transaction log backup retention period. To list available transaction log backup files, run the rds\_fn\_list\_tlog\_backup\_metadata function.  | There are no available transaction log backups for the provided input parameters within the copy retention window. | Try again with a valid set of parameters. For more information on required parameters, see [Copying transaction log backups](USER.SQLServer.AddlFeat.TransactionLogAccess.Copying.md). | 
| rds\_tlog\_backup\_copy\_to\_S3 | There was a permissions error in processing the request. Ensure the bucket is in the same Account and Region as the DB Instance, and confirm the S3 bucket policy permissions against the template in the public documentation.  | There was an issue detected with the provided S3 bucket or its policy permissions. | Confirm your setup for access to transaction log backups is correct. For more information on setup requirements for your S3 bucket, see [Requirements](USER.SQLServer.AddlFeat.TransactionLogAccess.md#USER.SQLServer.AddlFeat.TransactionLogAccess.Requirements). | 
| rds\_tlog\_backup\_copy\_to\_S3 | Running the `rds_tlog_backup_copy_to_S3` stored procedure on an RDS read replica instance isn't permitted.  | The stored procedure was attempted on an RDS read replica instance. | Connect to the RDS primary DB instance to run the `rds_tlog_backup_copy_to_S3` stored procedure. | 
| rds\_tlog\_backup\_copy\_to\_S3 | The LSN for the input parameter `@starting_lsn` must be less than `@ending_lsn`.  | The value provided for the input parameter `@starting_lsn` was greater than the value provided for the input parameter `@ending_lsn`. | Ensure the value provided for the input parameter `@starting_lsn` is less than the value provided for the input parameter `@ending_lsn`. | 
| rds\_tlog\_backup\_copy\_to\_S3 | The `rds_tlog_backup_copy_to_S3` stored procedure can only be performed by the members of `db_owner` role in the source database.  | The `db_owner` role has not been granted to the account attempting to run the `rds_tlog_backup_copy_to_S3` stored procedure on the provided `db_name`. | Ensure the account running the stored procedure has been granted the `db_owner` role for the provided `db_name`. | 
| rds\_tlog\_backup\_copy\_to\_S3 | The sequence ID for the input parameter `@rds_backup_starting_seq_id` must be less than or equal to `@rds_backup_ending_seq_id`.  | The value provided for the input parameter `@rds_backup_starting_seq_id` was greater than the value provided for the input parameter `@rds_backup_ending_seq_id`. | Ensure the value provided for the input parameter `@rds_backup_starting_seq_id` is less than or equal to the value provided for the input parameter `@rds_backup_ending_seq_id`. | 
| rds\_tlog\_backup\_copy\_to\_S3 | The SQLSERVER\_BACKUP\_RESTORE option isn't enabled or is in the process of being enabled. Enable the option or try again later.  | The `SQLSERVER_BACKUP_RESTORE` option is not enabled on the DB instance or was just enabled and is pending internal activation. | Enable the `SQLSERVER_BACKUP_RESTORE` option as specified in the Requirements section. Wait a few minutes and run the `rds_tlog_backup_copy_to_S3` stored procedure again. | 
| rds\_tlog\_backup\_copy\_to\_S3 | The start time for the input parameter `@backup_file_start_time` must be less than `@backup_file_end_time`.  | The value provided for the input parameter `@backup_file_start_time` was greater than the value provided for the input parameter `@backup_file_end_time`. | Ensure the value provided for the input parameter `@backup_file_start_time` is less than the value provided for the input parameter `@backup_file_end_time`. | 
| rds\_tlog\_backup\_copy\_to\_S3 | We were unable to process the request due to a lack of access. Please check your setup and permissions for the feature.  | There may be an issue with the Amazon S3 bucket permissions, or the Amazon S3 bucket provided is in another account or Region. | Ensure the Amazon S3 bucket policy allows RDS access, and ensure the Amazon S3 bucket is in the same account and Region as the DB instance. | 
| rds\_tlog\_backup\_copy\_to\_S3 | You cannot provide a KMS Key ARN as input parameter to the stored procedure for instances that are not storage-encrypted.  | When storage encryption is not enabled on the DB instance, the input parameter `@kms_key_arn` should not be provided. | Do not provide an input parameter for `@kms_key_arn`. | 
| rds\_tlog\_backup\_copy\_to\_S3 | You must provide a KMS Key ARN as input parameter to the stored procedure for storage encrypted instances.  | When storage encryption is enabled on the DB instance, the input parameter `@kms_key_arn` must be provided. | Provide the input parameter `@kms_key_arn` with the ARN of the KMS key used for the DB instance's storage encryption. | 
| rds\_tlog\_backup\_copy\_to\_S3 | You must run the `rds_tlog_copy_setup` stored procedure and set the `@target_s3_arn`, before running the `rds_tlog_backup_copy_to_S3` stored procedure.  | The access to transaction log backups setup procedure was not completed before attempting to run the `rds_tlog_backup_copy_to_S3` stored procedure. | Run the `rds_tlog_copy_setup` stored procedure before running the `rds_tlog_backup_copy_to_S3` stored procedure. For more information on running the setup procedure for access to transaction log backups, see [Setting up access to transaction log backups](USER.SQLServer.AddlFeat.TransactionLogAccess.Enabling.md).  | 