

# Amazon RDS for Microsoft SQL Server
<a name="CHAP_SQLServer"></a>

Amazon RDS supports several versions and editions of Microsoft SQL Server. The following table shows the most recent supported minor version of each major version. For the full list of supported versions, editions, and RDS engine versions, see [Microsoft SQL Server versions on Amazon RDS](SQLServer.Concepts.General.VersionSupport.md).




| Major version | Service Pack / GDR | Cumulative Update | Minor version | Knowledge Base Article | Release Date | 
| --- | --- | --- | --- | --- | --- | 
| SQL Server 2022 | Not applicable | CU24 |  16.0.4245.2  | [KB5080999](https://learn.microsoft.com/en-us/troubleshoot/sql/releases/sqlserver-2022/cumulativeupdate24) | March 12, 2026 | 
| SQL Server 2019 | GDR | CU32 GDR |  15.0.4460.4  | [KB5077469](https://support.microsoft.com/en-us/topic/kb5077469-description-of-the-security-update-for-sql-server-2019-cu32-march-10-2026-5ec2c609-35cb-483d-aa80-5e66821e5c97) | March 10, 2026 | 
| SQL Server 2017 | GDR | CU31 GDR |  14.0.3520.4  | [KB5077471](https://support.microsoft.com/en-us/topic/kb5077471-description-of-the-security-update-for-sql-server-2017-cu31-march-10-2026-f020d5eb-e356-42e8-a9ba-0ef061430b15) | March 10, 2026 | 
| SQL Server 2016 | SP3 GDR | Not applicable |  13.0.6480.4  | [KB5077474](https://support.microsoft.com/en-us/topic/kb5077474-description-of-the-security-update-for-sql-server-2016-sp3-gdr-march-10-2026-3f455bec-1221-4962-b068-0b11bf96b66a) | March 10, 2026 | 

For information about licensing for SQL Server, see [Licensing Microsoft SQL Server on Amazon RDS](SQLServer.Concepts.General.Licensing.md). For information about SQL Server builds, see the Microsoft support article [Where to find information about the latest SQL Server builds](https://support.microsoft.com/en-us/topic/kb957826-where-to-find-information-about-the-latest-sql-server-builds-43994ba5-9aed-2323-ea7c-d29fe9c4fbe8).

With Amazon RDS, you can create DB instances and DB snapshots, perform point-in-time restores, and take automated or manual backups. DB instances running SQL Server can be used inside a VPC. You can also use Secure Sockets Layer (SSL) to connect to a DB instance running SQL Server, and you can use transparent data encryption (TDE) to encrypt data at rest. Amazon RDS currently supports Multi-AZ deployments for SQL Server using SQL Server Database Mirroring (DBM) or Always On Availability Groups (AGs) as a high-availability failover solution. 

To deliver a managed service experience, Amazon RDS does not provide shell access to DB instances, and it restricts access to certain system procedures and tables that require advanced privileges. Amazon RDS supports access to databases on a DB instance using any standard SQL client application such as Microsoft SQL Server Management Studio. Amazon RDS does not allow direct host access to a DB instance via Telnet, Secure Shell (SSH), or Windows Remote Desktop Connection. When you create a DB instance, the master user is assigned to the *db_owner* role for all user databases on that instance, and has all database-level permissions except for those that are used for backups. Amazon RDS manages backups for you. 

Before creating your first DB instance, you should complete the steps in the setting up section of this guide. For more information, see [Setting up your Amazon RDS environment](CHAP_SettingUp.md).

**Topics**
+ [Common management tasks for Microsoft SQL Server on Amazon RDS](#SQLServer.Concepts.General)
+ [Limitations for Microsoft SQL Server DB instances](#SQLServer.Concepts.General.FeatureSupport.Limits)
+ [DB instance class support for Microsoft SQL Server](SQLServer.Concepts.General.InstanceClasses.md)
+ [Optimize CPUs for RDS for SQL Server license-included instances](SQLServer.Concepts.General.OptimizeCPU.md)
+ [Microsoft SQL Server security](SQLServer.Concepts.General.FeatureSupport.UnsupportedRoles.md)
+ [Compliance program support for Microsoft SQL Server DB instances](#SQLServer.Concepts.General.Compliance)
+ [Microsoft SQL Server versions on Amazon RDS](SQLServer.Concepts.General.VersionSupport.md)
+ [Amazon RDS for SQL Server version policy](SQLServer.Concepts.General.VersionPolicy.md)
+ [Microsoft SQL Server features on Amazon RDS](SQLServer.Concepts.General.FeatureSupport.md)
+ [Multi-AZ deployments using Microsoft SQL Server Database Mirroring or Always On availability groups](#SQLServer.Concepts.General.Mirroring)
+ [Using Transparent Data Encryption to encrypt data at rest](#SQLServer.Concepts.General.Options)
+ [Functions and stored procedures for Amazon RDS for Microsoft SQL Server](SQLServer.Concepts.General.StoredProcedures.md)
+ [Local time zone for Microsoft SQL Server DB instances](SQLServer.Concepts.General.TimeZone.md)
+ [Licensing Microsoft SQL Server on Amazon RDS](SQLServer.Concepts.General.Licensing.md)
+ [Connecting to your Microsoft SQL Server DB instance](USER_ConnectToMicrosoftSQLServerInstance.md)
+ [Working with SQL Server Developer Edition on RDS for SQL Server](sqlserver-dev-edition.md)
+ [Working with Active Directory with RDS for SQL Server](User.SQLServer.ActiveDirectoryWindowsAuth.md)
+ [Upgrades of the Microsoft SQL Server DB engine](USER_UpgradeDBInstance.SQLServer.md)
+ [Working with storage in RDS for SQL Server](Appendix.SQLServer.CommonDBATasks.DatabaseStorage.md)
+ [Importing and exporting SQL Server databases using native backup and restore](SQLServer.Procedural.Importing.md)
+ [Working with read replicas for Microsoft SQL Server in Amazon RDS](SQLServer.ReadReplicas.md)
+ [Multi-AZ deployments for Amazon RDS for Microsoft SQL Server](USER_SQLServerMultiAZ.md)
+ [Additional features for Microsoft SQL Server on Amazon RDS](User.SQLServer.AdditionalFeatures.md)
+ [Options for the Microsoft SQL Server database engine](Appendix.SQLServer.Options.md)
+ [Common DBA tasks for Amazon RDS for Microsoft SQL Server](Appendix.SQLServer.CommonDBATasks.md)

## Common management tasks for Microsoft SQL Server on Amazon RDS
<a name="SQLServer.Concepts.General"></a>

The following are the common management tasks you perform with an Amazon RDS for SQL Server DB instance, with links to relevant documentation for each task. 


****  

| Task area | Description | Relevant documentation | 
| --- | --- | --- | 
|  **Instance classes, storage, and PIOPS**  |  If you are creating a DB instance for production purposes, you should understand how instance classes, storage types, and Provisioned IOPS work in Amazon RDS.   |  [DB instance class support for Microsoft SQL Server](SQLServer.Concepts.General.InstanceClasses.md) [Amazon RDS storage types](CHAP_Storage.md#Concepts.Storage)  | 
|  **Multi-AZ deployments**  |  A production DB instance should use Multi-AZ deployments. Multi-AZ deployments provide increased availability, data durability, and fault tolerance for DB instances. Multi-AZ deployments for SQL Server are implemented using SQL Server's native DBM or AGs technology.   |  [Configuring and managing a Multi-AZ deployment for Amazon RDS](Concepts.MultiAZ.md) [Multi-AZ deployments using Microsoft SQL Server Database Mirroring or Always On availability groups](#SQLServer.Concepts.General.Mirroring)  | 
|  **Amazon Virtual Private Cloud (VPC)**  |  If your AWS account has a default VPC, then your DB instance is automatically created inside the default VPC. If your account does not have a default VPC, and you want the DB instance in a VPC, you must create the VPC and subnet groups before you create the DB instance.   |  [Working with a DB instance in a VPC](USER_VPC.WorkingWithRDSInstanceinaVPC.md)  | 
|  **Security groups**  |  By default, DB instances are created with a firewall that prevents access to them. You therefore must create a security group with the correct IP addresses and network configuration to access the DB instance.  |  [Controlling access with security groups](Overview.RDSSecurityGroups.md)  | 
|  **Parameter groups**  |  If your DB instance is going to require specific database parameters, you should create a parameter group before you create the DB instance.   |  [Parameter groups for Amazon RDS](USER_WorkingWithParamGroups.md)  | 
|  **Option groups**  |  If your DB instance is going to require specific database options, you should create an option group before you create the DB instance.   |  [Options for the Microsoft SQL Server database engine](Appendix.SQLServer.Options.md)  | 
|  **Connecting to your DB instance**  |  After creating a security group and associating it to a DB instance, you can connect to the DB instance using any standard SQL client application such as Microsoft SQL Server Management Studio.   |  [Connecting to your Microsoft SQL Server DB instance](USER_ConnectToMicrosoftSQLServerInstance.md)  | 
|  **Backup and restore**  |  When you create your DB instance, you can configure it to take automated backups. You can also back up and restore your databases manually by using full backup files (.bak files).   |  [Introduction to backups](USER_WorkingWithAutomatedBackups.md) [Importing and exporting SQL Server databases using native backup and restore](SQLServer.Procedural.Importing.md)  | 
|  **Monitoring**  |  You can monitor your SQL Server DB instance by using CloudWatch Amazon RDS metrics, events, and enhanced monitoring.   |  [Viewing metrics in the Amazon RDS console](USER_Monitoring.md) [Viewing Amazon RDS events](USER_ListEvents.md)  | 
|  **Log files**  |  You can access the log files for your SQL Server DB instance.   |  [Monitoring Amazon RDS log files](USER_LogAccess.md) [Amazon RDS for Microsoft SQL Server database log files](USER_LogAccess.Concepts.SQLServer.md)  | 

There are also advanced administrative tasks for working with SQL Server DB instances. For more information, see the following documentation: 
+ [Common DBA tasks for Amazon RDS for Microsoft SQL Server](Appendix.SQLServer.CommonDBATasks.md)
+ [Working with AWS Managed Active Directory with RDS for SQL Server](USER_SQLServerWinAuth.md)
+ [Accessing the tempdb database](SQLServer.TempDB.md)

## Limitations for Microsoft SQL Server DB instances
<a name="SQLServer.Concepts.General.FeatureSupport.Limits"></a>

The Amazon RDS implementation of Microsoft SQL Server on a DB instance has some limitations that you should be aware of:
+ The maximum number of databases supported on a DB instance depends on the instance class type and the availability mode—Single-AZ, Multi-AZ Database Mirroring (DBM), or Multi-AZ Availability Groups (AGs). The Microsoft SQL Server system databases don't count toward this limit. 

  The following table shows the maximum number of supported databases for each instance class type and availability mode. Use this table to help you decide if you can move from one instance class type to another, or from one availability mode to another. If your source DB instance has more databases than the target instance class type or availability mode can support, modifying the DB instance fails. You can see the status of your request in the **Events** pane.     
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SQLServer.html)

  \* Represents the different instance class types. 

  For example, let's say that your DB instance runs on a db.\*.16xlarge with Single-AZ and that it has 76 databases. You modify the DB instance to upgrade to using Multi-AZ Always On AGs. This upgrade fails, because your DB instance contains more databases than your target configuration can support. If you upgrade your instance class type to db.\*.24xlarge instead, the modification succeeds.

  If the upgrade fails, you see events and messages similar to the following:
  +  Unable to modify database instance class. The instance has 76 databases, but after conversion it would only support 75. 
  +  Unable to convert the DB instance to Multi-AZ: The instance has 76 databases, but after conversion it would only support 75. 

   If the point-in-time restore or snapshot restore fails, you see events and messages similar to the following:
  +  Database instance put into incompatible-restore. The instance has 76 databases, but after conversion it would only support 75. 
+ The following ports are reserved for Amazon RDS, and you can't use them when you create a DB instance: `1234, 1434, 3260, 3343, 3389, 47001,` and `49152-49156`.
+ Client connections from IP addresses within the range 169.254.0.0/16 aren't permitted. This is the Automatic Private IP Addressing (APIPA) range, which is used for link-local addressing.
+ SQL Server Standard Edition uses only a subset of the available processors if the DB instance has more processors than the software limits (24 cores, 4 sockets, and 128 GB of RAM). Examples of this are the db.m5.24xlarge and db.r5.24xlarge instance classes.

  For more information, see the table of scale limits under [Editions and supported features of SQL Server 2019 (15.x)](https://docs.microsoft.com/en-us/sql/sql-server/editions-and-components-of-sql-server-version-15) in the Microsoft documentation.
+ Amazon RDS for SQL Server doesn't support importing data into the msdb database. 
+ You can't rename databases on a DB instance in a SQL Server Multi-AZ deployment.
+ Make sure that you use these guidelines when setting the following DB parameters on RDS for SQL Server:
  + `max server memory (mb)` >= 256 MB
  + `max worker threads` >= (number of logical CPUs \* 7)

  For more information on setting DB parameters, see [Parameter groups for Amazon RDS](USER_WorkingWithParamGroups.md).
+ The maximum storage size for SQL Server DB instances is the following: 
  + General Purpose (SSD) storage – 16 TiB for all editions 
  + Provisioned IOPS storage – 64 TiB for all editions 
  + Magnetic storage – 1 TiB for all editions 

  If you have a scenario that requires a larger amount of storage, you can use sharding across multiple DB instances to get around the limit. This approach requires data-dependent routing logic in applications that connect to the sharded system. You can use an existing sharding framework, or you can write custom code to enable sharding. If you use an existing framework, the framework can't install any components on the same server as the DB instance. 
+ The minimum storage size for SQL Server DB instances is the following:
  + General Purpose (SSD) storage – 20 GiB for Enterprise, Standard, Web, and Express Editions
  + Provisioned IOPS storage – 20 GiB for Enterprise, Standard, Web, and Express Editions
  + Magnetic storage – 20 GiB for Enterprise, Standard, Web, and Express Editions
+ Amazon RDS doesn't support running these services on the same server as your RDS DB instance:
  + Data Quality Services
  + Master Data Services

  To use these features, we recommend that you install SQL Server on an Amazon EC2 instance, or use an on-premises SQL Server instance. In these cases, the EC2 or SQL Server instance acts as the Master Data Services server for your SQL Server DB instance on Amazon RDS. You can install SQL Server on an Amazon EC2 instance with Amazon EBS storage, pursuant to Microsoft licensing policies.
+ Because of limitations in Microsoft SQL Server, restoring to a point in time before successfully running `DROP DATABASE` might not reflect the state of that database at that point in time. For example, the dropped database is typically restored to its state up to 5 minutes before the `DROP DATABASE` command was issued. This type of restore means that you can't restore the transactions made during those few minutes on your dropped database. To work around this, you can reissue the `DROP DATABASE` command after the restore operation is completed. Dropping a database removes the transaction logs for that database.
+ For SQL Server, you create your databases after you create your DB instance. Database names follow the usual SQL Server naming rules with the following differences:
  + Database names can't start with `rdsadmin`.
  + They can't start or end with a space or a tab.
  + They can't contain any of the characters that create a new line.
  + They can't contain a single quote (`'`).
+ With SQL Server Web Edition, you can use only the **Dev/Test** template when creating a new RDS for SQL Server DB instance.
+ SQL Server Web Edition is designed for web hosters and web value-added providers (VAPs) to host public and internet-accessible web pages, websites, web applications, and web services. For more information, see [Licensing Microsoft SQL Server on Amazon RDS](SQLServer.Concepts.General.Licensing.md).

# DB instance class support for Microsoft SQL Server
<a name="SQLServer.Concepts.General.InstanceClasses"></a>

The computation and memory capacity of a DB instance is determined by its DB instance class. The DB instance class you need depends on your processing power and memory requirements. For more information, see [DB instance classes](Concepts.DBInstanceClass.md). 

The following list of DB instance classes supported for Microsoft SQL Server is provided here for your convenience. For the most current list, see the RDS console: [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

Not all DB instance classes are available on all supported SQL Server minor versions. For example, some newer DB instance classes such as db.r6i aren't available on older minor versions. You can use the [describe-orderable-db-instance-options](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/rds/describe-orderable-db-instance-options.html) AWS CLI command to find out which DB instance classes are available for your SQL Server edition and version.
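Building on the `describe-orderable-db-instance-options` check above, the following sketch shows one way to narrow the returned classes to a family you care about. The engine version string is an assumption (check the versions available in your Region), and because the live call requires AWS credentials, the filtering step runs against a canned sample list here:

```
# The live query (shown only; requires credentials and a valid version string):
#   aws rds describe-orderable-db-instance-options \
#       --engine sqlserver-se --engine-version 15.00.4460.4.v1 \
#       --query "OrderableDBInstanceOptions[].DBInstanceClass" --output text

# Filter a sample result down to the db.r6i family:
sample_classes="db.t3.xlarge db.m6i.large db.r5.large db.r6i.xlarge db.r6i.2xlarge"
r6i_count=0
for c in $sample_classes; do
  case "$c" in
    db.r6i.*) r6i_count=$(( r6i_count + 1 )); echo "$c" ;;
  esac
done
```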


****  

| SQL Server edition | 2022 support range | 2019 support range | 2017 and 2016 support range | 
| --- | --- | --- | --- | 
|  Enterprise Edition  |  `db.t3.xlarge`–`db.t3.2xlarge` `db.r5.xlarge`–`db.r5.24xlarge` `db.r5b.xlarge`–`db.r5b.24xlarge` `db.r5d.xlarge`–`db.r5d.24xlarge` `db.r6i.xlarge`–`db.r6i.32xlarge` `db.r7i.xlarge`–`db.r7i.48xlarge` `db.m5.xlarge`–`db.m5.24xlarge` `db.m5d.xlarge`–`db.m5d.24xlarge` `db.m6i.xlarge`–`db.m6i.32xlarge` `db.m7i.xlarge`–`db.m7i.48xlarge` `db.x2iedn.xlarge`–`db.x2iedn.32xlarge` `db.z1d.xlarge`–`db.z1d.12xlarge`  |  `db.t3.xlarge`–`db.t3.2xlarge` `db.r5.xlarge`–`db.r5.24xlarge` `db.r5b.xlarge`–`db.r5b.24xlarge` `db.r5d.xlarge`–`db.r5d.24xlarge` `db.r6i.xlarge`–`db.r6i.32xlarge` `db.r7i.xlarge`–`db.r7i.48xlarge` `db.m5.xlarge`–`db.m5.24xlarge` `db.m5d.xlarge`–`db.m5d.24xlarge` `db.m6i.xlarge`–`db.m6i.32xlarge` `db.m7i.xlarge`–`db.m7i.48xlarge` `db.x2iedn.xlarge`–`db.x2iedn.32xlarge` `db.z1d.xlarge`–`db.z1d.12xlarge`  |  `db.t3.xlarge`–`db.t3.2xlarge` `db.r5.xlarge`–`db.r5.24xlarge` `db.r5b.xlarge`–`db.r5b.24xlarge` `db.r5d.xlarge`–`db.r5d.24xlarge` `db.r6i.xlarge`–`db.r6i.32xlarge` `db.r7i.xlarge`–`db.r7i.48xlarge` `db.m5.xlarge`–`db.m5.24xlarge` `db.m5d.xlarge`–`db.m5d.24xlarge` `db.m6i.xlarge`–`db.m6i.32xlarge` `db.m7i.xlarge`–`db.m7i.48xlarge` `db.x2iedn.xlarge`–`db.x2iedn.32xlarge` `db.z1d.xlarge`–`db.z1d.12xlarge`  | 
|  Standard Edition  |  `db.t3.xlarge`–`db.t3.2xlarge` `db.r5.large`–`db.r5.24xlarge` `db.r5b.large`–`db.r5b.8xlarge` `db.r5d.large`–`db.r5d.24xlarge` `db.r6i.large`–`db.r6i.8xlarge` `db.r7i.large`–`db.r7i.12xlarge` `db.m5.large`–`db.m5.24xlarge` `db.m5d.large`–`db.m5d.24xlarge` `db.m6i.large`–`db.m6i.8xlarge` `db.m7i.large`–`db.m7i.12xlarge` `db.x2iedn.xlarge`–`db.x2iedn.8xlarge` `db.z1d.large`–`db.z1d.12xlarge`  |  `db.t3.xlarge`–`db.t3.2xlarge` `db.r5.large`–`db.r5.24xlarge` `db.r5b.large`–`db.r5b.24xlarge` `db.r5d.large`–`db.r5d.24xlarge` `db.r6i.large`–`db.r6i.8xlarge` `db.r7i.large`–`db.r7i.12xlarge` `db.m5.large`–`db.m5.24xlarge` `db.m5d.large`–`db.m5d.24xlarge` `db.m6i.large`–`db.m6i.8xlarge` `db.m7i.large`–`db.m7i.12xlarge` `db.x2iedn.xlarge`–`db.x2iedn.32xlarge` `db.z1d.large`–`db.z1d.12xlarge`  |  `db.t3.xlarge`–`db.t3.2xlarge` `db.r5.large`–`db.r5.24xlarge` `db.r5b.large`–`db.r5b.24xlarge` `db.r5d.large`–`db.r5d.24xlarge` `db.r6i.large`–`db.r6i.8xlarge` `db.r7i.large`–`db.r7i.12xlarge` `db.m5.large`–`db.m5.24xlarge` `db.m5d.large`–`db.m5d.24xlarge` `db.m6i.large`–`db.m6i.8xlarge` `db.m7i.large`–`db.m7i.12xlarge` `db.x2iedn.xlarge`–`db.x2iedn.32xlarge` `db.z1d.large`–`db.z1d.12xlarge`  | 
|  Web Edition  |  `db.t3.small`–`db.t3.xlarge` `db.r5.large`–`db.r5.4xlarge` `db.r5b.large`–`db.r5b.4xlarge` `db.r5d.large`–`db.r5d.4xlarge` `db.r6i.large`–`db.r6i.4xlarge` `db.r7i.large`–`db.r7i.4xlarge` `db.m5.large`–`db.m5.4xlarge` `db.m5d.large`–`db.m5d.4xlarge` `db.m6i.large`–`db.m6i.4xlarge` `db.m7i.large`–`db.m7i.4xlarge` `db.z1d.large`–`db.z1d.3xlarge`  |  `db.t3.small`–`db.t3.2xlarge` `db.r5.large`–`db.r5.4xlarge` `db.r5b.large`–`db.r5b.4xlarge` `db.r5d.large`–`db.r5d.4xlarge` `db.r6i.large`–`db.r6i.4xlarge` `db.r7i.large`–`db.r7i.4xlarge` `db.m5.large`–`db.m5.4xlarge` `db.m5d.large`–`db.m5d.4xlarge` `db.m6i.large`–`db.m6i.4xlarge` `db.m7i.large`–`db.m7i.4xlarge` `db.z1d.large`–`db.z1d.3xlarge`  |  `db.t3.small`–`db.t3.2xlarge` `db.r5.large`–`db.r5.4xlarge` `db.r5b.large`–`db.r5b.4xlarge` `db.r5d.large`–`db.r5d.4xlarge` `db.r6i.large`–`db.r6i.4xlarge` `db.r7i.large`–`db.r7i.4xlarge` `db.m5.large`–`db.m5.4xlarge` `db.m5d.large`–`db.m5d.4xlarge` `db.m6i.large`–`db.m6i.4xlarge` `db.m7i.large`–`db.m7i.4xlarge` `db.z1d.large`–`db.z1d.3xlarge`  | 
|  Express Edition  |  `db.t3.micro`–`db.t3.xlarge`  |  `db.t3.micro`–`db.t3.xlarge`  |  `db.t3.micro`–`db.t3.xlarge`  | 
| Developer Edition | `db.m6i.xlarge`–`db.m6i.32xlarge``db.r6i.xlarge`–`db.r6i.32xlarge` |  |  | 

**Note**  
 Starting with the 7th generation instance classes, hyper-threading is disabled on RDS for SQL Server for instance sizes 2xlarge and larger. As a result, the total number of available vCPUs is half of the number supported by the [corresponding EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/cpu-options-supported-instances-values.html). For example, the EC2 instance type `m7i.2xlarge` supports 4 cores with 2 threads per core by default, for a total of 8 vCPUs. In contrast, the RDS for SQL Server `db.m7i.2xlarge` instance, with hyper-threading disabled, provides 4 cores with 1 thread per core, for a total of 4 vCPUs.
Starting with the 7th generation instance classes, your bill provides a detailed breakdown of RDS DB instance and third-party licensing fees. For more details, see [RDS for SQL Server pricing](https://aws.amazon.com/rds/sqlserver/pricing/).
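The vCPU arithmetic in the note above can be spelled out directly (vCPUs = cores × threads per core):

```
cores=4
ec2_threads_per_core=2   # hyper-threading enabled on the EC2 m7i.2xlarge
rds_threads_per_core=1   # hyper-threading disabled on db.m7i.2xlarge
ec2_vcpus=$(( cores * ec2_threads_per_core ))
rds_vcpus=$(( cores * rds_threads_per_core ))
echo "EC2 m7i.2xlarge vCPUs: $ec2_vcpus"      # 8
echo "RDS db.m7i.2xlarge vCPUs: $rds_vcpus"   # 4
```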

# Optimize CPUs for RDS for SQL Server license-included instances
<a name="SQLServer.Concepts.General.OptimizeCPU"></a>

With RDS for SQL Server, you can use Optimize CPU by specifying processor features to configure the vCPU count on your DB instance while maintaining the same memory and IOPS. You can achieve desired memory-to-CPU ratios for specific database workload requirements and reduce licensing costs for Microsoft Windows OS and SQL Server, which are based on vCPU count.

To specify processor features, use the following parameters:

```
--processor-features "Name=coreCount,Value=value" \
    "Name=threadsPerCore,Value=value"
```
+ **coreCount** – Specifies the number of CPU cores for the DB instance, which you can use to optimize licensing costs. See [DB instance classes that support Optimize CPU](SQLServer.Concepts.General.OptimizeCPU.Support.md) for the allowed core count values for a given instance type.
+ **threadsPerCore** – Specifies the number of threads per CPU core. See [DB instance classes that support Optimize CPU](SQLServer.Concepts.General.OptimizeCPU.Support.md) for the allowed values for a given instance type.

The following sample command creates an RDS for SQL Server DB instance with Optimize CPU settings:

```
aws rds create-db-instance \
    --engine sqlserver-ee \
    --engine-version 16.00 \
    --license-model license-included \
    --allocated-storage 300 \
    --master-username myuser \
    --master-user-password xxxxx \
    --no-multi-az \
    --vpc-security-group-ids myvpcsecuritygroup \
    --db-subnet-group-name mydbsubnetgroup \
    --db-instance-identifier my-rds-instance \
    --db-instance-class db.m7i.8xlarge \
    --processor-features "Name=coreCount,Value=8" "Name=threadsPerCore,Value=1"
```

In this example, you create a `db.m7i.8xlarge` instance, which by default has a coreCount of 16. By using Optimize CPU, you opt for a coreCount of 8, resulting in an effective vCPU count of 8.

If you create the instance without the `--processor-features` parameter, core count is set to 16 and threads per core is set to 1 by default, resulting in a default vCPU count of 16.
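Before sending the request, you can sanity-check a requested `coreCount` against the allowed values for the instance class. A minimal sketch, using the db.m7i.8xlarge range (1–16) from the support table in this section:

```
requested_core_count=8
min_cores=1
max_cores=16       # valid range for db.m7i.8xlarge per the support table
threads_per_core=1 # only 1 thread per core is allowed with Optimize CPU

if [ "$requested_core_count" -ge "$min_cores" ] && \
   [ "$requested_core_count" -le "$max_cores" ]; then
  effective_vcpus=$(( requested_core_count * threads_per_core ))
  echo "coreCount $requested_core_count is valid; effective vCPUs: $effective_vcpus"
else
  echo "coreCount $requested_core_count is out of range for db.m7i.8xlarge"
fi
```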

Some considerations to keep in mind while specifying processor features:
+ **Create** – Specify both `coreCount` and `threadsPerCore` for the `processor-features` parameter from the allowed values. See [DB instance classes that support Optimize CPU](SQLServer.Concepts.General.OptimizeCPU.Support.md).
+ **Modify** – When modifying from one instance class configured with Optimize CPU settings to another that supports them, either specify the default processor settings using the `--use-default-processor-features` parameter or explicitly define the processor options in the modify request.
**Note**  
Changing the vCPU count can have implications for the licensing fee cost associated with the DB instance.
+ **Snapshot restore** – When you restore a snapshot to the same instance type as the source, the restored DB instance inherits the Optimize CPU settings from the snapshot. When you restore to a different instance type, define the Optimize CPU settings for the target instance or specify the `--use-default-processor-features` parameter.
+ **Point-in-time restore** – Point-in-time restore (PITR) restores a specific snapshot based on the designated time and then applies all transaction log backups to that snapshot, bringing the instance to the specified point in time. For PITR, the Optimize CPU settings `coreCount` and `threadsPerCore` are derived from the source snapshot (not the point in time) unless you specify custom values in the PITR request. If the source snapshot has Optimize CPU settings enabled and you use a different instance type for PITR, you must define the Optimize CPU settings for the target instance or specify the `--use-default-processor-features` parameter.

## Limitations
<a name="SQLServer.Concepts.General.OptimizeCPU.Limitations"></a>

The following limitations apply when using Optimize CPU:
+ Optimize CPU is supported with Enterprise, Standard, and Web Editions only.
+ Optimize CPU is available on select instances. See [DB instance classes that support Optimize CPU](SQLServer.Concepts.General.OptimizeCPU.Support.md).
+ Customizing the number of CPU cores is supported on instance sizes of `2xlarge` and larger. For these instance types, the minimum number of vCPUs supported for Optimize CPU is 4.
+ Optimize CPU allows only 1 thread per core, because hyper-threading is disabled on the 7th generation and later instance classes that support Optimize CPU.

# DB instance classes that support Optimize CPU
<a name="SQLServer.Concepts.General.OptimizeCPU.Support"></a>

RDS for SQL Server supports Optimize CPU beginning with the 7th Generation instance class type. Additionally, RDS provides a detailed billing breakdown of RDS DB instance and third-party licensing fees, starting from the 7th Generation instance class type, regardless of whether the Optimize CPU feature is enabled.

RDS for SQL Server provides support for Optimize CPU on specific instance sizes, with the smallest supported size being `2xlarge`. The minimum supported configuration is 4 vCPUs. The following tables outline the DB instance classes that support Optimize CPU, including their default and valid values for CPU cores, threads per core, and vCPUs: 


**General purpose instances**  

| Instance type | Default vCPUs | Default CPU cores | Valid CPU cores | Valid threads per core | 
| --- | --- | --- | --- | --- | 
| `m7i.large` | 2 | 1 | 1 | 2 | 
| `m7i.xlarge` | 4 | 2 | 2 | 2 | 
| `m7i.2xlarge` | 4 | 4 | 1,2,3,4 | 1 | 
| `m7i.4xlarge` | 8 | 8 | 1,2,3,4,5,6,7,8 | 1 | 
| `m7i.8xlarge` | 16 | 16 | 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16 | 1 | 
| `m7i.12xlarge` | 24 | 24 | 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24 | 1 | 
| `m7i.16xlarge` | 32 | 32 | 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32 | 1 | 


**Memory optimized instances**  

| Instance type | Default vCPUs | Default CPU cores | Valid CPU cores | Valid threads per core | 
| --- | --- | --- | --- | --- | 
| `r7i.large` | 2 | 1 | 1 | 2 | 
| `r7i.xlarge` | 4 | 2 | 2 | 2 | 
| `r7i.2xlarge` | 4 | 4 | 1,2,3,4 | 1 | 
| `r7i.4xlarge` | 8 | 8 | 1,2,3,4,5,6,7,8 | 1 | 
| `r7i.8xlarge` | 16 | 16 | 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16 | 1 | 
| `r7i.12xlarge` | 24 | 24 | 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24 | 1 | 
| `r7i.16xlarge` | 32 | 32 | 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32 | 1 | 

# Setting the CPU cores and threads per CPU core for a DB instance class
<a name="SQLServer.Concepts.General.OptimizeCPU.Enabling"></a>

You can configure the number of CPU cores and threads per core for the DB instance class when you perform the following operations:
+ [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md)
+ [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md)
+ [Restoring to a DB instance](USER_RestoreFromSnapshot.md)
+ [Restoring a DB instance to a specified time for Amazon RDS](USER_PIT.md)

**Note**  
When you modify a DB instance to configure the number of CPU cores or threads per core, there is a brief outage similar to when you modify the instance class.

Set the CPU cores by using the AWS Management Console, AWS CLI or the RDS API.

## Console
<a name="SQLServer.Concepts.General.OptimizeCPU.Enabling.CON"></a>

**To set the cores**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose **Create database**.

1. When setting the **Instance configuration** options:

   1. Choose the **Optimize CPU** option.

   1. Set your **vCPU** option by choosing the number of cores.  
![\[Database create page when setting OCPU settings\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/OCPU-screenshot.png)

1. After completing other selections, select **Create database**.

## AWS CLI
<a name="SQLServer.Concepts.General.OptimizeCPU.Enabling.CLI"></a>

**To set the cores**

1. To configure Optimize CPU using the AWS CLI, include the `--processor-features` option in the command. Specify the number of CPU cores with `coreCount`, and set `threadsPerCore` to `1`.

1. Use the following syntax:

   ```
   aws rds create-db-instance \
       --engine sqlserver-ee \
       --engine-version 16.00 \
       --license-model license-included \
       --allocated-storage 300 \
       --master-username myuser \
       --master-user-password xxxxx \
       --no-multi-az \
       --vpc-security-group-ids myvpcsecuritygroup \
       --db-subnet-group-name mydbsubnetgroup \
       --db-instance-identifier my-rds-instance \
       --db-instance-class db.m7i.4xlarge \
       --processor-features "Name=coreCount,Value=6" "Name=threadsPerCore,Value=1"
   ```

**Example of viewing valid processor values for a DB instance class**  
Use the `describe-orderable-db-instance-options` command to show the default vCPUs, cores, and threads per core. For example, the output for the following command shows the processor options for the db.r7i.2xlarge instance class.  

```
aws rds describe-orderable-db-instance-options --engine sqlserver-ee \
--db-instance-class db.r7i.2xlarge

Sample output: 
-------------------------------------------------------------
|            DescribeOrderableDBInstanceOptions             |
+-----------------------------------------------------------+
||               OrderableDBInstanceOptions                ||
|+------------------------------------+--------------------+|
||  DBInstanceClass                   |  db.r7i.2xlarge    ||
||  Engine                            |  sqlserver-ee      ||
||  EngineVersion                     |  13.00.6300.2.v1   ||
||  LicenseModel                      |  license-included  ||
||  MaxIopsPerDbInstance              |                    ||
||  MaxIopsPerGib                     |                    ||
||  MaxStorageSize                    |  64000             ||
||  MinIopsPerDbInstance              |                    ||
||  MinIopsPerGib                     |                    ||
||  MinStorageSize                    |  20                ||
||  MultiAZCapable                    |  True              ||
||  OutpostCapable                    |  False             ||
||  ReadReplicaCapable                |  True              ||
||  StorageType                       |  gp2               ||
||  SupportsClusters                  |  False             ||
||  SupportsDedicatedLogVolume        |  False             ||
||  SupportsEnhancedMonitoring        |  True              ||
||  SupportsGlobalDatabases           |  False             ||
||  SupportsIAMDatabaseAuthentication |  False             ||
||  SupportsIops                      |  False             ||
||  SupportsKerberosAuthentication    |  True              ||
||  SupportsPerformanceInsights       |  True              ||
||  SupportsStorageAutoscaling        |  True              ||
||  SupportsStorageEncryption         |  True              ||
||  SupportsStorageThroughput         |  False             ||
||  Vpc                               |  True              ||
|+------------------------------------+--------------------+|
|||                   AvailabilityZones                   |||
||+-------------------------------------------------------+||
|||                         Name                          |||
||+-------------------------------------------------------+||
|||  us-west-2a                                           |||
|||  us-west-2b                                           |||
|||  us-west-2c                                           |||
||+-------------------------------------------------------+||
|||              AvailableProcessorFeatures               |||
||+-----------------+-----------------+-------------------+||
|||  AllowedValues  |  DefaultValue   |       Name        |||
||+-----------------+-----------------+-------------------+||
|||  1,2,3,4        |  4              |  coreCount        |||
|||  1              |  1              |  threadsPerCore   |||
||+-----------------+-----------------+-------------------+||
```
In addition, you can run the following commands for DB instance class processor information:  
+ `describe-db-instances` – Shows the processor information for the specified DB instance
+ `describe-db-snapshots` – Shows the processor information for the specified DB snapshot
+ `describe-valid-db-instance-modifications` – Shows the valid modifications to the processor for the specified DB instance
In the output of these commands, the values for the processor features are `null` if Optimize CPU is not configured.
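As a rough illustration of interpreting that output, the following Python sketch parses a `describe-db-instances`-shaped JSON document and reports whether Optimize CPU is configured. The instance values here are made up for the example:

```python
import json

# Abbreviated, hypothetical describe-db-instances output. When Optimize CPU
# is not configured, the ProcessorFeatures field is null or absent.
sample = json.loads("""
{
  "DBInstances": [
    {
      "DBInstanceIdentifier": "my-rds-instance",
      "DBInstanceClass": "db.m7i.4xlarge",
      "ProcessorFeatures": [
        {"Name": "coreCount", "Value": "6"},
        {"Name": "threadsPerCore", "Value": "1"}
      ]
    }
  ]
}
""")

def optimize_cpu_settings(instance):
    """Return the processor features as a dict, or None if Optimize CPU is not configured."""
    features = instance.get("ProcessorFeatures")
    if not features:
        return None
    return {f["Name"]: f["Value"] for f in features}

for db in sample["DBInstances"]:
    # prints: my-rds-instance {'coreCount': '6', 'threadsPerCore': '1'}
    print(db["DBInstanceIdentifier"], optimize_cpu_settings(db))
```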

**Example of setting the number of CPU cores for a DB instance**  
The following example modifies *mydbinstance*, setting the number of CPU cores to 4 and `threadsPerCore` to 1. Apply the changes immediately by using `--apply-immediately`. If you want to apply the changes during the next scheduled maintenance window, omit the `--apply-immediately` option.  

```
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --db-instance-class db.r7i.8xlarge \
    --processor-features "Name=coreCount,Value=4" "Name=threadsPerCore,Value=1" \
    --apply-immediately
```

**Example of returning to default processor settings for a DB instance**  
The following example modifies *mydbinstance* by returning it to its default processor values.  

```
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --db-instance-class db.r7i.8xlarge \
    --use-default-processor-features \
    --apply-immediately
```

# Microsoft SQL Server security
<a name="SQLServer.Concepts.General.FeatureSupport.UnsupportedRoles"></a>

The Microsoft SQL Server database engine uses role-based security. The master user name that you specify when you create a DB instance is a SQL Server Authentication login that is a member of the `processadmin`, `public`, and `setupadmin` fixed server roles.

Any user who creates a database is assigned to the `db_owner` role for that database and has all database-level permissions except for those that are used for backups. Amazon RDS manages backups for you.

The following server-level roles aren't available in Amazon RDS for SQL Server:
+ bulkadmin
+ dbcreator
+ diskadmin
+ securityadmin
+ serveradmin
+ sysadmin

The following server-level permissions aren't available on RDS for SQL Server DB instances:
+ ALTER ANY DATABASE
+ ALTER ANY EVENT NOTIFICATION
+ ALTER RESOURCES
+ ALTER SETTINGS (you can use the DB parameter group API operations to modify parameters; for more information, see [Parameter groups for Amazon RDS](USER_WorkingWithParamGroups.md)) 
+ AUTHENTICATE SERVER
+ CONTROL SERVER
+ CREATE DDL EVENT NOTIFICATION
+ CREATE ENDPOINT
+ CREATE SERVER ROLE
+ CREATE TRACE EVENT NOTIFICATION
+ DROP ANY DATABASE
+ EXTERNAL ACCESS ASSEMBLY
+ SHUTDOWN (You can use the RDS reboot option instead)
+ UNSAFE ASSEMBLY
+ ALTER ANY AVAILABILITY GROUP
+ CREATE ANY AVAILABILITY GROUP

## SSL support for Microsoft SQL Server DB instances
<a name="SQLServer.Concepts.General.SSL"></a>

You can use SSL to encrypt connections between your applications and your Amazon RDS DB instances running Microsoft SQL Server. You can also force all connections to your DB instance to use SSL. If you force connections to use SSL, it happens transparently to the client, and the client doesn't have to do any work to use SSL. 

SSL is supported in all AWS Regions and for all supported SQL Server editions. For more information, see [Using SSL with a Microsoft SQL Server DB instance](SQLServer.Concepts.General.SSL.Using.md). 

# Using SSL with a Microsoft SQL Server DB instance
<a name="SQLServer.Concepts.General.SSL.Using"></a>

You can use Secure Sockets Layer (SSL) to encrypt connections between your client applications and your Amazon RDS DB instances running Microsoft SQL Server. SSL support is available in all AWS Regions for all supported SQL Server editions. 

When you create a SQL Server DB instance, Amazon RDS creates an SSL certificate for it. The SSL certificate includes the DB instance endpoint as the Common Name (CN) for the SSL certificate to guard against spoofing attacks. 

There are two ways to use SSL to connect to your SQL Server DB instance: 
+ Force SSL for all connections — this happens transparently to the client, and the client doesn't have to do any work to use SSL. 
**Note**  
When you set `rds.force_ssl` to `1` and use SSMS version 19.3, 20.0, or 20.2, check the following:  
Enable **Trust Server Certificate** in SSMS.
Import the certificate in your system.
+ Encrypt specific connections — this sets up an SSL connection from a specific client computer, and you must do work on the client to encrypt connections. 

For information about Transport Layer Security (TLS) support for SQL Server, see [ TLS 1.2 support for Microsoft SQL Server](https://support.microsoft.com/en-ca/help/3135244/tls-1-2-support-for-microsoft-sql-server).

## Forcing connections to your DB instance to use SSL
<a name="SQLServer.Concepts.General.SSL.Forcing"></a>

You can force all connections to your DB instance to use SSL. If you force connections to use SSL, it happens transparently to the client, and the client doesn't have to do any work to use SSL. 

If you want to force SSL, use the `rds.force_ssl` parameter. By default, the `rds.force_ssl` parameter is set to `0 (off)`. Set the `rds.force_ssl` parameter to `1 (on)` to force connections to use SSL. The `rds.force_ssl` parameter is static, so after you change the value, you must reboot your DB instance for the change to take effect. 

**To force all connections to your DB instance to use SSL**

1. Determine the parameter group that is attached to your DB instance: 

   1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

   1. In the top right corner of the Amazon RDS console, choose the AWS Region of your DB instance. 

   1. In the navigation pane, choose **Databases**, and then choose the name of your DB instance to show its details. 

   1. Choose the **Configuration** tab. Find the **Parameter group** in the section. 

1. If necessary, create a new parameter group. If your DB instance uses the default parameter group, you must create a new parameter group. If your DB instance uses a nondefault parameter group, you can choose to edit the existing parameter group or to create a new parameter group. If you edit an existing parameter group, the change affects all DB instances that use that parameter group. 

   To create a new parameter group, follow the instructions in [Creating a DB parameter group in Amazon RDS](USER_WorkingWithParamGroups.Creating.md). 

1. Edit your new or existing parameter group to set the `rds.force_ssl` parameter to `1`. To edit the parameter group, follow the instructions in [Modifying parameters in a DB parameter group in Amazon RDS](USER_WorkingWithParamGroups.Modifying.md). 

1. If you created a new parameter group, modify your DB instance to attach the new parameter group. Modify the **DB Parameter Group** setting of the DB instance. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md). 

1. Reboot your DB instance. For more information, see [Rebooting a DB instance](USER_RebootInstance.md). 
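The same steps can be performed from the AWS CLI. The following sketch assumes a SQL Server 2019 Standard Edition instance named *mydbinstance* that currently uses the default parameter group; the parameter group name and family shown here are examples, so adjust them for your engine edition and version:

```shell
# 1. Create a custom parameter group (the family must match your edition and version).
aws rds create-db-parameter-group \
    --db-parameter-group-name sqlserver-force-ssl \
    --db-parameter-group-family sqlserver-se-15.0 \
    --description "Force SSL connections"

# 2. Set rds.force_ssl to 1. The parameter is static, so it applies at the next reboot.
aws rds modify-db-parameter-group \
    --db-parameter-group-name sqlserver-force-ssl \
    --parameters "ParameterName='rds.force_ssl',ParameterValue=1,ApplyMethod=pending-reboot"

# 3. Attach the parameter group to the DB instance.
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --db-parameter-group-name sqlserver-force-ssl \
    --apply-immediately

# 4. Reboot so the static parameter takes effect.
aws rds reboot-db-instance --db-instance-identifier mydbinstance
```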

## Encrypting specific connections
<a name="SQLServer.Concepts.General.SSL.Client"></a>

You can force all connections to your DB instance to use SSL, or you can encrypt connections from specific client computers only. To use SSL from a specific client, you must obtain certificates for the client computer, import certificates on the client computer, and then encrypt the connections from the client computer. 

**Note**  
All SQL Server instances created after August 5, 2014, use the DB instance endpoint in the Common Name (CN) field of the SSL certificate. Prior to August 5, 2014, SSL certificate verification was not available for VPC-based SQL Server instances. If you have a VPC-based SQL Server DB instance that was created before August 5, 2014, and you want to use SSL certificate verification and ensure that the instance endpoint is included as the CN for the SSL certificate for that DB instance, then rename the instance. When you rename a DB instance, a new certificate is deployed and the instance is rebooted to enable the new certificate.

### Obtaining certificates for client computers
<a name="SQLServer.Concepts.General.SSL.Certificates"></a>

To encrypt connections from a client computer to an Amazon RDS DB instance running Microsoft SQL Server, you need a certificate on your client computer. 

To obtain that certificate, download the certificate to your client computer. You can download a root certificate that works for all AWS Regions. You can also download a certificate bundle that contains both the old and new root certificates. In addition, you can download Region-specific intermediate certificates. For more information about downloading certificates, see [Using SSL/TLS to encrypt a connection to a DB instance or cluster ](UsingWithRDS.SSL.md).
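As a quick sketch, the global bundle can be fetched with `curl`. The URL below is the one AWS publishes at the time of writing; verify it against the certificate download page linked above before relying on it:

```shell
# Download the certificate bundle that covers all AWS Regions.
curl -sS -o global-bundle.pem https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
```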

After you have downloaded the appropriate certificate, import the certificate into your Microsoft Windows operating system by following the procedure in the section following. 

### Importing certificates on client computers
<a name="SQLServer.Concepts.General.SSL.Importing"></a>

You can use the following procedure to import your certificate into the Microsoft Windows operating system on your client computer. 

**To import the certificate into your Windows operating system**

1. On the **Start** menu, type **Run** in the search box and press **Enter**. 

1. In the **Open** box, type **MMC** and then choose **OK**. 

1. In the MMC console, on the **File** menu, choose **Add/Remove Snap-in**. 

1. In the **Add or Remove Snap-ins** dialog box, for **Available snap-ins**, select **Certificates**, and then choose **Add**. 

1. In the **Certificates snap-in** dialog box, choose **Computer account**, and then choose **Next**. 

1. In the **Select computer** dialog box, choose **Finish**. 

1. In the **Add or Remove Snap-ins** dialog box, choose **OK**. 

1. In the MMC console, expand **Certificates**, open the context (right-click) menu for **Trusted Root Certification Authorities**, choose **All Tasks**, and then choose **Import**. 

1. On the first page of the Certificate Import Wizard, choose **Next**. 

1. On the second page of the Certificate Import Wizard, choose **Browse**. In the browse window, change the file type to **All files (\*.\*)** because .pem is not a standard certificate extension. Locate the .pem file that you downloaded previously. 
**Note**  
When connecting from Windows clients such as SQL Server Management Studio (SSMS), we recommend using the PKCS #7 (.p7b) certificate format instead of the global-bundle.pem file. The .p7b format ensures that the complete certificate chain, including root and intermediate certificate authorities (CAs), is correctly imported into the Windows certificate store. This prevents connection failures that can occur when mandatory encryption is enabled, because .pem imports may not install the full chain properly.

1. Choose **Open** to select the certificate file, and then choose **Next**. 

1. On the third page of the Certificate Import Wizard, choose **Next**. 

1. On the fourth page of the Certificate Import Wizard, choose **Finish**. A dialog box appears indicating that the import was successful. 

1. In the MMC console, expand **Certificates**, expand **Trusted Root Certification Authorities**, and then choose **Certificates**. Locate the certificate to confirm it exists, as shown here.  
![\[In the MMC console, in the navigation pane, the Certificates folder is selected drilled down from Console Root, Certificates (Local Computer), and Trusted Root Certification Authority. In the main page, select the required CA certificate.\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/rds_sql_ssl_cert.png)

### Encrypting connections to an Amazon RDS DB instance running Microsoft SQL Server
<a name="SQLServer.Concepts.General.SSL.Encrypting"></a>

After you have imported a certificate into your client computer, you can encrypt connections from the client computer to an Amazon RDS DB instance running Microsoft SQL Server. 

For SQL Server Management Studio, use the following procedure. For more information about SQL Server Management Studio, see [Use SQL Server management studio](http://msdn.microsoft.com/en-us/library/ms174173.aspx). 

**To encrypt connections from SQL Server Management Studio**

1. Launch SQL Server Management Studio. 

1. For **Connect to server**, type the server information, login user name, and password. 

1. Choose **Options**. 

1. Select **Encrypt connection**. 

1. Choose **Connect**.

1. Confirm that your connection is encrypted by running the following query. Verify that the query returns `true` for `encrypt_option`. 

   ```
   select ENCRYPT_OPTION from SYS.DM_EXEC_CONNECTIONS where SESSION_ID = @@SPID
   ```

For any other SQL client, use the following procedure. 

**To encrypt connections from other SQL clients**

1. Append `encrypt=true` to your connection string. This string might be available as an option, or as a property on the connection page in GUI tools. 
**Note**  
To enable SSL encryption for clients that connect using JDBC, you might need to add the Amazon RDS SQL certificate to the Java CA certificate (cacerts) store. You can do this by using the [ keytool](http://docs.oracle.com/javase/7/docs/technotes/tools/solaris/keytool.html) utility. 

1. Confirm that your connection is encrypted by running the following query. Verify that the query returns `true` for `encrypt_option`. 

   ```
   select ENCRYPT_OPTION from SYS.DM_EXEC_CONNECTIONS where SESSION_ID = @@SPID
   ```
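As a rough illustration of the `encrypt=true` pattern, the following Python sketch assembles an ODBC-style connection string. The helper is hypothetical, and the keyword names (`Encrypt`, `TrustServerCertificate`) follow the common SQL Server driver convention; check your driver's documentation for the exact spelling it expects:

```python
def build_connection_string(server, database, user, password,
                            encrypt=True, trust_server_certificate=False):
    """Assemble an ODBC-style SQL Server connection string (illustrative helper)."""
    parts = {
        "Driver": "{ODBC Driver 17 for SQL Server}",
        "Server": server,
        "Database": database,
        "UID": user,
        "PWD": password,
        # Request an encrypted connection.
        "Encrypt": "yes" if encrypt else "no",
        # "no" forces validation of the server certificate against the client
        # trust store -- the safe setting once the RDS CA has been imported.
        "TrustServerCertificate": "yes" if trust_server_certificate else "no",
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())

conn_str = build_connection_string(
    "dbinstance.example-endpoint.rds.amazonaws.com,1433",
    "ExampleDB", "LOGIN_NAME", "YOUR_PASSWORD")
print(conn_str)
```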

# Configuring SQL Server security protocols and ciphers
<a name="SQLServer.Ciphers"></a>

You can turn certain security protocols and ciphers on and off using DB parameters. The security parameters that you can configure (except for TLS version 1.2) are shown in the following table. 



| DB parameter | Allowed values (default in bold) | Description | 
| --- | --- | --- | 
| rds.tls10 | default, enabled, disabled | TLS 1.0. | 
| rds.tls11 | default, enabled, disabled | TLS 1.1. | 
| rds.tls12 | default | TLS 1.2. You can't modify this value. | 
| rds.fips | 0, 1 |  When you set the parameter to 1, RDS forces the use of modules that are compliant with the Federal Information Processing Standard (FIPS) 140-2 standard. For more information, see [Use SQL Server 2016 in FIPS 140-2-compliant mode](https://docs.microsoft.com/en-us/troubleshoot/sql/security/sql-2016-fips-140-2-compliant-mode) in the Microsoft documentation.  | 
| rds.rc4 | default, enabled, disabled | RC4 stream cipher. | 
| rds.diffie-hellman | default, enabled, disabled | Diffie-Hellman key-exchange encryption. | 
| rds.diffie-hellman-min-key-bit-length | default, 1024, 2048, 3072, 4096 | Minimum bit length for Diffie-Hellman keys. | 
| rds.curve25519 | default, enabled, disabled | Curve25519 elliptic-curve encryption cipher. This parameter isn't supported for all engine versions. | 
| rds.3des168 | default, enabled, disabled | Triple Data Encryption Standard (DES) encryption cipher with a 168-bit key length. | 

**Note**  
For minor engine versions after 16.00.4120.1, 15.00.4365.2, 14.00.3465.1, 13.00.6435.1, and 12.00.6449.1, the default setting for the DB parameters `rds.tls10`, `rds.tls11`, `rds.rc4`, `rds.curve25519`, and `rds.3des168` is *disabled*. Otherwise the default setting is *enabled*.  
For minor engine versions after 16.00.4120.1, 15.00.4365.2, 14.00.3465.1, 13.00.6435.1, and 12.00.6449.1, the default setting for `rds.diffie-hellman-min-key-bit-length` is 3072. Otherwise the default setting is 2048.

Use the following process to configure the security protocols and ciphers:

1. Create a custom DB parameter group.

1. Modify the parameters in the parameter group.

1. Associate the DB parameter group with your DB instance.

For more information on DB parameter groups, see [Parameter groups for Amazon RDS](USER_WorkingWithParamGroups.md).

## Creating the security-related parameter group
<a name="CreateParamGroup.Ciphers"></a>

Create a parameter group for your security-related parameters that corresponds to the SQL Server edition and version of your DB instance.

### Console
<a name="CreateParamGroup.Ciphers.Console"></a>

The following procedure creates a parameter group for SQL Server Standard Edition 2016.

**To create the parameter group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Parameter groups**.

1. Choose **Create parameter group**.

1. In the **Create parameter group** pane, do the following:

   1. For **Parameter group family**, choose **sqlserver-se-13.0**.

   1. For **Group name**, enter an identifier for the parameter group, such as **sqlserver-ciphers-se-13**.

   1. For **Description**, enter **Parameter group for security protocols and ciphers**.

1. Choose **Create**.

### CLI
<a name="CreateParamGroup.Ciphers.CLI"></a>

The following procedure creates a parameter group for SQL Server Standard Edition 2016.

**To create the parameter group**
+ Run one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-db-parameter-group \
      --db-parameter-group-name sqlserver-ciphers-se-13 \
      --db-parameter-group-family "sqlserver-se-13.0" \
      --description "Parameter group for security protocols and ciphers"
  ```

  For Windows:

  ```
  aws rds create-db-parameter-group ^
      --db-parameter-group-name sqlserver-ciphers-se-13 ^
      --db-parameter-group-family "sqlserver-se-13.0" ^
      --description "Parameter group for security protocols and ciphers"
  ```

## Modifying security-related parameters
<a name="ModifyParams.Ciphers"></a>

Modify the security-related parameters in the parameter group that corresponds to the SQL Server edition and version of your DB instance.

### Console
<a name="ModifyParams.Ciphers.Console"></a>

The following procedure modifies the parameter group that you created for SQL Server Standard Edition 2016. This example turns off TLS version 1.0.

**To modify the parameter group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Parameter groups**.

1. Choose the parameter group, such as **sqlserver-ciphers-se-13**.

1. Under **Parameters**, filter the parameter list for **rds**.

1. Choose **Edit parameters**.

1. Choose **rds.tls10**.

1. For **Values**, choose **disabled**.

1. Choose **Save changes**.

### CLI
<a name="ModifyParams.Ciphers.CLI"></a>

The following procedure modifies the parameter group that you created for SQL Server Standard Edition 2016. This example turns off TLS version 1.0.

**To modify the parameter group**
+ Run one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-parameter-group \
      --db-parameter-group-name sqlserver-ciphers-se-13 \
      --parameters "ParameterName='rds.tls10',ParameterValue='disabled',ApplyMethod=pending-reboot"
  ```

  For Windows:

  ```
  aws rds modify-db-parameter-group ^
      --db-parameter-group-name sqlserver-ciphers-se-13 ^
      --parameters "ParameterName='rds.tls10',ParameterValue='disabled',ApplyMethod=pending-reboot"
  ```

## Associating the security-related parameter group with your DB instance
<a name="AssocParamGroup.Ciphers"></a>

To associate the parameter group with your DB instance, use the AWS Management Console or the AWS CLI.

### Console
<a name="AssocParamGroup.Ciphers.Console"></a>

You can associate the parameter group with a new or existing DB instance:
+ For a new DB instance, associate it when you launch the instance. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).
+ For an existing DB instance, associate it by modifying the instance. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

### CLI
<a name="AssocParamGroup.Ciphers.CLI"></a>

You can associate the parameter group with a new or existing DB instance.

**To create a DB instance with the parameter group**
+ Specify the same DB engine type and major version as you used when creating the parameter group.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-db-instance \
      --db-instance-identifier mydbinstance \
      --db-instance-class db.m5.2xlarge \
      --engine sqlserver-se \
      --engine-version 13.00.5426.0.v1 \
      --allocated-storage 100 \
      --master-user-password secret123 \
      --master-username admin \
      --storage-type gp2 \
      --license-model li \
      --db-parameter-group-name sqlserver-ciphers-se-13
  ```

  For Windows:

  ```
  aws rds create-db-instance ^
      --db-instance-identifier mydbinstance ^
      --db-instance-class db.m5.2xlarge ^
      --engine sqlserver-se ^
      --engine-version 13.00.5426.0.v1 ^
      --allocated-storage 100 ^
      --master-user-password secret123 ^
      --master-username admin ^
      --storage-type gp2 ^
      --license-model li ^
      --db-parameter-group-name sqlserver-ciphers-se-13
  ```
**Note**  
As a security best practice, specify a password other than the one shown here.

**To modify a DB instance and associate the parameter group**
+ Run one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-instance \
      --db-instance-identifier mydbinstance \
      --db-parameter-group-name sqlserver-ciphers-se-13 \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds modify-db-instance ^
      --db-instance-identifier mydbinstance ^
      --db-parameter-group-name sqlserver-ciphers-se-13 ^
      --apply-immediately
  ```

# Updating applications to connect to Microsoft SQL Server DB instances using new SSL/TLS certificates
<a name="ssl-certificate-rotation-sqlserver"></a>

As of January 13, 2023, Amazon RDS has published new Certificate Authority (CA) certificates for connecting to your RDS DB instances using Secure Sockets Layer or Transport Layer Security (SSL/TLS). Following, you can find information about updating your applications to use the new certificates.

This topic can help you to determine whether any client applications use SSL/TLS to connect to your DB instances. If they do, you can further check whether those applications require certificate verification to connect. 

**Note**  
Some applications are configured to connect to SQL Server DB instances only if they can successfully verify the certificate on the server.   
For such applications, you must update your client application trust stores to include the new CA certificates. 

After you update your CA certificates in the client application trust stores, you can rotate the certificates on your DB instances. We strongly recommend testing these procedures in a development or staging environment before implementing them in your production environments.

For more information about certificate rotation, see [Rotating your SSL/TLS certificate](UsingWithRDS.SSL-certificate-rotation.md). For more information about downloading certificates, see [Using SSL/TLS to encrypt a connection to a DB instance or cluster ](UsingWithRDS.SSL.md). For information about using SSL/TLS with Microsoft SQL Server DB instances, see [Using SSL with a Microsoft SQL Server DB instance](SQLServer.Concepts.General.SSL.Using.md).

**Topics**
+ [Determining whether any applications are connecting to your Microsoft SQL Server DB instance using SSL](#ssl-certificate-rotation-sqlserver.determining-server)
+ [Determining whether a client requires certificate verification in order to connect](#ssl-certificate-rotation-sqlserver.determining-client)
+ [Updating your application trust store](#ssl-certificate-rotation-sqlserver.updating-trust-store)

## Determining whether any applications are connecting to your Microsoft SQL Server DB instance using SSL
<a name="ssl-certificate-rotation-sqlserver.determining-server"></a>

Check the DB instance configuration for the value of the `rds.force_ssl` parameter. By default, the `rds.force_ssl` parameter is set to 0 (off). If the `rds.force_ssl` parameter is set to 1 (on), clients are required to use SSL/TLS for connections. For more information about parameter groups, see [Parameter groups for Amazon RDS](USER_WorkingWithParamGroups.md).

Run the following query to get the current encryption option for all the open connections to a DB instance. The column `ENCRYPT_OPTION` returns `TRUE` if the connection is encrypted.

```
select SESSION_ID,
    ENCRYPT_OPTION,
    NET_TRANSPORT,
    AUTH_SCHEME
    from SYS.DM_EXEC_CONNECTIONS
```

This query shows only the current connections. It doesn't show whether applications that have connected and disconnected in the past have used SSL.
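If you capture the query results through your client library, a quick way to flag sessions that would break under forced SSL is to filter on `ENCRYPT_OPTION`. The rows below are fabricated for illustration:

```python
# Hypothetical rows as returned by the SYS.DM_EXEC_CONNECTIONS query above.
rows = [
    {"SESSION_ID": 51, "ENCRYPT_OPTION": "TRUE",  "NET_TRANSPORT": "TCP", "AUTH_SCHEME": "SQL"},
    {"SESSION_ID": 52, "ENCRYPT_OPTION": "FALSE", "NET_TRANSPORT": "TCP", "AUTH_SCHEME": "SQL"},
]

def unencrypted_sessions(connections):
    """Return the session IDs of connections that are not using SSL/TLS."""
    return [r["SESSION_ID"] for r in connections if r["ENCRYPT_OPTION"] != "TRUE"]

# Sessions worth investigating before you enforce SSL on the instance.
print(unencrypted_sessions(rows))  # prints: [52]
```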

## Determining whether a client requires certificate verification in order to connect
<a name="ssl-certificate-rotation-sqlserver.determining-client"></a>

You can check whether different types of clients require certificate verification to connect.

**Note**  
If you use connectors other than the ones listed, see the specific connector's documentation for information about how it enforces encrypted connections. For more information, see [Connection modules for Microsoft SQL databases](https://docs.microsoft.com/en-us/sql/connect/sql-connection-libraries?view=sql-server-ver15) in the Microsoft SQL Server documentation.

### SQL Server Management Studio
<a name="ssl-certificate-rotation-sqlserver.determining-client.management-studio"></a>

Check whether encryption is enforced for SQL Server Management Studio connections:

1. Launch SQL Server Management Studio.

1. For **Connect to server**, enter the server information, login user name, and password.

1. Choose **Options**.

1. Check if **Encrypt connection** is selected in the connect page.

For more information about SQL Server Management Studio, see [Use SQL Server Management Studio](http://msdn.microsoft.com/en-us/library/ms174173.aspx).

### Sqlcmd
<a name="ssl-certificate-rotation-sqlserver.determining-client.sqlcmd"></a>

The following example with the `sqlcmd` client shows how to check a script's SQL Server connection to determine whether successful connections require a valid certificate. For more information, see [Connecting with sqlcmd](https://docs.microsoft.com/en-us/sql/connect/odbc/linux-mac/connecting-with-sqlcmd?view=sql-server-ver15) in the Microsoft SQL Server documentation.

When using `sqlcmd`, an SSL connection requires verification against the server certificate if you use the `-N` command argument to encrypt connections, as in the following example.

```
$ sqlcmd -N -S dbinstance.rds.amazon.com -d ExampleDB                     
```

**Note**  
If `sqlcmd` is invoked with the `-C` option, it trusts the server certificate, even if that doesn't match the client-side trust store.

### ADO.NET
<a name="ssl-certificate-rotation-sqlserver.determining-client.adonet"></a>

In the following example, the application connects using SSL, and the server certificate must be verified.

```
using SQLC = Microsoft.Data.SqlClient;
 
...
 
    static public void Main()  
    {  
        using (var connection = new SQLC.SqlConnection(
            "Server=tcp:dbinstance.rds.amazon.com;" +
            "Database=ExampleDB;User ID=LOGIN_NAME;" +
            "Password=YOUR_PASSWORD;" + 
            "Encrypt=True;TrustServerCertificate=False;"
            ))
        {  
            connection.Open();  
            ...
        }
```

### Java
<a name="ssl-certificate-rotation-sqlserver.determining-client.java"></a>

In the following example, the application connects using SSL, and the server certificate must be verified.

```
String connectionUrl =   
    "jdbc:sqlserver://dbinstance.rds.amazon.com;" +  
    "databaseName=ExampleDB;integratedSecurity=true;" +  
    "encrypt=true;trustServerCertificate=false";
```

To enable SSL encryption for clients that connect using JDBC, you might need to add the Amazon RDS certificate to the Java CA certificate store. For instructions, see [Configuring the client for encryption](https://docs.microsoft.com/en-us/SQL/connect/jdbc/configuring-the-client-for-ssl-encryption?view=sql-server-2017) in the Microsoft SQL Server documentation. You can also provide the trusted CA certificate file name directly by appending `trustStore=path-to-certificate-trust-store-file` to the connection string.

**Note**  
If you use `TrustServerCertificate=true` (or its equivalent) in the connection string, the connection process skips the trust chain validation. In this case, the application connects even if the certificate can't be verified. Using `TrustServerCertificate=false` enforces certificate validation and is a best practice.
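As a quick illustration of that recommendation, the following Python sketch (a hypothetical helper, not part of any driver or AWS tooling) checks whether an ADO.NET-style connection string both encrypts the connection and keeps server certificate validation enabled:

```python
def enforces_validation(connection_string):
    """Return True if the connection string both encrypts the
    connection and requires server certificate validation."""
    # Parse "key=value" pairs; connection-string keywords are case-insensitive.
    options = {}
    for part in connection_string.split(";"):
        if "=" in part:
            key, _, value = part.partition("=")
            options[key.strip().lower()] = value.strip().lower()
    encrypted = options.get("encrypt") in ("true", "yes")
    trusts_any = options.get("trustservercertificate") in ("true", "yes")
    return encrypted and not trusts_any

# The secure configuration from the examples above passes the check:
print(enforces_validation(
    "Server=tcp:dbinstance.rds.amazon.com;"
    "Encrypt=True;TrustServerCertificate=False;"))  # True
```

A string with `TrustServerCertificate=True` fails the check, because it skips trust chain validation.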

## Updating your application trust store
<a name="ssl-certificate-rotation-sqlserver.updating-trust-store"></a>

You can update the trust store for applications that use Microsoft SQL Server. For instructions, see [Encrypting specific connections](SQLServer.Concepts.General.SSL.Using.md#SQLServer.Concepts.General.SSL.Client). Also, see [Configuring the client for encryption](https://docs.microsoft.com/en-us/SQL/connect/jdbc/configuring-the-client-for-ssl-encryption?view=sql-server-2017) in the Microsoft SQL Server documentation.

If you are using an operating system other than Microsoft Windows, see the documentation for your SSL/TLS implementation (for example, OpenSSL or GnuTLS) for information about adding a new root CA certificate, and use that implementation's method to add trust to the RDS root CA certificate. Microsoft provides instructions for configuring certificates on some systems.

For information about downloading the root certificate, see [Using SSL/TLS to encrypt a connection to a DB instance or cluster ](UsingWithRDS.SSL.md).

For sample scripts that import certificates, see [Sample script for importing certificates into your trust store](UsingWithRDS.SSL-certificate-rotation.md#UsingWithRDS.SSL-certificate-rotation-sample-script).

**Note**  
When you update the trust store, you can retain older certificates in addition to adding the new certificates.

## Compliance program support for Microsoft SQL Server DB instances
<a name="SQLServer.Concepts.General.Compliance"></a>

AWS services in scope have been fully assessed by a third-party auditor, resulting in a certification, attestation of compliance, or Authority to Operate (ATO). For more information, see [AWS services in scope by compliance program](https://aws.amazon.com/compliance/services-in-scope/).

### HIPAA support for Microsoft SQL Server DB instances
<a name="SQLServer.Concepts.General.HIPAA"></a>

You can use Amazon RDS for Microsoft SQL Server databases to build HIPAA-compliant applications. You can store healthcare-related information, including protected health information (PHI), under a Business Associate Agreement (BAA) with AWS. For more information, see [HIPAA compliance](https://aws.amazon.com/compliance/hipaa-compliance/).

Amazon RDS for SQL Server supports HIPAA for the following versions and editions:
+ SQL Server 2022 Enterprise, Standard, and Web Editions
+ SQL Server 2019 Enterprise, Standard, and Web Editions
+ SQL Server 2017 Enterprise, Standard, and Web Editions
+ SQL Server 2016 Enterprise, Standard, and Web Editions

To enable HIPAA support on your DB instance, set up the following three components.


****  

| Component | Details | 
| --- | --- | 
|  Auditing  |  To set up auditing, set the parameter `rds.sqlserver_audit` to the value `fedramp_hipaa`. If your DB instance is not already using a custom DB parameter group, you must create a custom parameter group and attach it to your DB instance before you can modify the `rds.sqlserver_audit` parameter. For more information, see [Parameter groups for Amazon RDS](USER_WorkingWithParamGroups.md).  | 
|  Transport encryption  |  To set up transport encryption, force all connections to your DB instance to use Secure Sockets Layer (SSL). For more information, see [Forcing connections to your DB instance to use SSL](SQLServer.Concepts.General.SSL.Using.md#SQLServer.Concepts.General.SSL.Forcing).  | 
|  Encryption at rest  |  To set up encryption at rest, you have two options: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SQLServer.html)  | 

# Microsoft SQL Server versions on Amazon RDS
<a name="SQLServer.Concepts.General.VersionSupport"></a>

You can specify any currently supported Microsoft SQL Server version when creating a new DB instance. You can specify the Microsoft SQL Server major version (such as Microsoft SQL Server 14.00), and any supported minor version for the specified major version. If no version is specified, Amazon RDS defaults to a supported version, typically the most recent version. If a major version is specified but a minor version is not, Amazon RDS defaults to a recent release of the major version you have specified.

The following table shows the supported SQL Server versions for all editions and all AWS Regions, except where noted. 

**Note**  
You can also use the [describe-db-engine-versions](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-engine-versions.html) AWS CLI command to see a list of supported versions, as well as defaults for newly created DB instances. You can view the major versions of your SQL Server databases by running the [describe-db-major-engine-versions](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-major-engine-versions.html) AWS CLI command or by using the [DescribeDBMajorEngineVersions](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBMajorEngineVersions.html) RDS API operation.


| Major version | Minor version | RDS API `EngineVersion` and CLI `engine-version` | 
| --- | --- | --- | 
| SQL Server 2022 |  16.00.4245.2 (CU24) 16.00.4236.2 (CU23) 16.00.4230.2 (CU22 GDR) 16.00.4225.2 (CU22) 16.00.4215.2 (CU21) 16.00.4210.1 (CU20 GDR) 16.00.4205.1 (CU20) 16.00.4195.2 (CU19) 16.00.4185.3 (CU18) 16.00.4175.1 (CU17) 16.00.4165.4 (CU16) 16.00.4150.1 (CU15) 16.00.4140.3 (CU14 GDR) 16.00.4135.4 (CU14) 16.00.4131.2 (CU13) 16.00.4125.3 (CU13) 16.00.4120.1 (CU12 GDR) 16.00.4115.5 (CU12) 16.00.4105.2 (CU11) 16.00.4095.4 (CU10) 16.00.4085.2 (CU9)  |  `16.00.4245.2.v1` `16.00.4236.2.v1` `16.00.4230.2.v1` `16.00.4225.2.v1` `16.00.4215.2.v1` `16.00.4210.1.v1` `16.00.4205.1.v1` `16.00.4195.2.v1` `16.00.4185.3.v1` `16.00.4175.1.v1` `16.00.4165.4.v1` `16.00.4150.1.v1` `16.00.4140.3.v1` `16.00.4135.4.v1` `16.00.4131.2.v1` `16.00.4125.3.v1` `16.00.4120.1.v1` `16.00.4115.5.v1` `16.00.4105.2.v1` `16.00.4095.4.v1` `16.00.4085.2.v1`  | 
| SQL Server 2019 |  15.00.4460.4 (CU32 GDR) 15.00.4455.2 (CU32 GDR) 15.00.4445.1 (CU32 GDR) 15.00.4440.1 (CU32 GDR) 15.00.4435.7 (CU32) 15.00.4430.1 (CU32) 15.00.4420.2 (CU31) 15.00.4415.2 (CU30) 15.00.4410.1 (CU29 GDR) 15.00.4395.2 (CU28) 15.00.4390.2 (CU28) 15.00.4385.2 (CU28) 15.00.4382.1 (CU27) 15.00.4375.4 (CU27) 15.00.4365.2 (CU26) 15.00.4355.3 (CU25) 15.00.4345.5 (CU24) 15.00.4335.1 (CU23) 15.00.4322.2 (CU22) 15.00.4316.3 (CU21) 15.00.4312.2 (CU20) 15.00.4236.7 (CU16) 15.00.4198.2 (CU15) 15.00.4153.1 (CU12) 15.00.4073.23 (CU8) 15.00.4043.16 (CU5)  |  `15.00.4460.4.v1` `15.00.4455.2.v1` `15.00.4445.1.v1` `15.00.4440.1.v1` `15.00.4435.7.v1` `15.00.4430.1.v1` `15.00.4420.2.v1` `15.00.4415.2.v1` `15.00.4410.1.v1` `15.00.4395.2.v1` `15.00.4390.2.v1` `15.00.4385.2.v1` `15.00.4382.1.v1` `15.00.4375.4.v1` `15.00.4365.2.v1` `15.00.4355.3.v1` `15.00.4345.5.v1` `15.00.4335.1.v1` `15.00.4322.2.v1` `15.00.4316.3.v1` `15.00.4312.2.v1` `15.00.4236.7.v1` `15.00.4198.2.v1` `15.00.4153.1.v1` `15.00.4073.23.v1` `15.00.4043.16.v1`  | 
| SQL Server 2017 |  14.00.3520.4 (CU31 GDR) 14.00.3515.1 (CU31 GDR) 14.00.3505.1 (CU31 GDR) 14.00.3500.1 (CU31 GDR) 14.00.3495.9 (CU31 GDR) 14.00.3485.1 (CU31 GDR) 14.00.3480.1 (CU31) 14.00.3475.1 (CU31) 14.00.3471.2 (CU31) 14.00.3465.1 (CU31) 14.00.3460.9 (CU31) 14.00.3451.2 (CU30) 14.00.3421.10 (CU27) 14.00.3401.7 (CU25) 14.00.3381.3 (CU23) 14.00.3356.20 (CU22) 14.00.3294.2 (CU20) 14.00.3281.6 (CU19)  |  `14.00.3520.4.v1` `14.00.3515.1.v1` `14.00.3505.1.v1` `14.00.3500.1.v1` `14.00.3495.9.v1` `14.00.3485.1.v1` `14.00.3480.1.v1` `14.00.3475.1.v1` `14.00.3471.2.v1` `14.00.3465.1.v1` `14.00.3460.9.v1` `14.00.3451.2.v1` `14.00.3421.10.v1` `14.00.3401.7.v1` `14.00.3381.3.v1` `14.00.3356.20.v1` `14.00.3294.2.v1` `14.00.3281.6.v1`  | 
| SQL Server 2016 |  13.00.6480.4 (GDR) 13.00.6475.1 (GDR) 13.00.6470.1 (GDR) 13.00.6465.1 (GDR) 13.00.6460.7 (GDR) 13.00.6455.2 (GDR) 13.00.6450.1 (GDR) 13.00.6445.1 (GDR) 13.00.6441.1 (GDR) 13.00.6435.1 (GDR) 13.00.6430.49 (GDR) 13.00.6419.1 (SP3 GDR) 13.00.6300.2 (SP3)  |  `13.00.6480.4.v1` `13.00.6475.1.v1` `13.00.6470.1.v1` `13.00.6465.1.v1` `13.00.6460.7.v1` `13.00.6455.2.v1` `13.00.6450.1.v1` `13.00.6445.1.v1` `13.00.6441.1.v1` `13.00.6435.1.v1` `13.00.6430.49.v1` `13.00.6419.1.v1` `13.00.6300.2.v1`  | 
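Because the `EngineVersion` strings share a fixed numeric layout, you can sort them to find the most recent minor version for a major version. The following Python sketch is a convenience helper (not part of the RDS API); the sample values are taken from the table above, as you might receive them from `describe-db-engine-versions`:

```python
def latest_engine_version(engine_versions, major):
    """Return the highest engine version that starts with the given
    major prefix, e.g. major="15.00" for SQL Server 2019."""
    def key(version):
        # "15.00.4460.4.v1" -> (15, 0, 4460, 4); ignore the ".v1" suffix.
        return tuple(int(field) for field in version.split(".")[:4])
    candidates = [v for v in engine_versions if v.startswith(major + ".")]
    return max(candidates, key=key) if candidates else None

versions = ["15.00.4043.16.v1", "15.00.4460.4.v1", "16.00.4245.2.v1"]
print(latest_engine_version(versions, "15.00"))  # 15.00.4460.4.v1
```

Sorting on the parsed numeric tuple avoids the pitfalls of comparing version strings lexicographically.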

## Version management in Amazon RDS
<a name="SQLServer.Concepts.General.Version-Management"></a>

Amazon RDS includes flexible version management that lets you control when and how your DB instance is patched or upgraded. This enables you to do the following for your DB engine:
+ Maintain compatibility with database engine patch versions.
+ Test new patch versions to verify that they work with your application before you deploy them in production.
+ Plan and perform version upgrades to meet your service level agreements and timing requirements.

### Microsoft SQL Server engine patching in Amazon RDS
<a name="SQLServer.Concepts.General.Patching"></a>

Amazon RDS periodically aggregates official Microsoft SQL Server database patches into a DB instance engine version that's specific to Amazon RDS. For more information about the Microsoft SQL Server patches in each engine version, see [Version and feature support on Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SQLServer.html#SQLServer.Concepts.General.FeatureSupport).

Currently, you manually perform all engine upgrades on your DB instance. For more information, see [Upgrades of the Microsoft SQL Server DB engine](USER_UpgradeDBInstance.SQLServer.md). 

### Deprecation schedule for major engine versions of Microsoft SQL Server on Amazon RDS
<a name="SQLServer.Concepts.General.Deprecated-Versions"></a>

The following table displays the planned schedule of deprecations for major engine versions of Microsoft SQL Server.


| Date | Information | 
| --- | --- | 
| July 14, 2026 |  Microsoft will stop critical patch updates for SQL Server 2016. For more information, see [Microsoft SQL Server 2016](https://learn.microsoft.com/en-us/lifecycle/products/sql-server-2016) in the Microsoft documentation.  | 
| July 14, 2026 |  Amazon RDS plans to end support of Microsoft SQL Server 2016 on RDS for SQL Server. At that time, any remaining instances will be scheduled to migrate to SQL Server 2019 (latest minor version available). For more information, see [Announcement: Amazon RDS for SQL Server ending support for Microsoft SQL Server 2016](https://repost.aws/articles/ARGkeWligDSU-MQgBwUQj0nA/announcement-amazon-rds-for-sql-server-ending-support-for-microsoft-sql-server-2016). To avoid an automatic upgrade from Microsoft SQL Server 2016, you can upgrade at a time that is convenient to you. For more information, see [Upgrading a DB instance engine version](USER_UpgradeDBInstance.Upgrading.md).  | 
| January 15, 2026 | Amazon RDS is starting to disable the creation of new RDS for SQL Server DB instances using Microsoft SQL Server 2016. For more information, see [Announcement: Amazon RDS for SQL Server ending support for Microsoft SQL Server 2016](https://repost.aws/articles/ARGkeWligDSU-MQgBwUQj0nA/announcement-amazon-rds-for-sql-server-ending-support-for-microsoft-sql-server-2016). | 
| July 9, 2024 |  Microsoft will stop critical patch updates for SQL Server 2014. For more information, see [Microsoft SQL Server 2014](https://learn.microsoft.com/en-us/lifecycle/products/sql-server-2014) in the Microsoft documentation.  | 
|  June 1, 2024 |  Amazon RDS plans to end support of Microsoft SQL Server 2014 on RDS for SQL Server. At that time, any remaining instances will be scheduled to migrate to SQL Server 2016 (latest minor version available). For more information, see [Announcement: Amazon RDS for SQL Server ending support for SQL Server 2014 major versions](https://repost.aws/articles/AR-eyAH1PSSuevuZRUE9FV3A). To avoid an automatic upgrade from Microsoft SQL Server 2014, you can upgrade at a time that is convenient to you. For more information, see [Upgrading a DB instance engine version](USER_UpgradeDBInstance.Upgrading.md).  | 
| July 12, 2022 |  Microsoft will stop critical patch updates for SQL Server 2012. For more information, see [Microsoft SQL Server 2012](https://docs.microsoft.com/en-us/lifecycle/products/microsoft-sql-server-2012) in the Microsoft documentation.  | 
| June 1, 2022 |  Amazon RDS plans to end support of Microsoft SQL Server 2012 on RDS for SQL Server. At that time, any remaining instances will be scheduled to migrate to SQL Server 2014 (latest minor version available). For more information, see [Announcement: Amazon RDS for SQL Server ending support for SQL Server 2012 major versions](https://repost.aws/questions/QUFNiETqrMQ_WT_AXSxOYNOA). To avoid an automatic upgrade from Microsoft SQL Server 2012, you can upgrade at a time that is convenient to you. For more information, see [Upgrading a DB instance engine version](USER_UpgradeDBInstance.Upgrading.md).  | 
| September 1, 2021 | Amazon RDS is starting to disable the creation of new RDS for SQL Server DB instances using Microsoft SQL Server 2012. For more information, see [Announcement: Amazon RDS for SQL Server ending support for SQL Server 2012 major versions](https://repost.aws/questions/QUFNiETqrMQ_WT_AXSxOYNOA). | 
| July 12, 2019 |  The Amazon RDS team deprecated support for Microsoft SQL Server 2008 R2 in June 2019. Remaining instances of Microsoft SQL Server 2008 R2 are migrating to SQL Server 2012 (latest minor version available).  To avoid an automatic upgrade from Microsoft SQL Server 2008 R2, you can upgrade at a time that is convenient to you. For more information, see [Upgrading a DB instance engine version](USER_UpgradeDBInstance.Upgrading.md).  | 
| April 25, 2019 | As of the end of April 2019, you can no longer create new Amazon RDS for SQL Server DB instances using Microsoft SQL Server 2008 R2. | 

# Amazon RDS for SQL Server version policy
<a name="SQLServer.Concepts.General.VersionPolicy"></a>

This topic describes the version policy for Amazon RDS for SQL Server, including supported major and minor versions, release timelines, deprecation procedures, and upgrade guidance.

## Amazon RDS for SQL Server major versions
<a name="SQLServer.Concepts.General.VersionPolicy.MajorVersions"></a>

Amazon RDS supports several major versions of Microsoft SQL Server, including SQL Server 2014, 2016, 2017, 2019, and 2022.


**Microsoft SQL Server major version support**  

| Microsoft SQL Server major version | RDS for SQL Server support | 
| --- | --- | 
| 2008 | No longer supported | 
| 2012 | No longer supported | 
| 2014 | No longer supported | 
| 2016 | Supported | 
| 2017 | Supported | 
| 2019 | Supported | 
| 2022 | Supported | 

## Amazon RDS for SQL Server minor versions
<a name="SQLServer.Concepts.General.VersionPolicy.MinorVersions"></a>

RDS for SQL Server customers can specify any currently supported Microsoft SQL Server version when creating a new DB instance. You can specify the Microsoft SQL Server major version (such as Microsoft SQL Server 2022), and any supported minor version for the specified major version. If no version is specified, Amazon RDS defaults to a supported version, typically the most recent version. If a major version is specified but a minor version is not, Amazon RDS defaults to a recent release of the major version you have specified.

The following table shows the supported versions for all editions and all AWS Regions, except where noted. You can also use the `describe-db-engine-versions` CLI command to see a list of supported versions, as well as defaults for newly created DB instances.


**Supported minor versions for Amazon RDS for SQL Server**  

| Major version | Minor version | RDS API `EngineVersion` and CLI `engine-version` | 
| --- | --- | --- | 
| SQL Server 2022 |  16.00.4245.2 (CU24) 16.00.4236.2 (CU23 v2) 16.00.4230.2 (GDR) 16.00.4225.2 (CU22) 16.00.4215.2 (CU21) 16.00.4210.1 (GDR) 16.00.4205.1 (CU20) 16.00.4195.2 (CU19) 16.00.4185.3 (CU18) 16.00.4175.1 (CU17) 16.00.4165.4 (CU16) 16.00.4150.1 (GDR) 16.00.4145.4 (CU15) 16.00.4140.3 (GDR) 16.00.4135.4 (CU14) 16.00.4131.2 (CU13) 16.00.4125.3 (CU13) 16.00.4120.1 (GDR) 16.00.4115.5 (CU12) 16.00.4105.2 (CU11) 16.00.4095.4 (CU10) 16.00.4085.2 (CU9)  |  `16.00.4245.2.v1` `16.00.4236.2.v1` `16.00.4230.2.v1` `16.00.4225.2.v1` `16.00.4215.2.v1` `16.00.4210.1.v1` `16.00.4205.1.v1` `16.00.4195.2.v1` `16.00.4185.3.v1` `16.00.4175.1.v1` `16.00.4165.4.v1` `16.00.4150.1.v1` `16.00.4145.4.v1` `16.00.4140.3.v1` `16.00.4135.4.v1` `16.00.4131.2.v1` `16.00.4125.3.v1` `16.00.4120.1.v1` `16.00.4115.5.v1` `16.00.4105.2.v1` `16.00.4095.4.v1` `16.00.4085.2.v1`  | 
| SQL Server 2019 |  15.00.4460.4 (GDR) 15.00.4455.2 (GDR) 15.00.4445.1 (GDR) 15.00.4440.1 (GDR) 15.00.4435.7 (GDR) 15.00.4430.1 (CU32) 15.00.4420.2 (CU31) 15.00.4415.2 (CU30) 15.00.4410.1 (GDR) 15.00.4405.4 (CU29) 15.00.4395.2 (GDR) 15.00.4390.2 (GDR) 15.00.4385.2 (CU28) 15.00.4382.1 (CU27) 15.00.4375.4 (CU27) 15.00.4365.2 (CU26) 15.00.4355.3 (CU25) 15.00.4345.5 (CU24) 15.00.4335.1 (CU23) 15.00.4322.2 (CU22) 15.00.4316.3 (CU21) 15.00.4312.2 (CU20) 15.00.4236.7 (CU16 SU) 15.00.4198.2 (CU15) 15.00.4153.1 (CU12) 15.00.4073.23 (CU8) 15.00.4043.16 (CU5)  |  `15.00.4460.4.v1` `15.00.4455.2.v1` `15.00.4445.1.v1` `15.00.4440.1.v1` `15.00.4435.7.v1` `15.00.4430.1.v1` `15.00.4420.2.v1` `15.00.4415.2.v1` `15.00.4410.1.v1` `15.00.4405.4.v1` `15.00.4395.2.v1` `15.00.4390.2.v1` `15.00.4385.2.v1` `15.00.4382.1.v1` `15.00.4375.4.v1` `15.00.4365.2.v1` `15.00.4355.3.v1` `15.00.4345.5.v1` `15.00.4335.1.v1` `15.00.4322.2.v1` `15.00.4316.3.v1` `15.00.4312.2.v1` `15.00.4236.7.v1` `15.00.4198.2.v1` `15.00.4153.1.v1` `15.00.4073.23.v1` `15.00.4043.16.v1`  | 
| SQL Server 2017 |  14.00.3520.4 (GDR) 14.00.3515.1 (GDR) 14.00.3505.1 (GDR) 14.00.3500.1 (GDR) 14.00.3495.9 (GDR) 14.00.3485.1 (GDR) 14.00.3480.1 (GDR) 14.00.3475.1 (GDR) 14.00.3471.2 (GDR) 14.00.3465.1 (GDR) 14.00.3460.9 (CU31) 14.00.3451.2 (CU30) 14.00.3421.10 (CU27) 14.00.3401.7 (CU25) 14.00.3381.3 (CU23) 14.00.3356.20 (CU22) 14.00.3294.2 (CU20) 14.00.3281.6 (CU19)  |  `14.00.3520.4.v1` `14.00.3515.1.v1` `14.00.3505.1.v1` `14.00.3500.1.v1` `14.00.3495.9.v1` `14.00.3485.1.v1` `14.00.3480.1.v1` `14.00.3475.1.v1` `14.00.3471.2.v1` `14.00.3465.1.v1` `14.00.3460.9.v1` `14.00.3451.2.v1` `14.00.3421.10.v1` `14.00.3401.7.v1` `14.00.3381.3.v1` `14.00.3356.20.v1` `14.00.3294.2.v1` `14.00.3281.6.v1`  | 
| SQL Server 2016 |  13.00.6480.4 (GDR) 13.00.6475.1 (GDR) 13.00.6470.1 (GDR) 13.00.6465.1 (GDR) 13.00.6460.7 (GDR) 13.00.6455.2 (GDR) 13.00.6450.1 (GDR) 13.00.6445.1 (GDR) 13.00.6441.1 (GDR) 13.00.6435.1 (SP3 GDR) 13.00.6430.49 (SP3 GDR) 13.00.6419.1 (SP3 GDR) 13.00.6300.2 (SP3)  |  `13.00.6480.4.v1` `13.00.6475.1.v1` `13.00.6470.1.v1` `13.00.6465.1.v1` `13.00.6460.7.v1` `13.00.6455.2.v1` `13.00.6450.1.v1` `13.00.6445.1.v1` `13.00.6441.1.v1` `13.00.6435.1.v1` `13.00.6430.49.v1` `13.00.6419.1.v1` `13.00.6300.2.v1`  | 

## When does Amazon RDS for SQL Server introduce support for new major versions
<a name="SQLServer.Concepts.General.VersionPolicy.NewMajorVersions"></a>

Amazon RDS for SQL Server typically introduces support for new major SQL Server database versions within 6–12 months after Microsoft's general availability release date. When adding support for a new major version, Amazon RDS selects the first minor version that has undergone comprehensive testing to ensure stability.

## How long Amazon RDS for SQL Server major versions remain available
<a name="SQLServer.Concepts.General.VersionPolicy.MajorVersionAvailability"></a>

Amazon RDS for SQL Server maintains support for major versions until Microsoft's Extended End Date, which serves as the End of Support (EOS) date. (See [Microsoft Documentation](https://learn.microsoft.com/en-us/lifecycle/products/).) Below are the key dates to help plan your testing and upgrade cycles. Note that these dates indicate the earliest possible required upgrade timeline, and may be extended later by Amazon.


**Amazon RDS for SQL Server major version end of support dates**  

| SQL Server major version | Microsoft End Of Support date | Expected date for upgrading to a newer version | 
| --- | --- | --- | 
| 2016 | 7/14/2026 | 7/14/2026 | 
| 2017 | 10/12/2027 | 10/12/2027 | 
| 2019 | 1/8/2030 | 1/8/2030 | 
| 2022 | 1/11/2033 | 1/11/2033 | 

## How often Amazon RDS for SQL Server minor versions are released
<a name="SQLServer.Concepts.General.VersionPolicy.MinorVersionRelease"></a>

In general, Amazon RDS for SQL Server minor versions are released within 30 days after they are made available by Microsoft. The release schedule may vary to include additional features or fixes.

## How long Amazon RDS for SQL Server minor versions remain available
<a name="SQLServer.Concepts.General.VersionPolicy.MinorVersionAvailability"></a>

In general, Amazon RDS for SQL Server supports the latest three minor versions for each major version. The exact number of minor versions for each major version may vary due to the Microsoft support timeline for each minor version, the Amazon RDS maintenance schedule, and other factors.


**Amazon RDS for SQL Server minor version end of support dates**  

| Major version | Minor version | Microsoft End Of Support date | Expected date for upgrading to a newer version | 
| --- | --- | --- | --- | 
| SQL Server 2022 |  16.00.4245.2 (CU24) 16.00.4236.2 (CU23 v2) 16.00.4230.2 (GDR) 16.00.4225.2 (CU22) 16.00.4215.2 (CU21) 16.00.4210.1 (GDR) 16.00.4205.1 (CU20) 16.00.4195.2 (CU19) 16.00.4185.3 (CU18) 16.00.4175.1 (CU17) 16.00.4165.4 (CU16) 16.00.4150.1 (GDR) 16.00.4145.4 (CU15) 16.00.4140.3 (GDR) 16.00.4135.4 (CU14) 16.00.4131.2 (CU13) 16.00.4125.3 (CU13) 16.00.4120.1 (GDR) 16.00.4115.5 (CU12) 16.00.4105.2 (CU11) 16.00.4095.4 (CU10) 16.00.4085.2 (CU9)  | 1/11/2033 | 1/11/2033 | 
| SQL Server 2019 |  15.00.4460.4 (GDR) 15.00.4455.2 (GDR) 15.00.4445.1 (GDR) 15.00.4440.1 (GDR) 15.00.4435.7 (GDR) 15.00.4430.1 (CU32) 15.00.4420.2 (CU31) 15.00.4415.2 (CU30) 15.00.4410.1 (GDR) 15.00.4405.4 (CU29) 15.00.4395.2 (GDR) 15.00.4390.2 (GDR) 15.00.4385.2 (CU28) 15.00.4382.1 (CU27) 15.00.4375.4 (CU27) 15.00.4365.2 (CU26) 15.00.4355.3 (CU25) 15.00.4345.5 (CU24) 15.00.4335.1 (CU23) 15.00.4322.2 (CU22) 15.00.4316.3 (CU21) 15.00.4312.2 (CU20) 15.00.4236.7 (CU16 SU) 15.00.4198.2 (CU15) 15.00.4153.1 (CU12) 15.00.4073.23 (CU8) 15.00.4043.16 (CU5)  | 1/8/2030 | 1/8/2030 | 
| SQL Server 2017 |  14.00.3520.4 (GDR) 14.00.3515.1 (GDR) 14.00.3505.1 (GDR) 14.00.3500.1 (GDR) 14.00.3495.9 (GDR) 14.00.3485.1 (GDR) 14.00.3480.1 (GDR) 14.00.3475.1 (GDR) 14.00.3471.2 (GDR) 14.00.3465.1 (GDR) 14.00.3460.9 (CU31) 14.00.3451.2 (CU30) 14.00.3421.10 (CU27) 14.00.3401.7 (CU25) 14.00.3381.3 (CU23) 14.00.3356.20 (CU22) 14.00.3294.2 (CU20) 14.00.3281.6 (CU19)  | 10/12/2027 | 10/12/2027 | 
| SQL Server 2016 |  13.00.6480.4 (GDR) 13.00.6475.1 (GDR) 13.00.6470.1 (GDR) 13.00.6465.1 (GDR) 13.00.6460.7 (GDR) 13.00.6455.2 (GDR) 13.00.6450.1 (GDR) 13.00.6445.1 (GDR) 13.00.6441.1 (GDR) 13.00.6435.1 (SP3 GDR) 13.00.6430.49 (SP3 GDR) 13.00.6419.1 (SP3 GDR) 13.00.6300.2 (SP3)  | 7/14/2026 | 7/14/2026 | 

## What happens when an Amazon RDS for SQL Server database version is deprecated
<a name="SQLServer.Concepts.General.VersionPolicy.Deprecation"></a>

When a major version of the database engine is deprecated in Amazon RDS, we provide a minimum six (6) month period after the announcement of a deprecation for you to initiate a manual upgrade to a supported major version. At the end of this period, an automatic upgrade to the next major version will be applied to any instances still running the deprecated version during their scheduled maintenance windows.

When a minor version of a database engine is deprecated in Amazon RDS, we provide a three (3) month period after the announcement before beginning automatic upgrades. At the end of this period, all instances still running the deprecated minor version will be scheduled for automatic upgrade to the latest supported minor version during their scheduled maintenance windows.

Once a major or minor database engine version is deprecated in Amazon RDS, any DB instance restored from a DB snapshot created with the unsupported version will automatically and immediately be upgraded to a currently supported version.
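To make the upgrade windows above concrete, this Python sketch (a hypothetical planning helper, not an AWS tool) computes the earliest date on which automatic upgrades can begin after a deprecation announcement — six months for a major version, three months for a minor version:

```python
import datetime

def earliest_auto_upgrade(announced, major=True):
    """Earliest date automatic upgrades can begin: announcement plus
    six months for a major version, three months for a minor version."""
    months = 6 if major else 3
    # Simple month arithmetic; clamps to day 28 to stay valid in any month.
    month_index = announced.month - 1 + months
    return announced.replace(
        year=announced.year + month_index // 12,
        month=month_index % 12 + 1,
        day=min(announced.day, 28),
    )

announced = datetime.date(2026, 1, 15)
print(earliest_auto_upgrade(announced, major=True))   # 2026-07-15
print(earliest_auto_upgrade(announced, major=False))  # 2026-04-15
```

These are minimum periods; upgrading earlier, at a time you choose, avoids the automatic upgrade entirely.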

## Mandatory Amazon RDS for SQL Server upgrades
<a name="SQLServer.Concepts.General.VersionPolicy.MandatoryUpgrades"></a>

Amazon RDS for SQL Server may require mandatory upgrades to newer minor versions when critical fixes are necessary. Before implementing these upgrades, Amazon communicates a detailed plan that includes timing of key milestones, impact on your database instances, and recommended actions. These mandatory upgrades are automated and scheduled to begin during your instance's designated maintenance window.

## Testing your DB instance with a new SQL Server version before upgrading
<a name="SQLServer.Concepts.General.VersionPolicy.Testing"></a>

You can test the upgrade process and how the new version works with your application and workload. Restore from an instance snapshot to create a new RDS for SQL Server instance. You can create an instance snapshot yourself from an existing Amazon RDS instance. Amazon RDS also automatically creates periodic snapshots for your instance. You can then initiate a version upgrade for the new instance. You can experiment on the upgraded copy of your instance before deciding whether to upgrade your original instance.

# Microsoft SQL Server features on Amazon RDS
<a name="SQLServer.Concepts.General.FeatureSupport"></a>

The supported SQL Server versions on Amazon RDS include the following features. In general, a version also includes features from the previous versions, unless otherwise noted in the Microsoft documentation.

**Topics**
+ [Microsoft SQL Server 2022 features](#SQLServer.Concepts.General.FeatureSupport.2022)
+ [Microsoft SQL Server 2019 features](#SQLServer.Concepts.General.FeatureSupport.2019)
+ [Microsoft SQL Server 2017 features](#SQLServer.Concepts.General.FeatureSupport.2017)
+ [Microsoft SQL Server 2016 features](#SQLServer.Concepts.General.FeatureSupport.2016)
+ [Microsoft SQL Server 2014 end of support on Amazon RDS](#SQLServer.Concepts.General.FeatureSupport.2014)
+ [Microsoft SQL Server 2012 end of support on Amazon RDS](#SQLServer.Concepts.General.FeatureSupport.2012)
+ [Microsoft SQL Server 2008 R2 end of support on Amazon RDS](#SQLServer.Concepts.General.FeatureSupport.2008)
+ [Change data capture support for Microsoft SQL Server DB instances](SQLServer.Concepts.General.CDC.md)
+ [Features not supported and features with limited support](SQLServer.Concepts.General.FeatureNonSupport.md)

## Microsoft SQL Server 2022 features
<a name="SQLServer.Concepts.General.FeatureSupport.2022"></a>

SQL Server 2022 includes many new features, such as the following: 
+ Parameter Sensitive Plan Optimization – allows multiple cached plans for a single parameterized statement, potentially reducing issues with parameter sniffing.
+ SQL Server Ledger – provides the ability to cryptographically prove that your data hasn't been altered without authorization.
+ Instant file initialization for transaction log file growth events – results in faster execution of log growth events up to 64 MB, including for databases with TDE enabled.
+ System page latch concurrency enhancements – reduces page latch contention while allocating and deallocating data pages and extents, providing significant performance enhancements to `tempdb` heavy workloads.

For the full list of SQL Server 2022 features, see [What's new in SQL Server 2022 (16.x)](https://learn.microsoft.com/en-us/sql/sql-server/what-s-new-in-sql-server-2022?view=sql-server-ver16) in the Microsoft documentation.

For a list of unsupported features, see [Features not supported and features with limited support](SQLServer.Concepts.General.FeatureNonSupport.md). 

## Microsoft SQL Server 2019 features
<a name="SQLServer.Concepts.General.FeatureSupport.2019"></a>

SQL Server 2019 includes many new features, such as the following: 
+ Accelerated database recovery (ADR) – Reduces crash recovery time after a restart or a long-running transaction rollback.
+ Intelligent Query Processing (IQP):
  + Row mode memory grant feedback – Automatically corrects excessive memory grants that would otherwise result in wasted memory and reduced concurrency.
  + Batch mode on rowstore – Enables batch mode execution for analytic workloads without requiring columnstore indexes.
  + Table variable deferred compilation – Improves plan quality and overall performance for queries that reference table variables.
+ Intelligent performance:
  + `OPTIMIZE_FOR_SEQUENTIAL_KEY` index option – Improves throughput for high-concurrency inserts into indexes.
  + Improved indirect checkpoint scalability – Helps databases with heavy DML workloads.
  + Concurrent Page Free Space (PFS) updates – Enables handling as a shared latch rather than an exclusive latch.
+ Monitoring improvements:
  + `WAIT_ON_SYNC_STATISTICS_REFRESH` wait type – Shows accumulated instance-level time spent on synchronous statistics refresh operations.
  + Database-scoped configurations – Include `LIGHTWEIGHT_QUERY_PROFILING` and `LAST_QUERY_PLAN_STATS`.
  + Dynamic management functions (DMFs) – Include `sys.dm_exec_query_plan_stats` and `sys.dm_db_page_info`.
+ Verbose truncation warnings – The data truncation error message defaults to include table and column names and the truncated value.
+ Resumable online index creation – Extends resumable operations to index creation; SQL Server 2017 supports only resumable online index rebuild.

For the full list of SQL Server 2019 features, see [What's new in SQL Server 2019 (15.x)](https://docs.microsoft.com/en-us/sql/sql-server/what-s-new-in-sql-server-ver15) in the Microsoft documentation.

For a list of unsupported features, see [Features not supported and features with limited support](SQLServer.Concepts.General.FeatureNonSupport.md). 

## Microsoft SQL Server 2017 features
<a name="SQLServer.Concepts.General.FeatureSupport.2017"></a>

SQL Server 2017 includes many new features, such as the following: 
+ Adaptive query processing
+ Automatic plan correction (an automatic tuning feature)
+ GraphDB
+ Resumable index rebuilds

For the full list of SQL Server 2017 features, see [What's new in SQL Server 2017](https://docs.microsoft.com/en-us/sql/sql-server/what-s-new-in-sql-server-2017) in the Microsoft documentation.

For a list of unsupported features, see [Features not supported and features with limited support](SQLServer.Concepts.General.FeatureNonSupport.md). 

## Microsoft SQL Server 2016 features
<a name="SQLServer.Concepts.General.FeatureSupport.2016"></a>

Amazon RDS supports the following features of SQL Server 2016:
+ Always Encrypted
+ JSON Support
+ Operational Analytics
+ Query Store
+ Temporal Tables

For the full list of SQL Server 2016 features, see [What's new in SQL Server 2016](https://docs.microsoft.com/en-us/sql/sql-server/what-s-new-in-sql-server-2016) in the Microsoft documentation.

## Microsoft SQL Server 2014 end of support on Amazon RDS
<a name="SQLServer.Concepts.General.FeatureSupport.2014"></a>

SQL Server 2014 has reached its end of support on Amazon RDS.

RDS is upgrading all existing DB instances that are still using SQL Server 2014 to the latest minor version of SQL Server 2016. For more information, see [Version management in Amazon RDS](SQLServer.Concepts.General.VersionSupport.md#SQLServer.Concepts.General.Version-Management).

## Microsoft SQL Server 2012 end of support on Amazon RDS
<a name="SQLServer.Concepts.General.FeatureSupport.2012"></a>

SQL Server 2012 has reached its end of support on Amazon RDS.

RDS is upgrading all existing DB instances that are still using SQL Server 2012 to the latest minor version of SQL Server 2016. For more information, see [Version management in Amazon RDS](SQLServer.Concepts.General.VersionSupport.md#SQLServer.Concepts.General.Version-Management).

## Microsoft SQL Server 2008 R2 end of support on Amazon RDS
<a name="SQLServer.Concepts.General.FeatureSupport.2008"></a>

SQL Server 2008 R2 has reached its end of support on Amazon RDS.

RDS is upgrading all existing DB instances that are still using SQL Server 2008 R2 to the latest minor version of SQL Server 2012. For more information, see [Version management in Amazon RDS](SQLServer.Concepts.General.VersionSupport.md#SQLServer.Concepts.General.Version-Management).

# Change data capture support for Microsoft SQL Server DB instances
<a name="SQLServer.Concepts.General.CDC"></a>

Amazon RDS supports change data capture (CDC) for your DB instances running Microsoft SQL Server. CDC captures changes that are made to the data in your tables, and stores metadata about each change that you can access later. For more information, see [Change data capture](https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/track-data-changes-sql-server#Capture) in the Microsoft documentation. 

Amazon RDS supports CDC for the following SQL Server editions and versions:
+ Microsoft SQL Server Enterprise Edition (All versions) 
+ Microsoft SQL Server Standard Edition: 
  + 2022
  + 2019
  + 2017
  + 2016 version 13.00.4422.0 SP1 CU2 and later

To use CDC with your Amazon RDS DB instances, first enable or disable CDC at the database level by using RDS-provided stored procedures. After that, any user that has the `db_owner` role for that database can use the native Microsoft stored procedures to control CDC on that database. For more information, see [Using change data capture for Amazon RDS for SQL Server](Appendix.SQLServer.CommonDBATasks.CDC.md). 
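For example, the database-level and table-level steps might look like the following T-SQL sketch. The database name `mydb` and table `dbo.MyTable` are placeholders for your own objects.

```
--- Enable CDC for the database by using the RDS-provided procedure
exec msdb.dbo.rds_cdc_enable_db 'mydb'

--- Then, as a user with the db_owner role for that database, enable CDC
--- on a table by using the native Microsoft procedure
use mydb
exec sys.sp_cdc_enable_table
   @source_schema = N'dbo',
   @source_name   = N'MyTable',
   @role_name     = NULL
```

To turn CDC off for the database again, you can run `exec msdb.dbo.rds_cdc_disable_db 'mydb'`.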

You can use CDC and AWS Database Migration Service to enable ongoing replication from SQL Server DB instances. 

# Features not supported and features with limited support
<a name="SQLServer.Concepts.General.FeatureNonSupport"></a>

The following Microsoft SQL Server features aren't supported on Amazon RDS: 
+ Backing up to Microsoft Azure Blob Storage
+ Buffer pool extension
+ Custom password policies
+ Data Quality Services
+ Database Log Shipping
+ Database snapshots (Amazon RDS supports only DB instance snapshots)
+ Extended stored procedures, including xp\_cmdshell
+ FILESTREAM support
+ File tables
+ Machine Learning and R Services (they require OS access to install)
+ Maintenance plans
+ Performance Data Collector
+ Policy-Based Management
+ PolyBase
+ Replication
+ Server-level triggers
+ Service Broker endpoints
+ Stretch database
+ TRUSTWORTHY database property (requires sysadmin role)
+ T-SQL endpoints (all operations using CREATE ENDPOINT are unavailable)
+ WCF Data Services

The following Microsoft SQL Server features have limited support on Amazon RDS:
+ Distributed queries/linked servers. For more information, see [Implement linked servers with Amazon RDS for Microsoft SQL Server](https://aws.amazon.com/blogs/database/implement-linked-servers-with-amazon-rds-for-microsoft-sql-server/).
+ Common Language Runtime (CLR). On RDS for SQL Server 2016 and lower versions, CLR is supported in `SAFE` mode and using assembly bits only. CLR isn't supported on RDS for SQL Server 2017 and higher versions. For more information, see [Common Language Runtime Integration](https://docs.microsoft.com/en-us/sql/relational-databases/clr-integration/common-language-runtime-integration-overview) in the Microsoft documentation.
+ Linked servers with Oracle OLEDB in Amazon RDS for SQL Server. For more information, see [Support for Linked Servers with Oracle OLEDB in Amazon RDS for SQL Server](Appendix.SQLServer.Options.LinkedServers_Oracle_OLEDB.md).

The following features aren't supported on Amazon RDS with SQL Server 2022:
+ Suspend database for snapshot
+ External Data Source
+ Backup and restore to S3 compatible object storage
+ Object store integration
+ TLS 1.3 and MS-TDS 8.0
+ Backup compression offloading with QAT
+ SQL Server Analysis Services (SSAS)
+ Database mirroring with Multi-AZ deployments. SQL Server Always On is the only supported method with Multi-AZ deployments.

## Multi-AZ deployments using Microsoft SQL Server Database Mirroring or Always On availability groups
<a name="SQLServer.Concepts.General.Mirroring"></a>

Amazon RDS supports Multi-AZ deployments for DB instances running Microsoft SQL Server by using SQL Server Database Mirroring (DBM) or Always On Availability Groups (AGs). Multi-AZ deployments provide increased availability, data durability, and fault tolerance for DB instances. In the event of planned database maintenance or unplanned service disruption, Amazon RDS automatically fails over to the up-to-date secondary replica so database operations can resume quickly without manual intervention. The primary and secondary instances use the same endpoint, whose physical network address transitions to the passive secondary replica as part of the failover process. You don't have to reconfigure your application when a failover occurs. 

Amazon RDS manages failover by actively monitoring your Multi-AZ deployment and initiating a failover when a problem with your primary occurs. Failover doesn't occur unless the standby and primary are fully in sync. Amazon RDS actively maintains your Multi-AZ deployment by automatically repairing unhealthy DB instances and re-establishing synchronous replication. You don't have to manage anything. Amazon RDS handles the primary, the witness, and the standby instance for you. When you set up SQL Server Multi-AZ, RDS configures passive secondary instances for all of the databases on the instance. 

For more information, see [Multi-AZ deployments for Amazon RDS for Microsoft SQL Server](USER_SQLServerMultiAZ.md). 

## Using Transparent Data Encryption to encrypt data at rest
<a name="SQLServer.Concepts.General.Options"></a>

Amazon RDS supports Microsoft SQL Server Transparent Data Encryption (TDE), which transparently encrypts stored data. Amazon RDS uses option groups to enable and configure these features. For more information about the TDE option, see [Support for Transparent Data Encryption in SQL Server](Appendix.SQLServer.Options.TDE.md). 
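After the TDE option is enabled through an option group, you can confirm which databases on the instance are encrypted by querying `sys.databases`; for example:

```
select name, is_encrypted from sys.databases
```

Databases protected by TDE return `is_encrypted = 1`.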

# Functions and stored procedures for Amazon RDS for Microsoft SQL Server
<a name="SQLServer.Concepts.General.StoredProcedures"></a>

Following, you can find a list of the Amazon RDS functions and stored procedures that help automate SQL Server tasks. 

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Concepts.General.StoredProcedures.html)

# Local time zone for Microsoft SQL Server DB instances
<a name="SQLServer.Concepts.General.TimeZone"></a>

The time zone of an Amazon RDS DB instance running Microsoft SQL Server is set by default. The current default is Coordinated Universal Time (UTC). You can set the time zone of your DB instance to a local time zone instead, to match the time zone of your applications.

You set the time zone when you first create your DB instance. You can create your DB instance by using the [AWS Management Console](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html), the Amazon RDS API [CreateDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html) action, or the AWS CLI [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) command.
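For example, the following AWS CLI sketch creates a SQL Server Standard Edition instance with a local time zone. All identifier and credential values are placeholders, and depending on your account and edition, additional parameters (such as `--license-model`) may be required.

```
aws rds create-db-instance \
    --db-instance-identifier mysqlserverinstance \
    --engine sqlserver-se \
    --db-instance-class db.m5.large \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password mypassword \
    --timezone "US Eastern Standard Time"
```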

If your DB instance is part of a Multi-AZ deployment (using SQL Server DBM or AGs), then when you fail over, your time zone remains the local time zone that you set. For more information, see [Multi-AZ deployments using Microsoft SQL Server Database Mirroring or Always On availability groups](CHAP_SQLServer.md#SQLServer.Concepts.General.Mirroring).

When you request a point-in-time restore, you specify the time to restore to. The time is shown in your local time zone. For more information, see [Restoring a DB instance to a specified time for Amazon RDS](USER_PIT.md). 
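If you use the AWS CLI instead of the console, note that the `restore-db-instance-to-point-in-time` command expects the restore time in UTC rather than in your instance's local time zone. A sketch with placeholder identifiers:

```
aws rds restore-db-instance-to-point-in-time \
    --source-db-instance-identifier mysqlserverinstance \
    --target-db-instance-identifier mysqlserverinstance-restored \
    --restore-time 2024-05-01T14:30:00Z
```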

The following are limitations to setting the local time zone on your DB instance:
+ You can't modify the time zone of an existing SQL Server DB instance. 
+ You can't restore a snapshot from a DB instance in one time zone to a DB instance in a different time zone. 
+ We strongly recommend that you don't restore a backup file from one time zone to a different time zone. If you restore a backup file from one time zone to a different time zone, you must audit your queries and applications for the effects of the time zone change. For more information, see [Importing and exporting SQL Server databases using native backup and restore](SQLServer.Procedural.Importing.md). 

## Supported time zones
<a name="SQLServer.Concepts.General.TimeZone.Zones"></a>

You can set your local time zone to one of the values listed in the following table.


| Time zone | Standard time offset | Description | Notes | 
| --- | --- | --- | --- | 
| Afghanistan Standard Time | (UTC+04:30) | Kabul | This time zone doesn't observe daylight saving time. | 
| Alaskan Standard Time | (UTC–09:00) | Alaska |  | 
| Aleutian Standard Time | (UTC–10:00) | Aleutian Islands |  | 
| Altai Standard Time | (UTC+07:00) | Barnaul, Gorno-Altaysk |  | 
| Arab Standard Time | (UTC+03:00) | Kuwait, Riyadh | This time zone doesn't observe daylight saving time. | 
| Arabian Standard Time | (UTC+04:00) | Abu Dhabi, Muscat |  | 
| Arabic Standard Time | (UTC+03:00) | Baghdad | This time zone doesn't observe daylight saving time. | 
| Argentina Standard Time | (UTC–03:00) | City of Buenos Aires | This time zone doesn't observe daylight saving time. | 
| Astrakhan Standard Time | (UTC+04:00) | Astrakhan, Ulyanovsk |  | 
| Atlantic Standard Time | (UTC–04:00) | Atlantic Time (Canada) |  | 
| AUS Central Standard Time | (UTC+09:30) | Darwin | This time zone doesn't observe daylight saving time. | 
| Aus Central W. Standard Time | (UTC+08:45) | Eucla |  | 
| AUS Eastern Standard Time | (UTC+10:00) | Canberra, Melbourne, Sydney |  | 
| Azerbaijan Standard Time | (UTC+04:00) | Baku |  | 
| Azores Standard Time | (UTC–01:00) | Azores |  | 
| Bahia Standard Time | (UTC–03:00) | Salvador |  | 
| Bangladesh Standard Time | (UTC+06:00) | Dhaka | This time zone doesn't observe daylight saving time. | 
| Belarus Standard Time | (UTC+03:00) | Minsk | This time zone doesn't observe daylight saving time. | 
| Bougainville Standard Time | (UTC+11:00) | Bougainville Island |  | 
| Canada Central Standard Time | (UTC–06:00) | Saskatchewan | This time zone doesn't observe daylight saving time. | 
| Cape Verde Standard Time | (UTC–01:00) | Cabo Verde Is. | This time zone doesn't observe daylight saving time. | 
| Caucasus Standard Time | (UTC+04:00) | Yerevan |  | 
| Cen. Australia Standard Time | (UTC+09:30) | Adelaide |  | 
| Central America Standard Time | (UTC–06:00) | Central America | This time zone doesn't observe daylight saving time. | 
| Central Asia Standard Time | (UTC+06:00) | Astana | This time zone doesn't observe daylight saving time. | 
| Central Brazilian Standard Time | (UTC–04:00) | Cuiaba |  | 
| Central Europe Standard Time | (UTC+01:00) | Belgrade, Bratislava, Budapest, Ljubljana, Prague |  | 
| Central European Standard Time | (UTC+01:00) | Sarajevo, Skopje, Warsaw, Zagreb |  | 
| Central Pacific Standard Time | (UTC+11:00) | Solomon Islands, New Caledonia | This time zone doesn't observe daylight saving time. | 
| Central Standard Time | (UTC–06:00) | Central Time (US and Canada) |  | 
| Central Standard Time (Mexico) | (UTC–06:00) | Guadalajara, Mexico City, Monterrey |  | 
| Chatham Islands Standard Time | (UTC+12:45) | Chatham Islands |  | 
| China Standard Time | (UTC+08:00) | Beijing, Chongqing, Hong Kong, Urumqi | This time zone doesn't observe daylight saving time. | 
| Cuba Standard Time | (UTC–05:00) | Havana |  | 
| Dateline Standard Time | (UTC–12:00) | International Date Line West | This time zone doesn't observe daylight saving time. | 
| E. Africa Standard Time | (UTC+03:00) | Nairobi | This time zone doesn't observe daylight saving time. | 
| E. Australia Standard Time | (UTC+10:00) | Brisbane | This time zone doesn't observe daylight saving time. | 
| E. Europe Standard Time | (UTC+02:00) | Chisinau |  | 
| E. South America Standard Time | (UTC–03:00) | Brasilia |  | 
| Easter Island Standard Time | (UTC–06:00) | Easter Island |  | 
| Eastern Standard Time | (UTC–05:00) | Eastern Time (US and Canada) |  | 
| Eastern Standard Time (Mexico) | (UTC–05:00) | Chetumal |  | 
| Egypt Standard Time | (UTC+02:00) | Cairo |  | 
| Ekaterinburg Standard Time | (UTC+05:00) | Ekaterinburg |  | 
| Fiji Standard Time | (UTC+12:00) | Fiji |  | 
| FLE Standard Time | (UTC+02:00) | Helsinki, Kyiv, Riga, Sofia, Tallinn, Vilnius |  | 
| Georgian Standard Time | (UTC+04:00) | Tbilisi | This time zone doesn't observe daylight saving time. | 
| GMT Standard Time | (UTC) | Dublin, Edinburgh, Lisbon, London | This time zone isn't the same as Greenwich Mean Time. This time zone does observe daylight saving time. | 
| Greenland Standard Time | (UTC–03:00) | Greenland |  | 
| Greenwich Standard Time | (UTC) | Monrovia, Reykjavik | This time zone doesn't observe daylight saving time. | 
| GTB Standard Time | (UTC+02:00) | Athens, Bucharest |  | 
| Haiti Standard Time | (UTC–05:00) | Haiti |  | 
| Hawaiian Standard Time | (UTC–10:00) | Hawaii |  | 
| India Standard Time | (UTC+05:30) | Chennai, Kolkata, Mumbai, New Delhi | This time zone doesn't observe daylight saving time. | 
| Iran Standard Time | (UTC+03:30) | Tehran |  | 
| Israel Standard Time | (UTC+02:00) | Jerusalem |  | 
| Jordan Standard Time | (UTC+02:00) | Amman |  | 
| Kaliningrad Standard Time | (UTC+02:00) | Kaliningrad |  | 
| Kamchatka Standard Time | (UTC+12:00) | Petropavlovsk-Kamchatsky – Old |  | 
| Korea Standard Time | (UTC+09:00) | Seoul | This time zone doesn't observe daylight saving time. | 
| Libya Standard Time | (UTC+02:00) | Tripoli |  | 
| Line Islands Standard Time | (UTC+14:00) | Kiritimati Island |  | 
| Lord Howe Standard Time | (UTC+10:30) | Lord Howe Island |  | 
| Magadan Standard Time | (UTC+11:00) | Magadan | This time zone doesn't observe daylight saving time. | 
| Magallanes Standard Time | (UTC–03:00) | Punta Arenas |  | 
| Marquesas Standard Time | (UTC–09:30) | Marquesas Islands |  | 
| Mauritius Standard Time | (UTC+04:00) | Port Louis | This time zone doesn't observe daylight saving time. | 
| Middle East Standard Time | (UTC+02:00) | Beirut |  | 
| Montevideo Standard Time | (UTC–03:00) | Montevideo |  | 
| Morocco Standard Time | (UTC+01:00) | Casablanca |  | 
| Mountain Standard Time | (UTC–07:00) | Mountain Time (US and Canada) |  | 
| Mountain Standard Time (Mexico) | (UTC–07:00) | Chihuahua, La Paz, Mazatlan |  | 
| Myanmar Standard Time | (UTC+06:30) | Yangon (Rangoon) | This time zone doesn't observe daylight saving time. | 
| N. Central Asia Standard Time | (UTC+07:00) | Novosibirsk |  | 
| Namibia Standard Time | (UTC+02:00) | Windhoek |  | 
| Nepal Standard Time | (UTC+05:45) | Kathmandu | This time zone doesn't observe daylight saving time. | 
| New Zealand Standard Time | (UTC+12:00) | Auckland, Wellington |  | 
| Newfoundland Standard Time | (UTC–03:30) | Newfoundland |  | 
| Norfolk Standard Time | (UTC+11:00) | Norfolk Island |  | 
| North Asia East Standard Time | (UTC+08:00) | Irkutsk |  | 
| North Asia Standard Time | (UTC+07:00) | Krasnoyarsk |  | 
| North Korea Standard Time | (UTC+09:00) | Pyongyang |  | 
| Omsk Standard Time | (UTC+06:00) | Omsk |  | 
| Pacific SA Standard Time | (UTC–03:00) | Santiago |  | 
| Pacific Standard Time | (UTC–08:00) | Pacific Time (US and Canada) |  | 
| Pacific Standard Time (Mexico) | (UTC–08:00) | Baja California |  | 
| Pakistan Standard Time | (UTC+05:00) | Islamabad, Karachi | This time zone doesn't observe daylight saving time. | 
| Paraguay Standard Time | (UTC–04:00) | Asuncion |  | 
| Romance Standard Time | (UTC+01:00) | Brussels, Copenhagen, Madrid, Paris |  | 
| Russia Time Zone 10 | (UTC+11:00) | Chokurdakh |  | 
| Russia Time Zone 11 | (UTC+12:00) | Anadyr, Petropavlovsk-Kamchatsky |  | 
| Russia Time Zone 3 | (UTC+04:00) | Izhevsk, Samara |  | 
| Russian Standard Time | (UTC+03:00) | Moscow, St. Petersburg, Volgograd | This time zone doesn't observe daylight saving time. | 
| SA Eastern Standard Time | (UTC–03:00) | Cayenne, Fortaleza | This time zone doesn't observe daylight saving time. | 
| SA Pacific Standard Time | (UTC–05:00) | Bogota, Lima, Quito, Rio Branco | This time zone doesn't observe daylight saving time. | 
| SA Western Standard Time | (UTC–04:00) | Georgetown, La Paz, Manaus, San Juan | This time zone doesn't observe daylight saving time. | 
| Saint Pierre Standard Time | (UTC–03:00) | Saint Pierre and Miquelon |  | 
| Sakhalin Standard Time | (UTC+11:00) | Sakhalin |  | 
| Samoa Standard Time | (UTC+13:00) | Samoa |  | 
| Sao Tome Standard Time | (UTC+01:00) | Sao Tome |  | 
| Saratov Standard Time | (UTC+04:00) | Saratov |  | 
| SE Asia Standard Time | (UTC+07:00) | Bangkok, Hanoi, Jakarta | This time zone doesn't observe daylight saving time. | 
| Singapore Standard Time | (UTC+08:00) | Kuala Lumpur, Singapore | This time zone doesn't observe daylight saving time. | 
| South Africa Standard Time | (UTC+02:00) | Harare, Pretoria | This time zone doesn't observe daylight saving time. | 
| Sri Lanka Standard Time | (UTC+05:30) | Sri Jayawardenepura | This time zone doesn't observe daylight saving time. | 
| Sudan Standard Time | (UTC+02:00) | Khartoum |  | 
| Syria Standard Time | (UTC+02:00) | Damascus |  | 
| Taipei Standard Time | (UTC+08:00) | Taipei | This time zone doesn't observe daylight saving time. | 
| Tasmania Standard Time | (UTC+10:00) | Hobart |  | 
| Tocantins Standard Time | (UTC–03:00) | Araguaina |  | 
| Tokyo Standard Time | (UTC+09:00) | Osaka, Sapporo, Tokyo | This time zone doesn't observe daylight saving time. | 
| Tomsk Standard Time | (UTC+07:00) | Tomsk |  | 
| Tonga Standard Time | (UTC+13:00) | Nuku'alofa | This time zone doesn't observe daylight saving time. | 
| Transbaikal Standard Time | (UTC+09:00) | Chita |  | 
| Turkey Standard Time | (UTC+03:00) | Istanbul |  | 
| Turks And Caicos Standard Time | (UTC–05:00) | Turks and Caicos |  | 
| Ulaanbaatar Standard Time | (UTC+08:00) | Ulaanbaatar | This time zone doesn't observe daylight saving time. | 
| US Eastern Standard Time | (UTC–05:00) | Indiana (East) |  | 
| US Mountain Standard Time | (UTC–07:00) | Arizona | This time zone doesn't observe daylight saving time. | 
| UTC | UTC | Coordinated Universal Time | This time zone doesn't observe daylight saving time. | 
| UTC–02 | (UTC–02:00) | Coordinated Universal Time–02 | This time zone doesn't observe daylight saving time. | 
| UTC–08 | (UTC–08:00) | Coordinated Universal Time–08 |  | 
| UTC–09 | (UTC–09:00) | Coordinated Universal Time–09 |  | 
| UTC–11 | (UTC–11:00) | Coordinated Universal Time–11 | This time zone doesn't observe daylight saving time. | 
| UTC+12 | (UTC+12:00) | Coordinated Universal Time+12 | This time zone doesn't observe daylight saving time. | 
| UTC+13 | (UTC+13:00) | Coordinated Universal Time+13 |  | 
| Venezuela Standard Time | (UTC–04:00) | Caracas | This time zone doesn't observe daylight saving time. | 
| Vladivostok Standard Time | (UTC+10:00) | Vladivostok |  | 
| Volgograd Standard Time | (UTC+04:00) | Volgograd |  | 
| W. Australia Standard Time | (UTC+08:00) | Perth | This time zone doesn't observe daylight saving time. | 
| W. Central Africa Standard Time | (UTC+01:00) | West Central Africa | This time zone doesn't observe daylight saving time. | 
| W. Europe Standard Time | (UTC+01:00) | Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna |  | 
| W. Mongolia Standard Time | (UTC+07:00) | Hovd |  | 
| West Asia Standard Time | (UTC+05:00) | Ashgabat, Tashkent | This time zone doesn't observe daylight saving time. | 
| West Bank Standard Time | (UTC+02:00) | Gaza, Hebron |  | 
| West Pacific Standard Time | (UTC+10:00) | Guam, Port Moresby | This time zone doesn't observe daylight saving time. | 
| Yakutsk Standard Time | (UTC+09:00) | Yakutsk |  | 

# Licensing Microsoft SQL Server on Amazon RDS
<a name="SQLServer.Concepts.General.Licensing"></a>

When you set up an Amazon RDS DB instance for Microsoft SQL Server, the software license is included. 

This means that you don't need to purchase SQL Server licenses separately. AWS holds the license for the SQL Server database software. Amazon RDS pricing includes the software license, underlying hardware resources, and Amazon RDS management capabilities. 

Amazon RDS supports the following Microsoft SQL Server editions: 
+ Enterprise
+ Standard
+ Web
+ Express

**Note**  
SQL Server Web Edition is designed for Web hosters and Web VAPs to host public and internet-accessible web pages, websites, web applications, and web services. This level of support is required for compliance with Microsoft's usage rights. For more information, see [AWS service terms](http://aws.amazon.com/serviceterms). 

Amazon RDS supports Multi-AZ deployments for DB instances running Microsoft SQL Server by using SQL Server Database Mirroring (DBM), Always On Availability Groups (AGs), and block level replication for SQL Server Web Edition. There are no additional licensing requirements for Multi-AZ deployments. For more information, see [Multi-AZ deployments for Amazon RDS for Microsoft SQL Server](USER_SQLServerMultiAZ.md). 

## Restoring license-terminated DB instances
<a name="SQLServer.Concepts.General.Licensing.Restoring"></a>

Amazon RDS takes snapshots of license-terminated DB instances. If your instance is terminated for licensing issues, you can restore it from the snapshot to a new DB instance. New DB instances have a license included.

For more information, see [Restoring license-terminated DB instances for Amazon RDS for SQL Server](Appendix.SQLServer.CommonDBATasks.RestoreLTI.md). 

## Development and test
<a name="SQLServer.Concepts.General.Licensing.Development"></a>

For development and testing scenarios, you can use Express Edition for many non-production needs. You can also use Developer Edition, which includes all SQL Server Enterprise Edition features but is licensed only for non-production use. You can download and install SQL Server Developer Edition on RDS for SQL Server by using a custom engine version (CEV) with Bring Your Own Media (BYOM). For more information, see [Working with SQL Server Developer Edition on RDS for SQL Server](sqlserver-dev-edition.md). You can also download and install SQL Server Developer Edition on RDS Custom for SQL Server using the same approach. For more information, see [Preparing a CEV using Bring Your Own Media (BYOM)](custom-cev-sqlserver.preparing.md#custom-cev-sqlserver.preparing.byom). For more information on the differences between SQL Server editions, see [Editions and supported features of SQL Server 2019](https://learn.microsoft.com/en-us/sql/sql-server/editions-and-components-of-sql-server-2019?view=sql-server-ver15) in the Microsoft documentation.

# Connecting to your Microsoft SQL Server DB instance
<a name="USER_ConnectToMicrosoftSQLServerInstance"></a>

After Amazon RDS provisions your DB instance, you can use any standard SQL client application to connect to the DB instance. In this topic, you connect to your DB instance by using either Microsoft SQL Server Management Studio (SSMS) or SQL Workbench/J.

For an example that walks you through the process of creating and connecting to a sample DB instance, see [Creating and connecting to a Microsoft SQL Server DB instance](CHAP_GettingStarted.CreatingConnecting.SQLServer.md). 

## Before you connect
<a name="sqlserver-before-connect"></a>

Before you can connect to your DB instance, it has to be available and accessible.

1. Make sure that its status is `available`. You can check this on the details page for your instance in the AWS Management Console or by using the [describe-db-instances](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html) AWS CLI command.  
![\[Check that the DB instance is available\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/sqlserver-available.png)

1. Make sure that it is accessible to your source. Depending on your scenario, it may not need to be publicly accessible. For more information, see [Amazon VPC and Amazon RDS](USER_VPC.md).

1. Make sure that the inbound rules of your VPC security group allow access to your DB instance. For more information, see [Can't connect to Amazon RDS DB instance](CHAP_Troubleshooting.md#CHAP_Troubleshooting.Connecting).
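The availability check in step 1 can be scripted. For example, the following AWS CLI command (the instance identifier is a placeholder) prints just the instance status:

```
aws rds describe-db-instances \
    --db-instance-identifier mysqlserverinstance \
    --query 'DBInstances[0].DBInstanceStatus' \
    --output text
```

When the instance is ready for connections, the command prints `available`.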

## Finding the DB instance endpoint and port number
<a name="sqlserver-endpoint"></a>

You need both the endpoint and the port number to connect to the DB instance.

**To find the endpoint and port**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the upper-right corner of the Amazon RDS console, choose the AWS Region of your DB instance.

1. Find the Domain Name System (DNS) name (endpoint) and port number for your DB instance:

   1. Open the RDS console and choose **Databases** to display a list of your DB instances.

   1. Choose the SQL Server DB instance name to display its details.

   1. On the **Connectivity & security** tab, copy the endpoint.  
![\[Locate DB instance endpoint and port\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/SQL-Connect-Endpoint.png)

   1. Note the port number.

# Connecting to your DB instance with Microsoft SQL Server Management Studio
<a name="USER_ConnectToMicrosoftSQLServerInstance.SSMS"></a>

In this procedure, you connect to your sample DB instance by using Microsoft SQL Server Management Studio (SSMS). To download a standalone version of this utility, see [Download SQL Server Management Studio (SSMS)](https://docs.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms) in the Microsoft documentation.

**To connect to a DB instance using SSMS**

1. Start SQL Server Management Studio.

   The **Connect to Server** dialog box appears.  
![\[Connect to Server dialog\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/RDSMSFTSQLConnect01.png)

1. Provide the information for your DB instance:

   1. For **Server type**, choose **Database Engine**. 

   1. For **Server name**, enter the DNS name (endpoint) and port number of your DB instance, separated by a comma. 
**Important**  
Change the colon between the endpoint and port number to a comma. 

      Your server name should look like the following example.

      ```
      database-2.cg034itsfake.us-east-1.rds.amazonaws.com,1433
      ```

   1. For **Authentication**, choose **SQL Server Authentication**. 

   1. For **Login**, enter the master user name for your DB instance. 

   1. For **Password**, enter the password for your DB instance. 

1. Choose **Connect**. 

   After a few moments, SSMS connects to your DB instance.

   If you can't connect to your DB instance, see [Security group considerations](USER_ConnectToMicrosoftSQLServerInstance.Security.md) and [Troubleshooting connections to your SQL Server DB instance](USER_ConnectToMicrosoftSQLServerInstance.Troubleshooting.md). 

1. Your SQL Server DB instance comes with SQL Server's standard built-in system databases (`master`, `model`, `msdb`, and `tempdb`). To explore the system databases, do the following:

   1. In SSMS, on the **View** menu, choose **Object Explorer**.

   1. Expand your DB instance, expand **Databases**, and then expand **System Databases**.  
![\[Object Explorer displaying the system databases\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/SQL-SSMS-SystemDBs.png)

1. Your SQL Server DB instance also comes with a database named `rdsadmin`. Amazon RDS uses this database to store the objects that it uses to manage your database. The `rdsadmin` database also includes stored procedures that you can run to perform advanced tasks. For more information, see [Common DBA tasks for Amazon RDS for Microsoft SQL Server](Appendix.SQLServer.CommonDBATasks.md).

1. You can now start creating your own databases and running queries against your DB instance and databases as usual. To run a test query against your DB instance, do the following:

   1. In SSMS, on the **File** menu, point to **New**, and then choose **Query with Current Connection**.

   1. Enter the following SQL query.

      ```
      select @@VERSION
      ```

   1. Run the query. SSMS returns the SQL Server version of your Amazon RDS DB instance.  
![\[SQL Query Window\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/SQL-Connect-Query.png)
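If you generate connection settings programmatically, the colon-to-comma conversion for the SSMS server name is easy to get wrong. The following Python sketch (using the sample endpoint from this topic) builds the comma-separated server name that SSMS expects:

```python
def ssms_server_name(endpoint: str, port: int = 1433) -> str:
    """Build the comma-separated "host,port" server name that SSMS expects.

    Accepts either a bare endpoint or an "endpoint:port" string.
    """
    host, _, explicit_port = endpoint.partition(":")
    return f"{host},{explicit_port or port}"

print(ssms_server_name("database-2.cg034itsfake.us-east-1.rds.amazonaws.com:1433"))
# database-2.cg034itsfake.us-east-1.rds.amazonaws.com,1433
```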

# Connecting to your DB instance with SQL Workbench/J
<a name="USER_ConnectToMicrosoftSQLServerInstance.JDBC"></a>

This example shows how to connect to a DB instance running the Microsoft SQL Server database engine by using the SQL Workbench/J database tool. To download SQL Workbench/J, see [SQL Workbench/J](http://www.sql-workbench.net/). 

SQL Workbench/J uses JDBC to connect to your DB instance. You also need the JDBC driver for SQL Server. To download this driver, see [Download Microsoft JDBC Driver for SQL Server](https://learn.microsoft.com/en-us/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server?view=sql-server-ver16). 

**To connect to a DB instance using SQL Workbench/J**

1. Open SQL Workbench/J. The **Select Connection Profile** dialog box appears, as shown following.  
![\[Select Connection Profile dialog\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/workbench_profile.png)

1. In the first box at the top of the dialog box, enter a name for the profile. 

1. For **Driver**, choose **SQL JDBC 4.0**. 

1. For **URL**, enter **jdbc:sqlserver://**, then enter the endpoint of your DB instance. For example, the URL value might be the following.

   ```
   jdbc:sqlserver://sqlsvr-pdz.abcd12340.us-west-2.rds.amazonaws.com:1433
   ```

1. For **Username**, enter the master user name for the DB instance. 

1. For **Password**, enter the password for the master user. 

1. Choose the save icon in the dialog toolbar, as shown following.  
![\[Save the profile\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/save_example.png)

1. Choose **OK**. After a few moments, SQL Workbench/J connects to your DB instance. If you can't connect to your DB instance, see [Security group considerations](USER_ConnectToMicrosoftSQLServerInstance.Security.md) and [Troubleshooting connections to your SQL Server DB instance](USER_ConnectToMicrosoftSQLServerInstance.Troubleshooting.md). 

1. In the query pane, enter the following SQL query.

   ```
   select @@VERSION
   ```

1. Choose the `Execute` icon in the toolbar, as shown following.  
![\[Run the query\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/execute_example.png)

   The query returns the version information for your DB instance, similar to the following.

   ```
   Microsoft SQL Server 2017 (RTM-CU22) (KB4577467) - 14.0.3356.20 (X64)
   ```

# Security group considerations
<a name="USER_ConnectToMicrosoftSQLServerInstance.Security"></a>

To connect to your DB instance, it must be associated with a security group that contains the IP addresses and network configuration that you use to access the DB instance. You might have associated your DB instance with an appropriate security group when you created it. If you assigned a default, unconfigured security group instead, the DB instance firewall prevents connections.

In some cases, you might need to create a new security group to make access possible. For instructions on creating a new security group, see [Controlling access with security groups](Overview.RDSSecurityGroups.md). For a topic that walks you through the process of setting up rules for your VPC security group, see [Tutorial: Create a VPC for use with a DB instance (IPv4 only)](CHAP_Tutorials.WebServerDB.CreateVPC.md).

After you have created the new security group, modify your DB instance to associate it with the security group. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md). 

You can enhance security by using SSL to encrypt connections to your DB instance. For more information, see [Using SSL with a Microsoft SQL Server DB instance](SQLServer.Concepts.General.SSL.Using.md). 

# Troubleshooting connections to your SQL Server DB instance
<a name="USER_ConnectToMicrosoftSQLServerInstance.Troubleshooting"></a>

The following table shows error messages that you might encounter when you attempt to connect to your SQL Server DB instance.


<a name="rds-sql-server-connection-troubleshooting-guidance"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToMicrosoftSQLServerInstance.Troubleshooting.html)

**Note**  
For more information on connection issues, see [Can't connect to Amazon RDS DB instance](CHAP_Troubleshooting.md#CHAP_Troubleshooting.Connecting).

# Working with SQL Server Developer Edition on RDS for SQL Server
<a name="sqlserver-dev-edition"></a>

RDS for SQL Server supports SQL Server Developer Edition. Developer Edition includes all SQL Server Enterprise Edition features but is licensed only for non-production use. You can create RDS for SQL Server Developer Edition instances using your own installation media through the custom engine version (CEV) feature.

## Benefits
<a name="sqlserver-dev-edition.benefits"></a>

You can use RDS for SQL Server Developer Edition to:
+ Lower costs in development and test environments while maintaining feature parity with production databases.
+ Access Enterprise Edition capabilities in non-production environments without Enterprise licensing fees.
+ Use Amazon RDS-automated management features, including backups, patching, and monitoring.

**Note**  
SQL Server Developer Edition is licensed for development and testing purposes only and cannot be used in production environments.

## Region availability
<a name="sqlserver-dev-edition.regions"></a>

RDS for SQL Server Developer Edition is available in the following AWS Regions:
+ US East (N. Virginia)
+ US East (Ohio)
+ US West (Oregon)
+ US West (N. California)
+ Asia Pacific (Mumbai)
+ Asia Pacific (Seoul)
+ Asia Pacific (Singapore)
+ Asia Pacific (Osaka)
+ Asia Pacific (Sydney)
+ Asia Pacific (Tokyo)
+ Europe (Ireland)
+ Europe (Frankfurt)
+ Europe (London)
+ Europe (Stockholm)
+ Europe (Paris)
+ Canada (Central)
+ South America (São Paulo)
+ Africa (Cape Town)
+ AWS GovCloud (US-East)
+ AWS GovCloud (US-West)

## Licensing and usage
<a name="sqlserver-dev-edition.licensing"></a>

SQL Server Developer Edition is licensed by Microsoft for development and test environments only. You cannot use Developer Edition as a production server. When you use SQL Server Developer Edition on Amazon RDS, you are responsible for complying with Microsoft's SQL Server Developer Edition licensing terms. You pay only for the AWS infrastructure costs; there is no additional SQL Server licensing fee. For pricing details, see [RDS for SQL Server pricing](https://aws.amazon.com/rds/sqlserver/pricing/).

## Prerequisites
<a name="sqlserver-dev-edition.prerequisites"></a>

Before using SQL Server Developer Edition on RDS for SQL Server, make sure that you meet the following requirements:
+ You must obtain the installation binaries directly from Microsoft and ensure compliance with Microsoft's licensing terms.
+ You must have an AWS account with the `AmazonRDSFullAccess` policy and `s3:GetObject` permission to create a Developer Edition DB instance.
+ You must have an Amazon S3 bucket for storing the installation media. You upload an ISO and a cumulative update file to the bucket as part of CEV creation. For more information, see [Uploading installation media to an Amazon S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/upload-objects.html).
+ All installation media files must reside in the same Amazon S3 bucket, under the same folder path, in the same AWS Region where you create the custom engine version.

### Supported versions
<a name="sqlserver-dev-edition.supported-versions"></a>

Developer Edition on RDS for SQL Server supports the following versions:
+ SQL Server 2022 CU21 (16.00.4215.2)
+ SQL Server 2019 CU 32 GDR (15.00.4455.2)

To list all supported engine versions for Developer Edition CEV creation, use the following AWS CLI command:

```
aws rds describe-db-engine-versions --engine sqlserver-dev-ee --output json --query "{DBEngineVersions: DBEngineVersions[?Status=='requires-custom-engine-version'].{Engine: Engine, EngineVersion: EngineVersion, Status: Status, DBEngineVersionDescription: DBEngineVersionDescription}}"
```

The command returns output similar to the following example:

```
{
    "DBEngineVersions": [
        {
            "Engine": "sqlserver-dev-ee",
            "EngineVersion": "16.00.4215.2.v1",
            "Status": "requires-custom-engine-version",
            "DBEngineVersionDescription": "SQL Server 2022 16.00.4215.2.v1"
        }
    ]
}
```

An engine version with the status `requires-custom-engine-version` is a supported template engine version. These templates show which SQL Server versions you can import.
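If you automate CEV creation, you can filter the command's JSON output down to the importable template versions. The following is a hedged sketch; the function name is hypothetical and the parsing assumes the output shape shown above:

```python
import json

def importable_versions(cli_output):
    """Return the engine versions that can serve as CEV templates."""
    data = json.loads(cli_output)
    return [
        v["EngineVersion"]
        for v in data.get("DBEngineVersions", [])
        if v.get("Status") == "requires-custom-engine-version"
    ]

# Abbreviated sample of the describe-db-engine-versions output.
sample = """{
    "DBEngineVersions": [
        {"Engine": "sqlserver-dev-ee",
         "EngineVersion": "16.00.4215.2.v1",
         "Status": "requires-custom-engine-version"}
    ]
}"""
print(importable_versions(sample))  # ['16.00.4215.2.v1']
```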

## Limitations
<a name="sqlserver-dev-edition.limitations"></a>

The following limitations apply to SQL Server Developer Edition on Amazon RDS:
+ Developer Edition is currently supported only on db.m6i and db.r6i instance classes.
+ Multi-AZ deployments and read replicas are not supported.
+ You must provide and manage your own SQL Server installation media.
+ Custom engine versions for SQL Server Developer Edition (sqlserver-dev-ee) cannot be shared cross-Region or cross-account.

# Preparing a CEV for RDS for SQL Server
<a name="sqlserver-dev-edition.preparing"></a>

## Prerequisites
<a name="sqlserver-dev-prerequisites"></a>

Before creating a custom engine version, make sure you have completed the following prerequisites:

### Prepare SQL Server Developer Edition installation media
<a name="sqlserver-dev-prepare-media"></a>

You must obtain the SQL Server Developer Edition installation media from Microsoft and prepare it for upload to S3.

**To download installation media from Microsoft**

1. **Option A:** Use your [Visual Studio subscription](https://visualstudio.microsoft.com/subscriptions/) to download the Developer Edition ISO. Only the English version is supported.

1. **Option B: Using SQL Server Installer**

   1. Download the [SQL Server Developer Edition installer](https://download.microsoft.com/download/c/c/9/cc9c6797-383c-4b24-8920-dc057c1de9d3/SQL2022-SSEI-Dev.exe).

   1. Run the installer and choose **Download Media** to download the full ISO.

   1. Choose **English** as the preferred language.

   1. Choose **ISO** as the media type.

   1. Choose **Download**.

**To download cumulative updates**

1. Visit the [Microsoft Update Catalog](https://www.catalog.update.microsoft.com/Home.aspx) page.

1. Find a SQL Server Developer Edition supported by RDS for SQL Server, for example "SQL Server 2022 Cumulative Update".

1. Download the latest supported CU executable file and save it to your machine. For example, `SQLServer2022-KB5065865-x64.exe` (CU21 for SQL Server 2022).

**Important**  
RDS for SQL Server supports only specific Cumulative Update (CU) versions. You must use the exact version listed in the following table. Do not use newer CU versions, even if they are available from Microsoft, because they might not be compatible with RDS.

Alternatively, you can download the required Cumulative Update (CU) file directly from the links in the following table, which lists each supported SQL Server Developer Edition version and its corresponding Cumulative Update for use with RDS:


| SQL Server Version | Supported CU | KB Article | Download File Name | 
| --- | --- | --- | --- | 
|  SQL Server 2022  |  `CU21`  |  [KB5065865](https://learn.microsoft.com/en-us/troubleshoot/sql/releases/sqlserver-2022/cumulativeupdate21)  |  `SQLServer2022-KB5065865-x64.exe`  | 
|  SQL Server 2019  |  `CU32 GDR`  |  [KB5068404](https://support.microsoft.com/en-us/topic/kb5068404-description-of-the-security-update-for-sql-server-2019-cu32-november-11-2025-c203bfbf-036e-46d2-bc10-6c01200dc48a)  |  `SQLServer2019-KB5068404-x64.exe`  | 

# Creating a custom engine version for RDS for SQL Server
<a name="sqlserver-dev-edition.creating-cev"></a>

A custom engine version (CEV) for RDS for SQL Server consists of your SQL Server Developer Edition installation media imported into Amazon RDS. Upload the base ISO installer and cumulative update (.exe) files to your Amazon S3 bucket, then provide the Amazon S3 location to RDS. RDS downloads and validates the media, and then creates your CEV.

## Naming limitations
<a name="sqlserver-dev-edition.create-cev.naming-limitations"></a>

When creating a CEV, you must follow specific naming conventions:
+ CEV name must follow the pattern `major-version.minor-version.customized-string`.
+ `customized-string` can contain 1-50 alphanumeric characters, underscores, dashes, and periods. For example: `16.00.4215.2.my-dev-cev` for SQL Server 2022.
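You can sanity-check a candidate name against these rules before calling the CLI. The following is a minimal sketch; the four-part version prefix is an assumption based on the `16.00.4215.2.my-dev-cev` example above, and the function name is hypothetical:

```python
import re

# Assumed shape: a build prefix like 16.00.4215.2, then a period, then a
# 1-50 character customized string of alphanumerics, underscores, dashes,
# and periods.
CEV_NAME = re.compile(r"^\d+\.\d+\.\d+\.\d+\.[A-Za-z0-9_.-]{1,50}$")

def is_valid_cev_name(name):
    """Return True if the name matches the assumed CEV naming pattern."""
    return bool(CEV_NAME.match(name))

print(is_valid_cev_name("16.00.4215.2.my-dev-cev"))  # True
print(is_valid_cev_name("16.00.4215.2."))            # False (empty suffix)
```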

To list all supported engine versions, use the following AWS CLI command:

```
aws rds describe-db-engine-versions --engine sqlserver-dev-ee --output json --query "{DBEngineVersions: DBEngineVersions[?Status=='requires-custom-engine-version'].{Engine: Engine, EngineVersion: EngineVersion, Status: Status, DBEngineVersionDescription: DBEngineVersionDescription}}"
```

The command returns output similar to the following example:

```
{
    "DBEngineVersions": [
        {
            "Engine": "sqlserver-dev-ee",
            "EngineVersion": "16.00.4215.2.v1",
            "Status": "requires-custom-engine-version",
            "DBEngineVersionDescription": "SQL Server 2022 16.00.4215.2.v1"
        }
    ]
}
```

## AWS CLI
<a name="sqlserver-dev-edition.create-cev.CLI"></a>

**To create the custom engine version**
+ Use the [create-custom-db-engine-version](https://docs.aws.amazon.com/cli/latest/reference/rds/create-custom-db-engine-version.html) command.

  The following options are required:
  + `--engine`
  + `--engine-version`
  + `--database-installation-files-s3-bucket-name`
  + `--database-installation-files`
  + `--region`

  You can also specify the following options:
  + `--database-installation-files-s3-prefix`
  + `--description`
  + `--tags`

  ```
  aws rds create-custom-db-engine-version \
  --engine sqlserver-dev-ee \
  --engine-version 16.00.4215.2.cev-dev-ss2022-cu21 \
  --region us-west-2 \
  --database-installation-files-s3-bucket-name my-s3-installation-media-bucket \
  --database-installation-files-s3-prefix sqlserver-dev-media \
  --database-installation-files "SQLServer2022-x64-ENU-Dev.iso" "SQLServer2022-KB5065865-x64.exe"
  ```
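Before calling `create-custom-db-engine-version`, you may want to sanity-check the list you pass to `--database-installation-files`. The following is a sketch under the assumption (from the example above) of one base ISO plus one cumulative update executable; the helper name is hypothetical:

```python
def validate_media_files(files):
    """Check that a CEV media list has one base ISO and at least one CU .exe."""
    isos = [f for f in files if f.lower().endswith(".iso")]
    cus = [f for f in files if f.lower().endswith(".exe")]
    problems = []
    if len(isos) != 1:
        problems.append("expected exactly one .iso installer")
    if not cus:
        problems.append("expected at least one cumulative update .exe")
    return problems

print(validate_media_files(
    ["SQLServer2022-x64-ENU-Dev.iso", "SQLServer2022-KB5065865-x64.exe"]
))  # []
```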

CEV creation typically takes 15-30 minutes. To monitor the CEV creation progress, use the following command:

```
# Check CEV status
aws rds describe-db-engine-versions \
--engine sqlserver-dev-ee \
--engine-version 16.00.4215.2.cev-dev-ss2022-cu21 \
--region us-west-2
```

## Lifecycle of an RDS for SQL Server CEV
<a name="sqlserver-dev-cev-lifecycle"></a>

When working with SQL Server Developer Edition on RDS for SQL Server, your custom engine versions transition through various lifecycle states.


| Lifecycle State | Description | When It Occurs | Available Actions | 
| --- | --- | --- | --- | 
|  pending-validation  |  Initial state after the CEV is created  |  This is the initial state after creation with the `create-custom-db-engine-version` command.  |  Monitor status via `describe-db-engine-versions`.  | 
|  validating  |  Amazon RDS is validating the CEV  |  Amazon RDS is validating your custom engine version (CEV). This asynchronous process can take some time to complete.  |  Monitor the status until validation is complete.  | 
|  available  |  The CEV validation completed successfully  |  Amazon RDS successfully validated your SQL Server ISO and cumulative update files.  |  Create DB instances using this CEV.  | 
|  failed  |  RDS for SQL Server can't create the CEV because a validation check failed  |  ISO and cumulative update media validation failed.  |  Check the failure reason via `describe-db-engine-versions`, fix any file issues such as hash mismatches or corrupted content, and then recreate your CEV.  | 
|  deleting  |  The CEV is being deleted  |  After you call `delete-custom-db-engine-version`, until the deletion workflow completes.  |  Monitor status via `describe-db-engine-versions`.  | 
|  incompatible-installation-media  |  Amazon RDS was unable to validate the installation media provided for the CEV  |  The CEV validation failed. This is a terminal state.  |  See the failure reason via `describe-db-engine-versions` for information on why validation failed, and then delete the CEV.  | 
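A polling script can map these lifecycle states to a next step. The following classification helper is a hypothetical sketch based on the table above, not an AWS API; the action strings are illustrative:

```python
# Hypothetical grouping of the CEV lifecycle states listed above.
TRANSIENT = {"pending-validation", "validating", "deleting"}
TERMINAL_FAILURE = {"failed", "incompatible-installation-media"}

def next_action(status):
    """Suggest what to do for a given CEV lifecycle state."""
    if status == "available":
        return "create-db-instance"
    if status in TRANSIENT:
        return "keep-polling"
    if status in TERMINAL_FAILURE:
        return "inspect-failure-reason"
    return "unknown-status"

print(next_action("validating"))  # keep-polling
print(next_action("available"))   # create-db-instance
```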

### Describe CEV Status
<a name="sqlserver-dev-cev-status-check"></a>

You can see the state of your CEVs using the AWS CLI:

```
aws rds describe-db-engine-versions \
--engine sqlserver-dev-ee \
--engine-version 16.00.4215.2.cev-dev-ss2022-cu21 \
--region us-west-2 \
--query 'DBEngineVersions[0].{Version:EngineVersion,Status:Status}'
```

Sample output:

```
|           DescribeDBEngineVersions           |
+-----------+----------------------------------+
| Status    | Version                          |
+-----------+----------------------------------+
| available | 16.00.4215.2.cev-dev-ss2022-cu21 |
+-----------+----------------------------------+
```

When a CEV shows `failed` status, you can determine the reason using the following command:

```
aws rds describe-db-engine-versions \
--engine sqlserver-dev-ee \
--engine-version 16.00.4215.2.cev-dev-ss2022-cu21 \
--region us-west-2 \
--query 'DBEngineVersions[0].{Version:EngineVersion,Status:Status,FailureReason:FailureReason}'
```

# Creating an RDS for SQL Server Developer Edition DB instance
<a name="sqlserver-dev-edition.creating-instance"></a>

Launching a Developer Edition instance on RDS for SQL Server is a two-step process. First, create a CEV with `create-custom-db-engine-version`. Then, once your custom engine version is in the `available` state, create Amazon RDS DB instances using the CEV.

**Key differences for Developer Edition instance creation**


| Parameter | Developer Edition | 
| --- | --- | 
|  `--engine`  |  sqlserver-dev-ee  | 
|  `--engine-version`  |  Custom engine version (e.g., 16.00.4215.2.cev-dev-ss2022-cu21)  | 
|  `--license-model`  |  bring-your-own-license  | 

## AWS CLI
<a name="sqlserver-dev-edition.creating-instance.CLI"></a>

To create a SQL Server Developer Edition DB instance, use the [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) command.

The following options are required:
+ `--db-instance-identifier` 
+ `--db-instance-class` 
+ `--engine` – `sqlserver-dev-ee`
+ `--region`

**Examples:**

For Linux, macOS, or Unix:

```
aws rds create-db-instance \
--db-instance-identifier my-dev-sqlserver \
--db-instance-class db.m6i.xlarge \
--engine sqlserver-dev-ee \
--engine-version 16.00.4215.2.cev-dev-ss2022-cu21 \
--allocated-storage 200 \
--master-username admin \
--master-user-password changeThisPassword \
--license-model bring-your-own-license \
--no-multi-az \
--vpc-security-group-ids sg-xxxxxxxxx \
--db-subnet-group-name my-db-subnet-group \
--backup-retention-period 7 \
--region us-west-2
```

For Windows:

```
aws rds create-db-instance ^
--db-instance-identifier my-dev-sqlserver ^
--db-instance-class db.m6i.xlarge ^
--engine sqlserver-dev-ee ^
--engine-version 16.00.4215.2.cev-dev-ss2022-cu21 ^
--allocated-storage 200 ^
--master-username admin ^
--master-user-password changeThisPassword ^
--license-model bring-your-own-license ^
--no-multi-az ^
--vpc-security-group-ids sg-xxxxxxxxx ^
--db-subnet-group-name my-db-subnet-group ^
--backup-retention-period 7 ^
--region us-west-2
```

To create the DB instance using the AWS Management Console, see [Creating a DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html#USER_CreateDBInstance.Creating).

# Applying database minor version upgrades
<a name="sqlserver-dev-edition.minor-version-upgrades"></a>

RDS for SQL Server Developer Edition requires creating a new custom engine version (CEV) with the latest cumulative update to apply a database minor version upgrade. Database minor version upgrades for SQL Server Developer Edition involve the following steps:

1. Before upgrading, verify the current engine version on the instance, and identify the target database engine version from the Amazon RDS supported versions. For information about which SQL Server versions are available on Amazon RDS, see [Working with SQL Server Developer Edition on RDS for SQL Server](sqlserver-dev-edition.md).

1. Obtain and upload installation media (ISO and CU), then [create a new custom engine version](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/sqlserver-dev-edition.creating-cev.html).

1. Apply the database minor version upgrade by using the Amazon RDS [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) command with the new CEV.

   ```
   aws rds modify-db-instance \
   --db-instance-identifier <instance-id> \
   --engine-version <new-cev-version> \
   --no-apply-immediately ## use --apply-immediately for immediate update
   ```

**Note**  
By default (`--no-apply-immediately`), Amazon RDS applies the changes during the next maintenance window.

# Viewing and managing custom engine versions
<a name="sqlserver-dev-edition.managing"></a>

To view all your RDS for SQL Server Developer Edition CEVs, use the `describe-db-engine-versions` command with `--engine` set to `sqlserver-dev-ee`.

```
aws rds describe-db-engine-versions \
--engine sqlserver-dev-ee \
--include-all \
--region us-west-2
```

To view the details of a specific CEV, use the following command:

```
aws rds describe-db-engine-versions \
--engine sqlserver-dev-ee \
--engine-version 16.00.4215.2.cev-dev-ss2022-cu21 \
--region us-west-2
```

**Note**  
This command only returns CEVs with an `available` status. To view CEVs in validating or other states, include the `--include-all` flag.

## Deleting custom engine versions
<a name="sqlserver-dev-deleting-cevs"></a>

Before deleting a CEV, make sure it isn't being used by any of the following:
+ An Amazon RDS DB instance
+ A snapshot of an Amazon RDS DB instance
+ An automated backup of an Amazon RDS DB instance

**Note**  
You can't delete a CEV if there are any resources associated with it.

To delete a custom engine version, use the [delete-custom-db-engine-version](https://docs.aws.amazon.com/cli/latest/reference/rds/delete-custom-db-engine-version.html) command with the following options:
+ `--engine`: Specify `sqlserver-dev-ee` for Developer Edition
+ `--engine-version`: The exact CEV version identifier to delete
+ `--region`: The AWS Region where the CEV exists

```
aws rds delete-custom-db-engine-version \
--engine sqlserver-dev-ee \
--engine-version 16.00.4215.2.my-dev-cev \
--region us-west-2
```

To monitor the CEV deletion process, use the `describe-db-engine-versions` command and specify your RDS for SQL Server CEV engine version:

```
aws rds describe-db-engine-versions \
--engine sqlserver-dev-ee \
--engine-version 16.00.4215.2.my-dev-cev \
--region us-west-2
```

Status values:
+ `deleting`: CEV deletion is in progress
+ No results returned: the CEV was successfully deleted
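A script can interpret the monitoring output using those two status values. The following is a hedged sketch; the function name is hypothetical and the parsing assumes the JSON shape returned by `describe-db-engine-versions`:

```python
import json

def deletion_state(cli_output):
    """Interpret describe-db-engine-versions output while a CEV is deleting."""
    versions = json.loads(cli_output).get("DBEngineVersions", [])
    if not versions:
        return "deleted"  # no results: deletion completed
    return versions[0].get("Status", "unknown")

print(deletion_state('{"DBEngineVersions": []}'))                       # deleted
print(deletion_state('{"DBEngineVersions": [{"Status": "deleting"}]}')) # deleting
```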

# Troubleshooting SQL Server Developer Edition for RDS for SQL Server
<a name="sqlserver-dev-edition.troubleshooting"></a>

The following table lists some common errors and corresponding solutions when working with SQL Server Developer Edition for RDS for SQL Server.


**Common Errors and Solutions**  

| Error Code | Description | Solution | 
| --- | --- | --- | 
| InvalidParameterValue | Invalid CEV parameters or file references | Verify file names, paths, and parameter syntax | 
| DBSubnetGroupNotFound | Subnet group doesn't exist | Create subnet group or verify name | 
| InvalidVPCNetworkState | VPC configuration issue | Check VPC, subnets, and routing tables | 
| InvalidEngineVersion | CEV not available or invalid | Verify CEV status and name | 
| InvalidDBInstanceClass | Instance class not supported | Choose supported instance class | 
| CustomDBEngineVersionQuotaExceededFault | You have reached the maximum number of custom engine versions | Request a service quota increase, or delete unused CEVs | 
| CreateCustomDBEngineVersionFault | Amazon RDS was unable to access the specified installer file in the Amazon S3 bucket. | Grant the Amazon RDS service principal (rds.amazonaws.com) `s3:GetObject` permission in your Amazon S3 bucket policy, and validate that the Amazon S3 bucket Region is the same as the Region where you are creating the CEV. | 

# Working with Active Directory with RDS for SQL Server
<a name="User.SQLServer.ActiveDirectoryWindowsAuth"></a>

You can join an RDS for SQL Server DB instance to a Microsoft Active Directory (AD) domain. Your AD domain can be hosted on AWS Managed Microsoft AD, or on a self-managed AD in a location of your choice, including your corporate data centers, on Amazon EC2, or with other cloud providers.

You can authenticate domain users with NTLM and Kerberos authentication for both self-managed Active Directory and AWS Managed Microsoft AD.

In the following sections, you can find information about working with self-managed Active Directory and AWS Managed Active Directory for Microsoft SQL Server on Amazon RDS.

**Topics**
+ [Working with self-managed Active Directory with an Amazon RDS for SQL Server DB instance](USER_SQLServer_SelfManagedActiveDirectory.md)
+ [Working with AWS Managed Active Directory with RDS for SQL Server](USER_SQLServerWinAuth.md)

# Working with self-managed Active Directory with an Amazon RDS for SQL Server DB instance
<a name="USER_SQLServer_SelfManagedActiveDirectory"></a>

Amazon RDS for SQL Server integrates with your self-managed Active Directory (AD) domain, regardless of where your AD is hosted: in your data center, on Amazon EC2, or with other cloud providers. This integration enables direct user authentication through NTLM or Kerberos protocols, eliminating the need for intermediary domains or forest trusts. When you connect to your RDS for SQL Server DB instance, authentication requests are securely forwarded to your designated AD domain, maintaining your existing identity management structure while using Amazon RDS's managed database capabilities.

**Topics**
+ [Region and version availability](#USER_SQLServer_SelfManagedActiveDirectory.RegionVersionAvailability)
+ [Requirements](USER_SQLServer_SelfManagedActiveDirectory.Requirements.md)
+ [Considerations](#USER_SQLServer_SelfManagedActiveDirectory.Limitations)
+ [Setting up self-managed Active Directory](USER_SQLServer_SelfManagedActiveDirectory.SettingUp.md)
+ [Joining your DB instance to self-managed Active Directory](USER_SQLServer_SelfManagedActiveDirectory.Joining.md)
+ [Managing a DB instance in a self-managed Active Directory Domain](USER_SQLServer_SelfManagedActiveDirectory.Managing.md)
+ [Understanding self-managed Active Directory Domain membership](#USER_SQLServer_SelfManagedActiveDirectory.Understanding)
+ [Troubleshooting self-managed Active Directory](USER_SQLServer_SelfManagedActiveDirectory.TroubleshootingSelfManagedActiveDirectory.md)
+ [Restoring a SQL Server DB instance and then adding it to a self-managed Active Directory domain](#USER_SQLServer_SelfManagedActiveDirectory.Restore)

## Region and version availability
<a name="USER_SQLServer_SelfManagedActiveDirectory.RegionVersionAvailability"></a>

Amazon RDS supports self-managed AD for SQL Server using NTLM and Kerberos in all commercial AWS Regions and AWS GovCloud (US) Regions.

# Requirements
<a name="USER_SQLServer_SelfManagedActiveDirectory.Requirements"></a>

Make sure you've met the following requirements before joining an RDS for SQL Server DB instance to your self-managed AD domain.

**Topics**
+ [Configure your on-premises AD](#USER_SQLServer_SelfManagedActiveDirectory.Requirements.OnPremConfig)
+ [Configure your network connectivity](#USER_SQLServer_SelfManagedActiveDirectory.Requirements.NetworkConfig)
+ [Configure your AD domain service account](#USER_SQLServer_SelfManagedActiveDirectory.Requirements.DomainAccountConfig)
+ [Configuring secure communication over LDAPS](#USER_SQLServer_SelfManagedActiveDirectory.Requirements.LDAPS)

## Configure your on-premises AD
<a name="USER_SQLServer_SelfManagedActiveDirectory.Requirements.OnPremConfig"></a>

Make sure that you have an on-premises or other self-managed Microsoft AD that you can join the Amazon RDS for SQL Server instance to. Your on-premises AD should have the following configuration:
+ If you have AD sites defined, make sure the subnets in the VPC associated with your RDS for SQL Server DB instance are defined in your AD site. Confirm there aren't any conflicts between the subnets in your VPC and the subnets in your other AD sites.
+ Your AD domain controller has a domain functional level of Windows Server 2008 R2 or higher.
+ Your AD domain name can't be in Single Label Domain (SLD) format. RDS for SQL Server does not support SLD domains.
+ The fully qualified domain name (FQDN) for your AD can't exceed 47 characters.
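The last two requirements are easy to check mechanically before attempting a domain join. The following is a minimal sketch that encodes only the SLD and FQDN-length rules above; the function name is hypothetical:

```python
def check_ad_domain_name(fqdn):
    """Validate the self-managed AD domain-name requirements described above."""
    problems = []
    # Single Label Domain (SLD) names, such as "corp", are not supported.
    if "." not in fqdn:
        problems.append("single-label domains are not supported")
    # The FQDN can't exceed 47 characters.
    if len(fqdn) > 47:
        problems.append("FQDN exceeds 47 characters")
    return problems

print(check_ad_domain_name("corp.example.com"))  # []
print(check_ad_domain_name("corp"))              # ['single-label domains are not supported']
```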

## Configure your network connectivity
<a name="USER_SQLServer_SelfManagedActiveDirectory.Requirements.NetworkConfig"></a>

Make sure that you have met the following network configurations:
+ Configure connectivity between the Amazon VPC where you want to create the RDS for SQL Server DB instance and your self-managed AD. You can set up connectivity using AWS Direct Connect, AWS VPN, VPC peering, or AWS Transit Gateway.
+ For VPC security groups, the default security group for your default Amazon VPC is already added to your RDS for SQL Server DB instance in the console. Ensure that the security group and the VPC network ACLs for the subnet(s) where you're creating your RDS for SQL Server DB instance allow traffic on the ports and in the directions shown in the following diagram.  
![\[Self-managed AD network configuration port rules.\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/SQLServer_SelfManagedActiveDirectory_Requirements_NetworkConfig.png)

  The following table identifies the role of each port.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_SQLServer_SelfManagedActiveDirectory.Requirements.html)
+ Generally, the domain DNS servers are located in the AD domain controllers. You do not need to configure the VPC DHCP option set to use this feature. For more information, see [DHCP option sets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_DHCP_Options.html) in the *Amazon VPC User Guide*.

**Important**  
If you're using VPC network ACLs, you must also allow outbound traffic on dynamic ports (49152-65535) from your RDS for SQL Server DB instance. Ensure that these traffic rules are also mirrored on the firewalls that apply to each of the AD domain controllers, DNS servers, and RDS for SQL Server DB instances.  
While VPC security groups require ports to be opened only in the direction that network traffic is initiated, most Windows firewalls and VPC network ACLs require ports to be open in both directions.

## Configure your AD domain service account
<a name="USER_SQLServer_SelfManagedActiveDirectory.Requirements.DomainAccountConfig"></a>

Make sure that you have met the following requirements for an AD domain service account:
+ Make sure that you have a domain service account in your self-managed AD domain with delegated permissions to join computers to the domain. A domain service account is a user account in your self-managed AD that has been delegated permission to perform certain tasks.
+ The domain service account needs to be delegated the following permissions in the Organizational Unit (OU) that you're joining your RDS for SQL Server DB instance to:
  + Validated ability to write to the DNS host name
  + Validated ability to write to the service principal name
  + Create and delete computer objects

  These represent the minimum set of permissions that are required to join computer objects to your self-managed AD. For more information, see [Errors when attempting to join computers to a domain](https://learn.microsoft.com/en-US/troubleshoot/windows-server/identity/access-denied-when-joining-computers) in the Microsoft Windows Server documentation.
+ To use Kerberos authentication, you need to provide Service Principal Names (SPNs) and DNS permissions to your AD domain service account:
  + **Write SPN**: Delegate the **Write SPN** permission to the AD domain service account in the OU where you need to join the RDS for SQL Server DB instance. This permission is different from the validated write SPN.
  + **DNS permissions**: Provide the following permissions to the AD domain service account in the DNS manager at the server level for your domain controller:
    + List contents
    + Read all properties
    + Read permissions

**Important**  
Do not move computer objects that RDS for SQL Server creates in the Organizational Unit after your DB instance is created. Moving the associated objects causes your RDS for SQL Server DB instance to become misconfigured. If you need to move the computer objects created by Amazon RDS, use the [ModifyDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) RDS API operation to modify the domain parameters with the desired location of the computer objects.

## Configuring secure communication over LDAPS
<a name="USER_SQLServer_SelfManagedActiveDirectory.Requirements.LDAPS"></a>

We recommend communication over LDAPS for RDS to query and access computer objects and SPNs in the domain controller. To use secure LDAP, use a valid SSL certificate on your domain controller that meets the requirements for LDAPS. If a valid SSL certificate does not exist on the domain controller, the RDS for SQL Server DB instance defaults to using LDAP. For more information on certificate validity, see [Requirements for an LDAPS certificate](https://learn.microsoft.com/en-us/troubleshoot/windows-server/active-directory/enable-ldap-over-ssl-3rd-certification-authority#requirements-for-an-ldaps-certificate).

## Considerations
<a name="USER_SQLServer_SelfManagedActiveDirectory.Limitations"></a>

When adding an RDS for SQL Server DB instance to a self-managed AD, consider the following:
+ Your DB instances sync with the AWS NTP service, not the AD domain's time server. For database connections between linked SQL Server instances within your AD domain, you can use only SQL authentication, not Windows authentication.
+ Group Policy Object settings from your self-managed AD domain are not propagated to your RDS for SQL Server instances.

# Setting up self-managed Active Directory
<a name="USER_SQLServer_SelfManagedActiveDirectory.SettingUp"></a>

To set up a self-managed AD, take the following steps.

**Topics**
+ [Step 1: Create an Organizational Unit in your AD](#USER_SQLServer_SelfManagedActiveDirectory.SettingUp.CreateOU)
+ [Step 2: Create an AD domain service account in your AD](#USER_SQLServer_SelfManagedActiveDirectory.SettingUp.CreateADuser)
+ [Step 3: Delegate control to the AD domain service account](#USER_SQLServer_SelfManagedActiveDirectory.SettingUp.DelegateControl)
+ [Step 4: Create an AWS KMS key](#USER_SQLServer_SelfManagedActiveDirectory.SettingUp.CreateKMSkey)
+ [Step 5: Create an AWS secret](#USER_SQLServer_SelfManagedActiveDirectory.SettingUp.CreateSecret)

## Step 1: Create an Organizational Unit in your AD
<a name="USER_SQLServer_SelfManagedActiveDirectory.SettingUp.CreateOU"></a>

**Important**  
 We recommend creating a dedicated OU and service credential scoped to that OU for any AWS account that owns an RDS for SQL Server DB instance joined to your self-managed AD domain. By dedicating an OU and service credential, you can avoid conflicting permissions and follow the principle of least privilege. 

**To create an OU in your AD**

1. Connect to your AD domain as a domain administrator.

1. Open **Active Directory Users and Computers** and select the domain where you want to create your OU.

1. Right-click the domain and choose **New**, then **Organizational Unit**.

1. Enter a name for the OU.

1. Keep the box selected for **Protect container from accidental deletion**.

1. Click **OK**. Your new OU will appear under your domain.

## Step 2: Create an AD domain service account in your AD
<a name="USER_SQLServer_SelfManagedActiveDirectory.SettingUp.CreateADuser"></a>

The domain service account credentials will be used for the secret in AWS Secrets Manager.

**To create an AD domain service account in your AD**

1. Open **Active Directory Users and Computers** and select the domain and OU where you want to create your user.

1. Right-click the **Users** object and choose **New**, then **User**.

1. Enter a first name, last name, and logon name for the user. Click **Next**.

1. Enter a password for the user. Don't select **"User must change password at next login"**. Don't select **"Account is disabled"**. Click **Next**.

1. Click **OK**. Your new user will appear under your domain.

## Step 3: Delegate control to the AD domain service account
<a name="USER_SQLServer_SelfManagedActiveDirectory.SettingUp.DelegateControl"></a>

**To delegate control to the AD domain service account in your domain**

1. Open **Active Directory Users and Computers** MMC snap-in and select the domain where you want to create your user.

1. Right-click the OU that you created earlier and choose **Delegate Control**.

1. On the **Delegation of Control Wizard**, click **Next**.

1. On the **Users or Groups** section, click **Add**.

1. On the **Select Users, Computers, or Groups** section, enter the AD domain service account you created and click **Check Names**. If your AD domain service account check is successful, click **OK**.

1. On the **Users or Groups** section, confirm your AD domain service account was added and click **Next**.

1. On the **Tasks to Delegate** section, choose **Create a custom task to delegate** and click **Next**.

1. On the **Active Directory Object Type** section:

   1. Choose **Only the following objects in the folder**.

   1. Select **Computer Objects**.

   1. Select **Create selected objects in this folder**.

   1. Select **Delete selected objects in this folder** and click **Next**.

1. On the **Permissions** section:

   1. Keep **General** selected.

   1. Select **Validated write to DNS host name**.

   1. Select **Validated write to service principal name**.

   1. To enable Kerberos authentication, keep **Property-specific** selected and select **Write servicePrincipalName** from the list. Click **Next**.

1. For **Completing the Delegation of Control Wizard**, review and confirm your settings and click **Finish**.

1. For Kerberos authentication, open the DNS Manager and open **Server** properties.

   1. In the Windows **Run** dialog box, type `dnsmgmt.msc`.

   1. Add the AD domain service account under the **Security** tab.

   1. Select the **Read** permission and apply your changes.

## Step 4: Create an AWS KMS key
<a name="USER_SQLServer_SelfManagedActiveDirectory.SettingUp.CreateKMSkey"></a>

The KMS key is used to encrypt your AWS secret.

**To create an AWS KMS key**
**Note**  
 For **Encryption Key**, don't use the AWS default KMS key. Be sure to create the AWS KMS key in the same AWS account that contains the RDS for SQL Server DB instance that you want to join to your self-managed AD. 

1. In the AWS KMS console, choose **Create key**.

1. For **Key Type**, choose **Symmetric**.

1. For **Key Usage**, choose **Encrypt and decrypt**.

1. For **Advanced options**:

   1. For **Key material origin**, choose **KMS**.

   1. For **Regionality**, choose **Single-Region key** and click **Next**.

1. For **Alias**, provide a name for the KMS key.

1. (Optional) For **Description**, provide a description of the KMS key.

1. (Optional) For **Tags**, provide a tag for the KMS key and click **Next**.

1. For **Key administrators**, provide the name of an IAM user and select it.

1. For **Key deletion**, keep the box selected for **Allow key administrators to delete this key** and click **Next**.

1. For **Key users**, provide the same IAM user from the previous step and select it. Click **Next**.

1. Review the configuration.

1. For **Key policy**, add the following statement to the policy's **Statement** array:

   ```
   {
       "Sid": "Allow use of the KMS key on behalf of RDS",
       "Effect": "Allow",
       "Principal": {
           "Service": [
               "rds.amazonaws.com"
           ]
       },
       "Action": "kms:Decrypt",
       "Resource": "*"
   }
   ```

1. Click **Finish**.
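If you script the key policy step instead of using the console, a minimal sketch such as the following can merge the statement shown above into an existing key policy document before you apply it (for example, with `aws kms get-key-policy` and `aws kms put-key-policy`). The function name `add_rds_statement` is my own, not part of any AWS tooling.

```python
import json

# The statement from the key policy step above, allowing RDS to use this key.
RDS_STATEMENT = {
    "Sid": "Allow use of the KMS key on behalf of RDS",
    "Effect": "Allow",
    "Principal": {"Service": ["rds.amazonaws.com"]},
    "Action": "kms:Decrypt",
    "Resource": "*",
}

def add_rds_statement(policy_json: str) -> str:
    """Append the RDS statement to a key policy document if not already present."""
    policy = json.loads(policy_json)
    existing_sids = {s.get("Sid") for s in policy.get("Statement", [])}
    if RDS_STATEMENT["Sid"] not in existing_sids:
        policy.setdefault("Statement", []).append(RDS_STATEMENT)
    return json.dumps(policy, indent=4)
```

The check on `Sid` makes the merge idempotent, so rerunning your setup script doesn't duplicate the statement.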

## Step 5: Create an AWS secret
<a name="USER_SQLServer_SelfManagedActiveDirectory.SettingUp.CreateSecret"></a>

**To create a secret**
**Note**  
 Be sure to create the secret in the same AWS account that contains the RDS for SQL Server DB instance that you want to join to your self-managed AD. 

1. In AWS Secrets Manager, choose **Store a new secret**.

1. For **Secret type**, choose **Other type of secret**.

1. For **Key/value pairs**, add your two keys:

   1. For the first key, enter `SELF_MANAGED_ACTIVE_DIRECTORY_USERNAME`.

   1. For the value of the first key, enter only the username (without the domain prefix) of the AD user. Do not include the domain name as this causes instance creation to fail.

   1. For the second key, enter `SELF_MANAGED_ACTIVE_DIRECTORY_PASSWORD`.

   1. For the value of the second key, enter the password that you created for the AD user on your domain.

1. For **Encryption key**, enter the KMS key that you created in a previous step and click **Next**.

1. For **Secret name**, enter a descriptive name that helps you find your secret later.

1. (Optional) For **Description**, enter a description for the secret.

1. For **Resource permission**, click **Edit**.

1. Add the following policy to the permission policy:
**Note**  
We recommend that you use the `aws:sourceAccount` and `aws:sourceArn` conditions in the policy to avoid the *confused deputy* problem. Use your AWS account for `aws:sourceAccount` and the RDS for SQL Server DB instance ARN for `aws:sourceArn`. For more information, see [Preventing cross-service confused deputy problems](cross-service-confused-deputy-prevention.md).


   ```
   {
       "Version": "2012-10-17",
       "Statement":
       [
           {
               "Effect": "Allow",
               "Principal":
               {
                   "Service": "rds.amazonaws.com"
               },
               "Action": "secretsmanager:GetSecretValue",
               "Resource": "*",
               "Condition":
               {
                   "StringEquals":
                   {
                       "aws:sourceAccount": "123456789012"
                   },
                   "ArnLike":
                   {
                       "aws:sourceArn": "arn:aws:rds:us-west-2:123456789012:db:*"
                   }
               }
           }
       ]
   }
   ```


1. Click **Save** then click **Next**.

1. For **Configure rotation settings**, keep the default values and choose **Next**.

1. Review the settings for the secret and click **Store**.

1. Choose the secret you created and copy the value for the **Secret ARN**. This will be used in the next step to set up self-managed Active Directory.
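To illustrate the key naming rules above, the following sketch builds the `SecretString` payload and guards against the domain-prefix mistake called out in step 3. The helper name is hypothetical; passing the payload to Secrets Manager (for example, via `aws secretsmanager create-secret --secret-string`) is left to your tooling.

```python
import json

def build_secret_string(username: str, password: str) -> str:
    """Build the Secrets Manager payload for a self-managed AD service account."""
    # The username must be the bare account name. A domain prefix (DOMAIN\user)
    # or UPN suffix (user@domain) causes DB instance creation to fail.
    if "\\" in username or "@" in username:
        raise ValueError("Provide the bare username without a domain prefix or suffix")
    return json.dumps({
        "SELF_MANAGED_ACTIVE_DIRECTORY_USERNAME": username,
        "SELF_MANAGED_ACTIVE_DIRECTORY_PASSWORD": password,
    })
```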

# Joining your DB instance to self-managed Active Directory
<a name="USER_SQLServer_SelfManagedActiveDirectory.Joining"></a>

To join your RDS for SQL Server DB instance to your self-managed AD, follow these steps:

## Step 1: Create or modify a SQL Server DB instance
<a name="USER_SQLServer_SelfManagedActiveDirectory.SettingUp.CreateModify"></a>

You can use the console, CLI, or RDS API to associate an RDS for SQL Server DB instance with a self-managed AD domain. You can do this in one of the following ways:
+ Create a new SQL Server DB instance using the console, the [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) CLI command, or the [CreateDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html) RDS API operation.

  For instructions, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).
+ Modify an existing SQL Server DB instance using the console, the [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) CLI command, or the [ModifyDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) RDS API operation.

  For instructions, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).
+ Restore a SQL Server DB instance from a DB snapshot using the console, the [restore-db-instance-from-db-snapshot](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-db-snapshot.html) CLI command, or the [RestoreDBInstanceFromDBSnapshot](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromDBSnapshot.html) RDS API operation.

  For instructions, see [Restoring to a DB instance](USER_RestoreFromSnapshot.md).
+ Restore a SQL Server DB instance to a point-in-time using the console, the [restore-db-instance-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-to-point-in-time.html) CLI command, or the [RestoreDBInstanceToPointInTime](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceToPointInTime.html) RDS API operation.

  For instructions, see [Restoring a DB instance to a specified time for Amazon RDS](USER_PIT.md).

When you use the AWS CLI, the following parameters are required for the DB instance to be able to use the self-managed AD domain that you created:
+ For the `--domain-fqdn` parameter, use the fully qualified domain name (FQDN) of your self-managed AD.
+ For the `--domain-ou` parameter, use the OU that you created in your self-managed AD.
+ For the `--domain-auth-secret-arn` parameter, use the value of the **Secret ARN** that you created in a previous step.
+ For the `--domain-dns-ips` parameter, use the primary and secondary IPv4 addresses of the DNS servers for your self-managed AD. If you don't have a secondary DNS server IP address, enter the primary IP address twice.
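A small pre-flight check of these parameters can catch common mistakes before you call the CLI. This sketch (the function name is my own) normalizes the value for `--domain-dns-ips`, duplicating the primary address when no secondary exists as described above, and rejects anything that isn't a valid IPv4 address.

```python
import ipaddress

def normalize_domain_dns_ips(ips: list[str]) -> list[str]:
    """Return exactly two IPv4 addresses suitable for --domain-dns-ips."""
    if not 1 <= len(ips) <= 2:
        raise ValueError("Provide one or two DNS server IP addresses")
    for ip in ips:
        ipaddress.IPv4Address(ip)  # raises ValueError for non-IPv4 input
    # With no secondary DNS server, repeat the primary address.
    return ips if len(ips) == 2 else ips * 2
```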

The following example CLI commands show how to create, modify, and remove an RDS for SQL Server DB instance with a self-managed AD domain.

**Important**  
If you modify a DB instance to join it to or remove it from a self-managed AD domain, a reboot of the DB instance is required for the modification to take effect. You can choose to apply the changes immediately or wait until the next maintenance window. Choosing the **Apply Immediately** option will cause downtime for a single-AZ DB instance. A multi-AZ DB instance will perform a failover before completing a reboot. For more information, see [Using the schedule modifications setting](USER_ModifyInstance.ApplyImmediately.md). 

The following CLI command creates a new RDS for SQL Server DB instance and joins it to a self-managed AD domain.

For Linux, macOS, or Unix:

```
aws rds create-db-instance \
    --db-instance-identifier my-DB-instance \
    --db-instance-class db.m5.xlarge \
    --allocated-storage 50 \
    --engine sqlserver-se \
    --engine-version 15.00.4043.16.v1 \
    --license-model license-included \
    --master-username my-master-username \
    --master-user-password my-master-password \
    --domain-fqdn my_AD_domain.my_AD.my_domain \
    --domain-ou OU=my-AD-test-OU,DC=my-AD-test,DC=my-AD,DC=my-domain \
    --domain-auth-secret-arn "arn:aws:secretsmanager:region:account-number:secret:my-AD-test-secret-123456" \
    --domain-dns-ips "10.11.12.13" "10.11.12.14"
```

For Windows:

```
aws rds create-db-instance ^
    --db-instance-identifier my-DB-instance ^
    --db-instance-class db.m5.xlarge ^
    --allocated-storage 50 ^
    --engine sqlserver-se ^
    --engine-version 15.00.4043.16.v1 ^
    --license-model license-included ^
    --master-username my-master-username ^
    --master-user-password my-master-password ^
    --domain-fqdn my_AD_domain.my_AD.my_domain ^
    --domain-ou OU=my-AD-test-OU,DC=my-AD-test,DC=my-AD,DC=my-domain ^
    --domain-auth-secret-arn "arn:aws:secretsmanager:region:account-number:secret:my-AD-test-secret-123456" ^
    --domain-dns-ips "10.11.12.13" "10.11.12.14"
```

The following CLI command modifies an existing RDS for SQL Server DB instance to use a self-managed AD domain.

For Linux, macOS, or Unix:

```
aws rds modify-db-instance \
    --db-instance-identifier my-DB-instance \
    --domain-fqdn my_AD_domain.my_AD.my_domain \
    --domain-ou OU=my-AD-test-OU,DC=my-AD-test,DC=my-AD,DC=my-domain \
    --domain-auth-secret-arn "arn:aws:secretsmanager:region:account-number:secret:my-AD-test-secret-123456" \
    --domain-dns-ips "10.11.12.13" "10.11.12.14"
```

For Windows:

```
aws rds modify-db-instance ^
    --db-instance-identifier my-DB-instance ^
    --domain-fqdn my_AD_domain.my_AD.my_domain ^
    --domain-ou OU=my-AD-test-OU,DC=my-AD-test,DC=my-AD,DC=my-domain ^
    --domain-auth-secret-arn "arn:aws:secretsmanager:region:account-number:secret:my-AD-test-secret-123456" ^
    --domain-dns-ips "10.11.12.13" "10.11.12.14"
```

The following CLI command removes an RDS for SQL Server DB instance from a self-managed AD domain.

For Linux, macOS, or Unix:

```
aws rds modify-db-instance \
    --db-instance-identifier my-DB-instance \
    --disable-domain
```

For Windows:

```
aws rds modify-db-instance ^
    --db-instance-identifier my-DB-instance ^
    --disable-domain
```

## Step 2: Using Kerberos or NTLM Authentication
<a name="USER_SQLServer_SelfManagedActiveDirectory.SettingUp.KerbNTLM"></a>

### NTLM authentication
<a name="USER_SQLServer_SelfManagedActiveDirectory.SettingUp.KerbNTLM.NTLM"></a>

Each Amazon RDS DB instance has an endpoint and each endpoint has a DNS name and port number for the DB instance. To connect to your DB instance using a SQL client application, you need the DNS name and port number for your DB instance. To authenticate using NTLM authentication, you must connect to the RDS endpoint or the listener endpoint if you are using a Multi-AZ deployment.

During planned database maintenance or unplanned service disruption, Amazon RDS automatically fails over to the up-to-date secondary database so operations can resume quickly without manual intervention. The primary and secondary instances use the same endpoint, whose physical network address transitions to the secondary as part of the failover process. You don't have to reconfigure your application when a failover occurs.

### Kerberos authentication
<a name="USER_SQLServer_SelfManagedActiveDirectory.SettingUp.Kerb"></a>

Kerberos-based authentication for RDS for SQL Server requires connections be made to a specific Service Principal Name (SPN). However, after a failover event, the application might not be aware of the new SPN. To address this, RDS for SQL Server offers a Kerberos-based endpoint.

The Kerberos-based endpoint follows a specific format. If your RDS endpoint is `rds-instance-name.account-region-hash.aws-region.rds.amazonaws.com`, the corresponding Kerberos-based endpoint is `rds-instance-name.account-region-hash.aws-region.awsrds.FQDN`, where *FQDN* is the fully qualified domain name of your AD domain.

For example, if the RDS endpoint is `ad-test.cocv6zwtircu.us-east-1.rds.amazonaws.com` and the domain name is `corp-ad.company.com`, the Kerberos-based endpoint would be `ad-test.cocv6zwtircu.us-east-1.awsrds.corp-ad.company.com`.

This Kerberos-based endpoint can be used to authenticate with the SQL Server instance using Kerberos, even after a failover event, as the endpoint is automatically updated to point to the new SPN of the primary SQL Server instance.
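The transformation in the worked example above is mechanical, so a small helper (mine, not an AWS API) can derive the Kerberos-based endpoint, assuming the standard `…rds.amazonaws.com` endpoint form:

```python
def kerberos_endpoint(rds_endpoint: str, domain_fqdn: str) -> str:
    """Derive the Kerberos-based endpoint from a standard RDS endpoint.

    Replaces the trailing 'rds.amazonaws.com' suffix with 'awsrds.<domain FQDN>'.
    """
    suffix = ".rds.amazonaws.com"
    if not rds_endpoint.endswith(suffix):
        raise ValueError("Not a standard RDS endpoint")
    return f"{rds_endpoint[:-len(suffix)]}.awsrds.{domain_fqdn}"
```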

### Finding your CNAME
<a name="USER_SQLServer_SelfManagedActiveDirectory.SettingUp.CNAME"></a>

To find your CNAME, connect to your domain controller and open **DNS Manager**. Navigate to **Forward Lookup Zones** and then to your FQDN.

Expand **awsrds**, then the **aws-region** folder, and then the account- and Region-specific hash to find the CNAME record.

![\[DNS Manager showing the Kerberos endpoint CNAME record\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/kerb-endpoint-selfManagedAD-RDSMS.png)


If connecting to the CNAME from a remote client returns an NTLM connection, check whether the required ports are allowlisted.

To check if your connection is using Kerberos, run the following query:

```
SELECT net_transport, auth_scheme
    FROM sys.dm_exec_connections
    WHERE session_id = @@SPID;
```

If your instance returns an NTLM connection when you connect to a Kerberos endpoint, verify your network configuration and user configurations. See [Configure your network connectivity](USER_SQLServer_SelfManagedActiveDirectory.Requirements.md#USER_SQLServer_SelfManagedActiveDirectory.Requirements.NetworkConfig).
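When a Kerberos endpoint falls back to NTLM, a quick reachability probe against the domain controller can help narrow down whether the required ports are open. The following is a local sketch using only the standard library; the port list is an illustrative subset of the requirements linked above, not the full allowlist.

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports commonly involved in Kerberos authentication against a domain
# controller (illustrative subset): Kerberos, kpasswd, LDAP.
KERBEROS_PORTS = [88, 464, 389]
```

Run `tcp_reachable(domain_controller_ip, port)` for each port from a host in the same subnet as your DB instance to approximate what the instance can reach.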

## Step 3: Create Windows Authentication SQL Server logins
<a name="USER_SQLServer_SelfManagedActiveDirectory.CreateLogins"></a>

Use the Amazon RDS master user credentials to connect to the SQL Server DB instance as you do for any other DB instance. Because the DB instance is joined to the self-managed AD domain, you can provision SQL Server logins and users. You do this from the AD users and groups utility in your self-managed AD domain. Database permissions are managed through standard SQL Server permissions granted and revoked to these Windows logins.

In order for a self-managed AD domain service account to authenticate with SQL Server, a SQL Server Windows login must exist for the self-managed AD domain service account or a self-managed AD group that the user is a member of. Fine-grained access control is handled through granting and revoking permissions on these SQL Server logins. A self-managed AD domain service account that doesn't have a SQL Server login or belong to a self-managed AD group with such a login can't access the SQL Server DB instance.

The ALTER ANY LOGIN permission is required to create a self-managed AD SQL Server login. If you haven't created any logins with this permission, connect as the DB instance's master user using SQL Server Authentication and create your self-managed AD SQL Server logins under the context of the master user.

You can run a data definition language (DDL) command such as the following to create a SQL Server login for a self-managed AD domain service account or group.

**Note**  
Specify users and groups using the pre-Windows 2000 login name in the format `my_AD_domain\my_AD_domain_user`. You can't use a user principal name (UPN) in the format *`my_AD_domain_user`*`@`*`my_AD_domain`*.

```
USE [master]
GO
CREATE LOGIN [my_AD_domain\my_AD_domain_user] FROM WINDOWS WITH DEFAULT_DATABASE = [master], DEFAULT_LANGUAGE = [us_english];
GO
```

For more information, see [CREATE LOGIN (Transact-SQL)](https://msdn.microsoft.com/en-us/library/ms189751.aspx) in the Microsoft Developer Network documentation.

Users (both humans and applications) from your domain can now connect to the RDS for SQL Server instance from a self-managed AD domain-joined client machine using Windows authentication.
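If you script login creation for many accounts, a tiny generator can enforce the naming note above before the DDL reaches the server. This is a hypothetical helper, not part of any AWS or SQL Server tooling; it rejects UPN-format names and builds the same statement shown above.

```python
def create_login_ddl(domain_netbios: str, account: str) -> str:
    """Build a CREATE LOGIN statement for an AD user or group.

    SQL Server on RDS expects the pre-Windows 2000 form DOMAIN\\account;
    a UPN such as user@domain is rejected here because RDS doesn't accept it.
    """
    if "@" in account or "\\" in account or "\\" in domain_netbios:
        raise ValueError("Pass the NetBIOS domain and bare account name separately")
    return (
        f"CREATE LOGIN [{domain_netbios}\\{account}] FROM WINDOWS "
        "WITH DEFAULT_DATABASE = [master], DEFAULT_LANGUAGE = [us_english];"
    )
```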

# Managing a DB instance in a self-managed Active Directory Domain
<a name="USER_SQLServer_SelfManagedActiveDirectory.Managing"></a>

 You can use the console, AWS CLI, or the Amazon RDS API to manage your DB instance and its relationship with your self-managed AD domain. For example, you can move the DB instance into, out of, or between domains. 

 For example, using the Amazon RDS API, you can do the following: 
+ To reattempt a self-managed domain join for a failed membership, use the [ModifyDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) API operation and specify the same set of parameters:
  + `--domain-fqdn`
  + `--domain-dns-ips`
  + `--domain-ou`
  + `--domain-auth-secret-arn`
+ To remove a DB instance from a self-managed domain, use the `ModifyDBInstance` API operation and specify `--disable-domain` for the domain parameter.
+ To move a DB instance from one self-managed domain to another, use the `ModifyDBInstance` API operation and specify the domain parameters for the new domain:
  + `--domain-fqdn`
  + `--domain-dns-ips`
  + `--domain-ou`
  + `--domain-auth-secret-arn`
+ To list self-managed AD domain membership for each DB instance, use the [DescribeDBInstances](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/DescribeDBInstances.html) API operation.
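For example, a `DescribeDBInstances` response carries a `DomainMemberships` list per DB instance. The following sketch extracts the membership status from a response-shaped dict; the sample data is invented for illustration, and in practice you would pass the response from the AWS SDK or CLI.

```python
def domain_statuses(response: dict) -> dict[str, str]:
    """Map DB instance identifier to AD domain membership status."""
    return {
        db["DBInstanceIdentifier"]: membership["Status"]
        for db in response.get("DBInstances", [])
        for membership in db.get("DomainMemberships", [])
    }

# Invented sample shaped like a DescribeDBInstances response.
sample_response = {
    "DBInstances": [
        {
            "DBInstanceIdentifier": "my-DB-instance",
            "DomainMemberships": [
                {"FQDN": "corp-ad.company.com", "Status": "joined"}
            ],
        }
    ]
}
```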

## Understanding self-managed Active Directory Domain membership
<a name="USER_SQLServer_SelfManagedActiveDirectory.Understanding"></a>

After you create or modify your DB instance while specifying AD details, the instance becomes a member of the self-managed AD domain. The AWS console indicates the status of the self-managed Active Directory domain membership for the DB instance. The status of the DB instance can be one of the following: 
+  **joined** – The instance is a member of the AD domain.
+  **joining** – The instance is in the process of becoming a member of the AD domain.
+  **pending-join** – The instance membership is pending.
+  **pending-maintenance-join** – AWS will attempt to make the instance a member of the AD domain during the next scheduled maintenance window.
+  **pending-removal** – The removal of the instance from the AD domain is pending.
+  **pending-maintenance-removal** – AWS will attempt to remove the instance from the AD domain during the next scheduled maintenance window.
+  **failed** – A configuration problem has prevented the instance from joining the AD domain. Check and fix your configuration before reissuing the instance modify command.
+  **removing** – The instance is being removed from the self-managed AD domain.

**Important**  
A request to become a member of a self-managed AD domain can fail because of a network connectivity issue. For example, you might create a DB instance or modify an existing instance and have the attempt fail for the DB instance to become a member of a self-managed AD domain. In this case, either reissue the command to create or modify the DB instance or modify the newly created instance to join the self-managed AD domain.

# Troubleshooting self-managed Active Directory
<a name="USER_SQLServer_SelfManagedActiveDirectory.TroubleshootingSelfManagedActiveDirectory"></a>

The following are issues you might encounter when you set up or modify self-managed AD.



| Error Code | Description | Common causes | Troubleshooting suggestions | 
| --- | --- | --- | --- | 
| Error 2 / 0x2 | The system cannot find the file specified. | The format or location of the Organizational Unit (OU) specified with the `--domain-ou` parameter is invalid, or the domain service account specified via AWS Secrets Manager lacks the permissions required to join the OU. | Review the `--domain-ou` parameter. Ensure the domain service account has the correct permissions on the OU. For more information, see [Configure your AD domain service account](USER_SQLServer_SelfManagedActiveDirectory.Requirements.md#USER_SQLServer_SelfManagedActiveDirectory.Requirements.DomainAccountConfig).  | 
| Error 5 / 0x5 | Access is denied. | Misconfigured permissions for the domain service account, or the computer account already exists in the domain. | Review the domain service account permissions in the domain, and verify that the RDS computer account is not duplicated in the domain. You can verify the name of the RDS computer account by running `SELECT @@SERVERNAME` on your RDS for SQL Server DB instance. If you are using Multi-AZ, try rebooting with failover and then verify the RDS computer account again. For more information, see [Rebooting a DB instance](USER_RebootInstance.md). | 
| Error 87 / 0x57 | The parameter is incorrect. | The domain service account specified via AWS Secrets Manager doesn't have the correct permissions. The user profile may also be corrupted. | Review the requirements for the domain service account. For more information, see [Configure your AD domain service account](USER_SQLServer_SelfManagedActiveDirectory.Requirements.md#USER_SQLServer_SelfManagedActiveDirectory.Requirements.DomainAccountConfig).  | 
| Error 234 / 0xEA | Specified Organizational Unit (OU) does not exist. | The OU specified with the `--domain-ou` parameter doesn't exist in your self-managed AD. | Review the `--domain-ou` parameter and ensure the specified OU exists in your self-managed AD. | 
| Error 1326 / 0x52E | The user name or password is incorrect. | The domain service account credentials provided in AWS Secrets Manager contain an unknown username or a bad password. The domain account may also be disabled in your self-managed AD. | Ensure the credentials provided in AWS Secrets Manager are correct and the domain account is enabled in your self-managed AD. | 
| Error 1355 / 0x54B | The specified domain either does not exist or could not be contacted. | The domain is down, the specified DNS IP addresses are unreachable, or the specified FQDN is unreachable. | Review the `--domain-dns-ips` and `--domain-fqdn` parameters to ensure they're correct. Review the networking configuration of your RDS for SQL Server DB instance and ensure your self-managed AD is reachable. For more information, see [Configure your network connectivity](USER_SQLServer_SelfManagedActiveDirectory.Requirements.md#USER_SQLServer_SelfManagedActiveDirectory.Requirements.NetworkConfig).  | 
| Error 1722 / 0x6BA | The RPC server is unavailable. | There was an issue reaching the RPC service of your AD domain. This might be a service or network issue. | Validate that the RPC service is running on your domain controllers and that the TCP ports `135` and `49152-65535` are reachable on your domain from your RDS for SQL Server DB instance. | 
|  Error 1727 / 0x6BF  |  The remote procedure call failed and did not execute.  |  A network connectivity issue or firewall restriction is blocking RPC communication to the domain controller.  |  If using a cross-VPC domain join, validate that cross-VPC communication is set up correctly with either VPC peering or Transit Gateway. Ensure TCP high ports `49152-65535` are reachable on your domain from your RDS for SQL Server DB instance, accounting for any firewall restrictions.  | 
| Error 2224 / 0x8B0 | The user account already exists. | The computer account that's attempting to be added to your self-managed AD already exists. | Identify the computer account by running `SELECT @@SERVERNAME` on your RDS for SQL Server DB instance and then carefully remove it from your self-managed AD. | 
| Error 2242 / 0x8c2 | The password of this user has expired. | The password for the domain service account specified via AWS Secrets Manager has expired. | Update the password for the domain service account used to join your RDS for SQL Server DB instance to your self-managed AD. | 

After joining your DB instance to a self-managed Active Directory domain, you might receive RDS events related to your domain health.

```
Unhealthy domain state detected while attempt to verify or 
configure your Kerberos endpoint in your domain on 
node node_n. message
```

For Multi-AZ instances, you might notice the error reported for both node1 and node2, which indicates that your instance's Kerberos configuration is not ready for failover. In the event of a failover, you might experience authentication difficulties using Kerberos. Resolve the configuration issues to ensure your Kerberos setup is valid and up to date. After that, no further action is required to use Kerberos authentication on the new primary host, provided all network and permission configurations are in place.

For Single-AZ instances, node1 is the primary node. If your Kerberos authentication is not working as expected, check the instance events and resolve the configuration issues to ensure Kerberos setup is valid and up to date.

## Restoring a SQL Server DB instance and then adding it to a self-managed Active Directory domain
<a name="USER_SQLServer_SelfManagedActiveDirectory.Restore"></a>

You can restore a DB snapshot or do point-in-time recovery (PITR) for a SQL Server DB instance and then add it to a self-managed Active Directory domain. Once the DB instance is restored, modify the instance using the process explained in [Step 1: Create or modify a SQL Server DB instance](USER_SQLServer_SelfManagedActiveDirectory.Joining.md#USER_SQLServer_SelfManagedActiveDirectory.SettingUp.CreateModify) to add the DB instance to a self-managed AD domain.

# Working with AWS Managed Active Directory with RDS for SQL Server
<a name="USER_SQLServerWinAuth"></a>

You can use AWS Managed Microsoft AD to authenticate users with Windows Authentication when they connect to your RDS for SQL Server DB instance. The DB instance works with AWS Directory Service for Microsoft Active Directory, also called AWS Managed Microsoft AD, to enable Windows Authentication. When users authenticate with a SQL Server DB instance joined to the trusting domain, authentication requests are forwarded to the domain directory that you create with Directory Service. 

## Region and version availability
<a name="USER_SQLServerWinAuth.RegionVersionAvailability"></a>

Amazon RDS supports using only AWS Managed Microsoft AD for Windows Authentication. RDS doesn't support using AD Connector. For more information, see the following:
+ [Application compatibility policy for AWS Managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_app_compatibility.html)
+ [Application compatibility policy for AD Connector](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ad_connector_app_compatibility.html)

For information on version and Region availability, see [Kerberos authentication with RDS for SQL Server](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RDS_Fea_Regions_DB-eng.Feature.KerberosAuthentication.html#Concepts.RDS_Fea_Regions_DB-eng.Feature.KerberosAuthentication.sq).

## Overview of setting up Windows authentication
<a name="USER_SQLServerWinAuth.overview"></a>

Amazon RDS uses mixed mode for Windows Authentication. This approach means that the *master user* (the name and password used to create your SQL Server DB instance) uses SQL Authentication. Because the master user account is a privileged credential, you should restrict access to this account.

To get Windows Authentication using an on-premises or self-hosted Microsoft Active Directory, create a forest trust. The trust can be one-way or two-way. For more information on setting up forest trusts using Directory Service, see [When to create a trust relationship](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_setup_trust.html) in the *AWS Directory Service Administration Guide*.

To set up Windows Authentication for a SQL Server DB instance, complete the following steps, explained in greater detail in [Setting up Windows Authentication for SQL Server DB instances](USER_SQLServerWinAuth.SettingUp.md):

1. Use AWS Managed Microsoft AD, either from the AWS Management Console or Directory Service API, to create an AWS Managed Microsoft AD directory. 

1. If you use the AWS CLI or Amazon RDS API to create your SQL Server DB instance, create an AWS Identity and Access Management (IAM) role. This role uses the managed IAM policy `AmazonRDSDirectoryServiceAccess` and allows Amazon RDS to make calls to your directory. If you use the console to create your SQL Server DB instance, AWS creates the IAM role for you. 

   For the role to allow access, the AWS Security Token Service (AWS STS) endpoint must be activated in the AWS Region for your AWS account. AWS STS endpoints are active by default in all AWS Regions, and you can use them without any further actions. For more information, see [Managing AWS STS in an AWS Region](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html) in the *IAM User Guide*.

1. Create and configure users and groups in the AWS Managed Microsoft AD directory using the Microsoft Active Directory tools. For more information about creating users and groups in your Active Directory, see [Manage users and groups in AWS Managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_manage_users_groups.html) in the *AWS Directory Service Administration Guide*.

1. If you plan to locate the directory and the DB instance in different VPCs, enable cross-VPC traffic.

1. Use Amazon RDS to create a new SQL Server DB instance either from the console, AWS CLI, or Amazon RDS API. In the create request, you provide the domain identifier ("`d-*`" identifier) that was generated when you created your directory and the name of the role you created. You can also modify an existing SQL Server DB instance to use Windows Authentication by setting the domain and IAM role parameters for the DB instance.

1. Use the Amazon RDS master user credentials to connect to the SQL Server DB instance as you do any other DB instance. Because the DB instance is joined to the AWS Managed Microsoft AD domain, you can provision SQL Server logins and users from the Active Directory users and groups in their domain. (These are known as SQL Server "Windows" logins.) Database permissions are managed through standard SQL Server permissions granted and revoked to these Windows logins. 
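The create step above can be sketched with the AWS CLI. The following is a hedged example, not a definitive command: the instance identifier, credentials, security group, directory ID, and role name are all placeholder values that you replace with your own.

```shell
# Create a SQL Server DB instance that joins an AWS Managed Microsoft AD
# domain at creation time. All identifiers below are placeholders.
aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --engine sqlserver-se \
    --engine-version 15.00.4236.7.v1 \
    --db-instance-class db.m5.large \
    --allocated-storage 100 \
    --license-model license-included \
    --master-username sqladmin \
    --master-user-password 'ChangeMe123!' \
    --vpc-security-group-ids sg-0123456789abcdef0 \
    --domain d-1234567890 \
    --domain-iam-role-name rds-directoryservice-access-role
```

The `--domain` and `--domain-iam-role-name` parameters are what join the instance to the directory; the remaining parameters are ordinary instance settings.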

When you create a domain-connected RDS for SQL Server DB instance using the Amazon RDS console, AWS automatically creates the `rds-directoryservice-access-role` IAM role. This role is essential for managing domain-connected instances and is required for the following operations:
+ Making configuration changes to domain-connected SQL Server instances
+ Managing Active Directory integration settings
+ Performing maintenance operations on domain-joined instances

**Important**  
If you delete the `rds-directoryservice-access-role` IAM role, you can't make changes to your domain-connected SQL Server instance through the Amazon RDS console or API. Attempting to modify the instance results in an error message stating: "You don't have permission to iam:CreateRole. To request access, copy the following text and send it to your AWS administrator."  
This error occurs because Amazon RDS needs to recreate the role to manage the domain connection, but lacks the necessary permissions. Additionally, this error is not logged in CloudTrail, which can make troubleshooting more difficult.

If you accidentally delete the `rds-directoryservice-access-role`, you must have `iam:CreateRole` permissions to recreate it before you can make any changes to your domain-connected SQL Server instance. To recreate the role manually, ensure it has the `AmazonRDSDirectoryServiceAccess` managed policy attached and the appropriate trust relationship that allows the RDS service to assume the role.

# Creating the endpoint for Kerberos authentication
<a name="USER_SQLServerWinAuth.KerberosEndpoint"></a>

Kerberos-based authentication requires that the endpoint be the customer-specified host name, a period, and then the fully qualified domain name (FQDN). The following is an example of an endpoint you might use with Kerberos-based authentication. In this example, the SQL Server DB instance host name is `ad-test` and the domain name is `corp-ad.company.com`. 

```
ad-test.corp-ad.company.com
```
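The endpoint is simply the host name joined to the domain FQDN with a period. The following minimal shell sketch composes it from the example values above:

```shell
# Compose the Kerberos endpoint from the DB instance host name and the
# fully qualified domain name (values from the example above).
host_name="ad-test"
domain_fqdn="corp-ad.company.com"
endpoint="${host_name}.${domain_fqdn}"
echo "$endpoint"    # prints "ad-test.corp-ad.company.com"
```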

If you want to make sure your connection is using Kerberos, run the following query: 

```
SELECT net_transport, auth_scheme 
  FROM sys.dm_exec_connections 
 WHERE session_id = @@SPID;
```

# Setting up Windows Authentication for SQL Server DB instances
<a name="USER_SQLServerWinAuth.SettingUp"></a>

You use AWS Directory Service for Microsoft Active Directory, also called AWS Managed Microsoft AD, to set up Windows Authentication for a SQL Server DB instance. To set up Windows Authentication, take the following steps. 

## Step 1: Create a directory using the AWS Directory Service for Microsoft Active Directory
<a name="USER_SQLServerWinAuth.SettingUp.CreateDirectory"></a>

Directory Service creates a fully managed Microsoft Active Directory in the AWS Cloud. When you create an AWS Managed Microsoft AD directory, Directory Service creates two domain controllers and Domain Name Service (DNS) servers on your behalf. The directory servers are created in two subnets in two different Availability Zones within a VPC. This redundancy helps ensure that your directory remains accessible even if a failure occurs.

 When you create an AWS Managed Microsoft AD directory, Directory Service performs the following tasks on your behalf: 
+ Sets up a Microsoft Active Directory within the VPC. 
+ Creates a directory administrator account with the user name Admin and the specified password. You use this account to manage your directory.
+ Creates a security group for the directory controllers.

When you launch an AWS Directory Service for Microsoft Active Directory, AWS creates an Organizational Unit (OU) that contains all your directory's objects. This OU, which has the NetBIOS name that you typed when you created your directory, is located in the domain root. The domain root is owned and managed by AWS. 

 The *admin* account that was created with your AWS Managed Microsoft AD directory has permissions for the most common administrative activities for your OU: 
+ Create, update, or delete users, groups, and computers. 
+ Add resources to your domain such as file or print servers, and then assign permissions for those resources to users and groups in your OU. 
+ Create additional OUs and containers.
+ Delegate authority. 
+ Create and link group policies. 
+ Restore deleted objects from the Active Directory Recycle Bin. 
+ Run AD and DNS Windows PowerShell modules on the Active Directory Web Service. 

The admin account also has rights to perform the following domain-wide activities: 
+ Manage DNS configurations (add, remove, or update records, zones, and forwarders). 
+ View DNS event logs. 
+ View security event logs. 

**To create a directory with AWS Managed Microsoft AD**

1. In the [Directory Service console](https://console.aws.amazon.com/directoryservicev2/) navigation pane, choose **Directories** and choose **Set up directory**.

1. Choose **AWS Managed Microsoft AD**. This is the only option currently supported for use with Amazon RDS.

1. Choose **Next**.

1. On the **Enter directory information** page, provide the following information:   
**Edition**  
 Choose the edition that meets your requirements.  
**Directory DNS name**  
The fully qualified name for the directory, such as `corp.example.com`. Names longer than 47 characters aren't supported by SQL Server.  
**Directory NetBIOS name**  
An optional short name for the directory, such as `CORP`.   
**Directory description**  
An optional description for the directory.   
**Admin password**  
The password for the directory administrator. The directory creation process creates an administrator account with the user name Admin and this password.   
The directory administrator password can't include the word `admin`. The password is case-sensitive and must be 8–64 characters in length. It must also contain at least one character from three of the following four categories:   
   + Lowercase letters (a-z)
   + Uppercase letters (A-Z)
   + Numbers (0-9)
   + Non-alphanumeric characters (~!@#$%^&*_-+=`|\(){}[]:;"'<>,.?/)   
**Confirm password**  
Retype the administrator password. 

1. Choose **Next**.

1. On the **Choose VPC and subnets** page, provide the following information:  
**VPC**  
Choose the VPC for the directory.  
You can locate the directory and the DB instance in different VPCs, but if you do so, make sure to enable cross-VPC traffic. For more information, see [Step 4: Enable cross-VPC traffic between the directory and the DB instance](#USER_SQLServerWinAuth.SettingUp.VPC-Peering).  
**Subnets**  
Choose the subnets for the directory servers. The two subnets must be in different Availability Zones.

1. Choose **Next**.

1. Review the directory information. If changes are needed, choose **Previous**. When the information is correct, choose **Create directory**.   
![\[Review and create page\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/WinAuth2.png)

It takes several minutes for the directory to be created. When it has been successfully created, the **Status** value changes to **Active**.

To see information about your directory, choose the directory ID in the directory listing. Make a note of the **Directory ID**. You need this value when you create or modify your SQL Server DB instance.

![\[Directory details page\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/WinAuth3.png)
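The administrator password rules described in step 4 (8–64 characters, no "admin" in any letter case, and at least three of the four character categories) can be checked locally before you create the directory. The following is a sketch only; `check_admin_password` is a hypothetical helper, not part of any AWS tooling:

```shell
# Check a candidate directory administrator password against the documented
# rules: 8-64 characters, must not contain "admin" (any case), and must use
# characters from at least three of the four categories.
check_admin_password() {
  pw="$1"
  len=${#pw}
  [ "$len" -ge 8 ] && [ "$len" -le 64 ] || { echo invalid; return; }
  lower=$(printf '%s' "$pw" | tr '[:upper:]' '[:lower:]')
  case "$lower" in *admin*) echo invalid; return ;; esac
  cats=0
  printf '%s' "$pw" | grep -q '[a-z]'        && cats=$((cats + 1))
  printf '%s' "$pw" | grep -q '[A-Z]'        && cats=$((cats + 1))
  printf '%s' "$pw" | grep -q '[0-9]'        && cats=$((cats + 1))
  printf '%s' "$pw" | grep -q '[^A-Za-z0-9]' && cats=$((cats + 1))
  [ "$cats" -ge 3 ] && echo valid || echo invalid
}

check_admin_password 'CorrectHorse9'    # prints "valid" (three categories)
check_admin_password 'Sh0rt!'           # prints "invalid" (too short)
```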


## Step 2: Create the IAM role for use by Amazon RDS
<a name="USER_SQLServerWinAuth.SettingUp.CreateIAMRole"></a>

If you use the console to create your SQL Server DB instance, you can skip this step. If you use the CLI or RDS API to create your SQL Server DB instance, you must create an IAM role that uses the `AmazonRDSDirectoryServiceAccess` managed IAM policy. This role allows Amazon RDS to make calls to the Directory Service for you. 

If you are using a custom policy for joining a domain, rather than using the AWS-managed `AmazonRDSDirectoryServiceAccess` policy, make sure that you allow the `ds:GetAuthorizedApplicationDetails` action. This requirement is effective starting July 2019, due to a change in the Directory Service API.

The following IAM policy, `AmazonRDSDirectoryServiceAccess`, provides access to Directory Service.

**Example IAM policy for providing access to Directory Service**    
****  

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Action": [
            "ds:DescribeDirectories", 
            "ds:AuthorizeApplication", 
            "ds:UnauthorizeApplication",
            "ds:GetAuthorizedApplicationDetails"
        ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
```

We recommend using the [`aws:SourceArn`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn) and [`aws:SourceAccount`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceaccount) global condition context keys in resource-based trust relationships to limit the service's permissions to a specific resource. This is the most effective way to protect against the [confused deputy problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html).

You might use both global condition context keys and have the `aws:SourceArn` value contain the account ID. In this case, the `aws:SourceAccount` value and the account in the `aws:SourceArn` value must use the same account ID when used in the same statement.
+ Use `aws:SourceArn` if you want cross-service access for a single resource.
+ Use `aws:SourceAccount` if you want to allow any resource in that account to be associated with the cross-service use.

In the trust relationship, make sure to use the `aws:SourceArn` global condition context key with the full Amazon Resource Name (ARN) of the resources accessing the role. For Windows Authentication, make sure to include the DB instances, as shown in the following example.

**Example trust relationship with global condition context key for Windows Authentication**    
****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "rds.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:SourceArn": [
                        "arn:aws:rds:Region:my_account_ID:db:db_instance_identifier"
                    ]
                }
            }
        }
    ]
}
```

Create an IAM role using this IAM policy and trust relationship. For more information about creating IAM roles, see [Creating customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-using.html#create-managed-policy-console) in the *IAM User Guide*.
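With the trust relationship saved locally, the role can also be created from the AWS CLI. A hedged sketch, assuming the trust policy shown above is saved as `trust.json` and using a placeholder role name:

```shell
# Create the role with the trust relationship, then attach the managed
# policy that grants Directory Service access. rds-ds-access-role is a
# placeholder role name.
aws iam create-role \
    --role-name rds-ds-access-role \
    --assume-role-policy-document file://trust.json

aws iam attach-role-policy \
    --role-name rds-ds-access-role \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonRDSDirectoryServiceAccess
```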

## Step 3: Create and configure users and groups
<a name="USER_SQLServerWinAuth.SettingUp.CreateUsers"></a>

You can create users and groups with the Active Directory Users and Computers tool, one of the Active Directory Domain Services and Active Directory Lightweight Directory Services tools. Users represent individual people or entities that have access to your directory. Groups are useful for granting or denying privileges to sets of users, rather than applying those privileges to each user individually.

To create users and groups in a Directory Service directory, you must be connected to a Windows EC2 instance that is a member of the Directory Service directory. You must also be logged in as a user that has privileges to create users and groups. For more information, see [Add users and groups (Simple AD and AWS Managed Microsoft AD)](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/creating_ad_users_and_groups.html) in the *AWS Directory Service Administration Guide*.

## Step 4: Enable cross-VPC traffic between the directory and the DB instance
<a name="USER_SQLServerWinAuth.SettingUp.VPC-Peering"></a>

If you plan to locate the directory and the DB instance in the same VPC, skip this step and move on to [Step 5: Create or modify a SQL Server DB instance](#USER_SQLServerWinAuth.SettingUp.CreateModify).

If you plan to locate the directory and the DB instance in different VPCs, configure cross-VPC traffic using VPC peering or [AWS Transit Gateway](https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html).

The following procedure enables traffic between VPCs using VPC peering. Follow the instructions in [What is VPC peering?](https://docs.aws.amazon.com/vpc/latest/peering/Welcome.html) in the *Amazon Virtual Private Cloud Peering Guide*.

**To enable cross-VPC traffic using VPC peering**

1. Set up appropriate VPC routing rules to ensure that network traffic can flow both ways.

1. Ensure that the DB instance's security group can receive inbound traffic from the directory's security group.

1. Ensure that there is no network access control list (ACL) rule to block traffic.

If a different AWS account owns the directory, you must share the directory.

**To share the directory between AWS accounts**

1. Start sharing the directory with the AWS account that the DB instance will be created in by following the instructions in [Tutorial: Sharing your AWS Managed Microsoft AD directory for seamless EC2 domain-join](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_tutorial_directory_sharing.html) in the *Directory Service Administration Guide*.

1. Sign in to the Directory Service console using the account for the DB instance, and ensure that the domain has the `SHARED` status before proceeding.

1. While signed into the Directory Service console using the account for the DB instance, note the **Directory ID** value. You use this directory ID to join the DB instance to the domain.
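If you prefer the AWS CLI over the console for steps 2 and 3, the directory ID and share status can be checked from the DB instance's account. This is a sketch; the `ShareStatus` field is only populated for shared directories:

```shell
# From the DB instance's account, list visible directories with their
# IDs and share status; look for the SHARED status.
aws ds describe-directories \
    --query "DirectoryDescriptions[].{Id:DirectoryId,Name:Name,ShareStatus:ShareStatus}" \
    --output table
```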

## Step 5: Create or modify a SQL Server DB instance
<a name="USER_SQLServerWinAuth.SettingUp.CreateModify"></a>

Create or modify a SQL Server DB instance for use with your directory. You can use the console, CLI, or RDS API to associate a DB instance with a directory. You can do this in one of the following ways:
+ Create a new SQL Server DB instance using the console, the [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) CLI command, or the [CreateDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html) RDS API operation.

  For instructions, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).
+ Modify an existing SQL Server DB instance using the console, the [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) CLI command, or the [ModifyDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) RDS API operation.

  For instructions, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).
+ Restore a SQL Server DB instance from a DB snapshot using the console, the [restore-db-instance-from-db-snapshot](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-db-snapshot.html) CLI command, or the [RestoreDBInstanceFromDBSnapshot](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromDBSnapshot.html) RDS API operation.

  For instructions, see [Restoring to a DB instance](USER_RestoreFromSnapshot.md).
+ Restore a SQL Server DB instance to a point-in-time using the console, the [restore-db-instance-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-to-point-in-time.html) CLI command, or the [RestoreDBInstanceToPointInTime](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceToPointInTime.html) RDS API operation.

  For instructions, see [Restoring a DB instance to a specified time for Amazon RDS](USER_PIT.md).

 Windows Authentication is only supported for SQL Server DB instances in a VPC. 

 For the DB instance to be able to use the domain directory that you created, the following is required: 
+  For **Directory**, you must choose the domain identifier (`d-ID`) generated when you created the directory.
+  Make sure that the VPC security group has an outbound rule that lets the DB instance communicate with the directory.

![\[Microsoft SQL Server Windows Authentication directory\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/WinAuth1.png)


When you use the AWS CLI, the following parameters are required for the DB instance to be able to use the directory that you created:
+ For the `--domain` parameter, use the domain identifier (`d-ID`) generated when you created the directory.
+ For the `--domain-iam-role-name` parameter, use the role that you created that uses the managed IAM policy `AmazonRDSDirectoryServiceAccess`.

For example, the following CLI command modifies a DB instance to use a directory.

For Linux, macOS, or Unix:

```
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --domain d-ID \
    --domain-iam-role-name role-name
```

For Windows:

```
aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --domain d-ID ^
    --domain-iam-role-name role-name
```

**Important**  
If you modify a DB instance to enable Kerberos authentication, reboot the DB instance after making the change.
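The reboot can also be issued from the AWS CLI; `mydbinstance` is a placeholder identifier:

```shell
# Reboot the DB instance so that the Kerberos changes take effect.
aws rds reboot-db-instance \
    --db-instance-identifier mydbinstance
```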

## Step 6: Create Windows Authentication SQL Server logins
<a name="USER_SQLServerWinAuth.CreateLogins"></a>

Use the Amazon RDS master user credentials to connect to the SQL Server DB instance as you do any other DB instance. Because the DB instance is joined to the AWS Managed Microsoft AD domain, you can provision SQL Server logins and users. You do this from the Active Directory users and groups in your domain. Database permissions are managed through standard SQL Server permissions granted and revoked to these Windows logins.

For an Active Directory user to authenticate with SQL Server, a SQL Server Windows login must exist for the user or a group that the user is a member of. Fine-grained access control is handled through granting and revoking permissions on these SQL Server logins. A user that doesn't have a SQL Server login or belong to a group with such a login can't access the SQL Server DB instance.

The ALTER ANY LOGIN permission is required to create an Active Directory SQL Server login. If you haven't created any logins with this permission, connect as the DB instance's master user using SQL Server Authentication.

Run a data definition language (DDL) command such as the following example to create a SQL Server login for an Active Directory user or group.

**Note**  
Specify users and groups using the pre-Windows 2000 login name in the format `domainName\login_name`. You can't use a user principal name (UPN) in the format *`login_name`*`@`*`DomainName`*.  
You can only create a Windows Authentication login on an RDS for SQL Server instance by using T-SQL statements. You can't use SQL Server Management Studio to create a Windows Authentication login.

```
USE [master]
GO
CREATE LOGIN [mydomain\myuser] FROM WINDOWS WITH DEFAULT_DATABASE = [master], DEFAULT_LANGUAGE = [us_english];
GO
```

For more information, see [CREATE LOGIN (Transact-SQL)](https://msdn.microsoft.com/en-us/library/ms189751.aspx) in the Microsoft Developer Network documentation.

Users (both humans and applications) from your domain can now connect to the RDS for SQL Server instance from a domain-joined client machine using Windows authentication.

# Managing a DB instance in a domain
<a name="USER_SQLServerWinAuth.Managing"></a>

 You can use the console, AWS CLI, or the Amazon RDS API to manage your DB instance and its relationship with your domain. For example, you can move the DB instance into, out of, or between domains. 

 For example, using the Amazon RDS API, you can do the following: 
+  To reattempt a domain join for a failed membership, use the [ModifyDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) API operation and specify the current membership's directory ID. 
+  To update the IAM role name for membership, use the `ModifyDBInstance` API operation and specify the current membership's directory ID and the new IAM role. 
+  To remove a DB instance from a domain, use the `ModifyDBInstance` API operation and specify `none` as the domain parameter. 
+  To move a DB instance from one domain to another, use the `ModifyDBInstance` API operation and specify the domain identifier of the new domain as the domain parameter. 
+  To list membership for each DB instance, use the [DescribeDBInstances](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/DescribeDBInstances.html) API operation. 
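As a sketch of the last item, the following AWS CLI command lists the domain membership for each DB instance. The `DomainMemberships` fields used in the query come from the `DescribeDBInstances` output:

```shell
# List the domain (directory ID) and membership status for each DB instance.
aws rds describe-db-instances \
    --query "DBInstances[].{DB:DBInstanceIdentifier,Domain:DomainMemberships[0].Domain,Status:DomainMemberships[0].Status}" \
    --output table
```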

## Understanding domain membership
<a name="USER_SQLServerWinAuth.Understanding"></a>

 After you create or modify your DB instance, the instance becomes a member of the domain. The AWS console indicates the status of the domain membership for the DB instance. The status of the DB instance can be one of the following: 
+  **joined** – The instance is a member of the domain.
+  **joining** – The instance is in the process of becoming a member of the domain.
+  **pending-join** – The instance membership is pending.
+  **pending-maintenance-join** – AWS will attempt to make the instance a member of the domain during the next scheduled maintenance window.
+  **pending-removal** – The removal of the instance from the domain is pending.
+  **pending-maintenance-removal** – AWS will attempt to remove the instance from the domain during the next scheduled maintenance window.
+  **failed** – A configuration problem has prevented the instance from joining the domain. Check and fix your configuration before reissuing the instance modify command.
+  **removing** – The instance is being removed from the domain.

A request to become a member of a domain can fail because of a network connectivity issue or an incorrect IAM role. For example, an attempt to create a DB instance or modify an existing instance might fail to join the instance to the domain. In this case, either reissue the command to create or modify the DB instance, or modify the newly created instance to join the domain.

# Connecting to SQL Server with Windows authentication
<a name="USER_SQLServerWinAuth.Connecting"></a>

To connect to SQL Server with Windows Authentication, you must be logged into a domain-joined computer as a domain user. After launching SQL Server Management Studio, choose **Windows Authentication** as the authentication type, as shown following.

![\[Connect to SQL Server using Windows Authentication\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/WinAuth4.png)


## Restoring a SQL Server DB instance and then adding it to a domain
<a name="USER_SQLServerWinAuth.Restore"></a>

You can restore a DB snapshot or do point-in-time recovery (PITR) for a SQL Server DB instance and then add it to a domain. Once the DB instance is restored, modify the instance using the process explained in [Step 5: Create or modify a SQL Server DB instance](USER_SQLServerWinAuth.SettingUp.md#USER_SQLServerWinAuth.SettingUp.CreateModify) to add the DB instance to a domain.
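The restore-then-modify sequence can be sketched with the AWS CLI; all identifiers below are placeholders:

```shell
# Restore from a snapshot, then join the restored instance to the domain.
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier mydbinstance-restored \
    --db-snapshot-identifier mydbsnapshot

aws rds modify-db-instance \
    --db-instance-identifier mydbinstance-restored \
    --domain d-1234567890 \
    --domain-iam-role-name rds-directoryservice-access-role
```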

# Upgrades of the Microsoft SQL Server DB engine
<a name="USER_UpgradeDBInstance.SQLServer"></a>

When Amazon RDS supports a new version of a database engine, you can upgrade your DB instances to the new version. There are two kinds of upgrades for SQL Server DB instances: major version upgrades and minor version upgrades. 

*Major version upgrades* can contain database changes that are not backward-compatible with existing applications. As a result, you must *manually* perform major version upgrades of your DB instances. You can initiate a major version upgrade by modifying your DB instance. However, before you perform a major version upgrade, we recommend that you test the upgrade by following the steps described in [Testing an RDS for SQL Server upgrade](USER_UpgradeDBInstance.SQLServer.UpgradeTesting.md). 

*Minor version upgrades* contain only changes that are backward-compatible with existing applications. You can upgrade the minor version for your DB instance in two ways:
+ *Manually* – Modify your DB instance to initiate the upgrade
+ *Automatically* – Enable automatic minor version upgrades for your DB instance

When you enable automatic minor version upgrades, RDS for SQL Server automatically upgrades your database instance during scheduled maintenance windows when critical security updates are available in a newer minor version.

For minor engine versions after `16.00.4120.1`, `15.00.4365.2`, `14.00.3465.1`, and `13.00.6435.1`, the following security protocols are disabled by default:
+ `rds.tls10` (TLS 1.0 protocol)
+ `rds.tls11` (TLS 1.1 protocol)
+ `rds.rc4` (RC4 cipher)
+ `rds.curve25519` (Curve25519 encryption)
+ `rds.3des168` (Triple DES encryption)

For earlier engine versions, Amazon RDS enables these security protocols by default.
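If a legacy client still requires one of these protocols, you can re-enable it in a custom DB parameter group. A hedged sketch (the parameter group name is a placeholder; these are static parameters, so the change takes effect after a reboot):

```shell
# Re-enable TLS 1.0 in a custom parameter group (not recommended unless a
# legacy client requires it). mysqlserver-params is a placeholder name.
aws rds modify-db-parameter-group \
    --db-parameter-group-name mysqlserver-params \
    --parameters "ParameterName=rds.tls10,ParameterValue=enabled,ApplyMethod=pending-reboot"
```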

The following example shows part of the output of the [describe-db-engine-versions](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-engine-versions.html) AWS CLI command. The `ValidUpgradeTarget` field lists the engine versions that you can upgrade a DB instance to.

```
...

"ValidUpgradeTarget": [
    {
        "Engine": "sqlserver-se",
        "EngineVersion": "14.00.3281.6.v1",
        "Description": "SQL Server 2017 14.00.3281.6.v1",
        "AutoUpgrade": false,
        "IsMajorVersionUpgrade": false
    }
...
```

For more information about performing upgrades, see [Upgrading a SQL Server DB instance](#USER_UpgradeDBInstance.SQLServer.Upgrading). For information about what SQL Server versions are available on Amazon RDS, see [Amazon RDS for Microsoft SQL Server](CHAP_SQLServer.md).

Amazon RDS also supports upgrade rollout policy to manage automatic minor version upgrades across multiple database resources and AWS accounts. For more information, see [Using AWS Organizations upgrade rollout policy for automatic minor version upgrades](RDS.Maintenance.AMVU.UpgradeRollout.md).

**Topics**
+ [Major version upgrades for RDS for SQL Server](USER_UpgradeDBInstance.SQLServer.Major.md)
+ [Considerations for SQL Server upgrades](USER_UpgradeDBInstance.SQLServer.Considerations.md)
+ [Testing an RDS for SQL Server upgrade](USER_UpgradeDBInstance.SQLServer.UpgradeTesting.md)
+ [Upgrading a SQL Server DB instance](#USER_UpgradeDBInstance.SQLServer.Upgrading)
+ [Upgrading deprecated DB instances before support ends](#USER_UpgradeDBInstance.SQLServer.DeprecatedVersions)

# Major version upgrades for RDS for SQL Server
<a name="USER_UpgradeDBInstance.SQLServer.Major"></a>

Amazon RDS currently supports the following major version upgrades to a Microsoft SQL Server DB instance.

You can upgrade your existing DB instance to SQL Server 2017, 2019, or 2022 from any version except SQL Server 2008. To upgrade from SQL Server 2008, first upgrade to one of the other versions.


****  

| Current version | Supported upgrade versions | 
| --- | --- | 
|  SQL Server 2019  |  SQL Server 2022  | 
|  SQL Server 2017  |  SQL Server 2022, SQL Server 2019  | 
|  SQL Server 2016  |  SQL Server 2022, SQL Server 2019, SQL Server 2017  | 

You can use an AWS CLI query, such as the following example, to find the available upgrades for a particular database engine version.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds describe-db-engine-versions \
    --engine sqlserver-se \
    --engine-version 14.00.3281.6.v1 \
    --query "DBEngineVersions[*].ValidUpgradeTarget[*].{EngineVersion:EngineVersion}" \
    --output table
```
For Windows:  

```
aws rds describe-db-engine-versions ^
    --engine sqlserver-se ^
    --engine-version 14.00.3281.6.v1 ^
    --query "DBEngineVersions[*].ValidUpgradeTarget[*].{EngineVersion:EngineVersion}" ^
    --output table
```
The output shows that you can upgrade version 14.00.3281.6 to the latest available SQL Server 2017 or 2019 versions.  

```
--------------------------
|DescribeDBEngineVersions|
+------------------------+
|      EngineVersion     |
+------------------------+
|  14.00.3294.2.v1       |
|  14.00.3356.20.v1      |
|  14.00.3381.3.v1       |
|  14.00.3401.7.v1       | 
|  14.00.3421.10.v1      |
|  14.00.3451.2.v1       |
|  15.00.4043.16.v1      |
|  15.00.4073.23.v1      |
|  15.00.4153.1.v1       |
|  15.00.4198.2.v1       |
|  15.00.4236.7.v1       |
+------------------------+
```

## Database compatibility level
<a name="USER_UpgradeDBInstance.SQLServer.Major.Compatibility"></a>

You can use Microsoft SQL Server database compatibility levels to adjust some database behaviors to mimic previous versions of SQL Server. For more information, see [Compatibility level](https://msdn.microsoft.com/en-us/library/bb510680.aspx) in the Microsoft documentation. When you upgrade your DB instance, all existing databases remain at their original compatibility level. 

You can change the compatibility level of a database by using the ALTER DATABASE command. For example, to change a database named `customeracct` to be compatible with SQL Server 2016, issue the following command: 

```
ALTER DATABASE customeracct SET COMPATIBILITY_LEVEL = 130
```
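Because upgraded databases keep their original compatibility level, it can help to check the current level of every database before and after an upgrade. The following query uses the standard `sys.databases` catalog view:

```
SELECT name, compatibility_level
FROM sys.databases
ORDER BY name;
```

For example, a database still at level 130 behaves like SQL Server 2016 even after the instance is upgraded to a newer engine version.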

# Considerations for SQL Server upgrades
<a name="USER_UpgradeDBInstance.SQLServer.Considerations"></a>

Amazon RDS takes two DB snapshots during the upgrade process. The first DB snapshot is of the DB instance before any upgrade changes have been made. The second DB snapshot is taken after the upgrade finishes.

**Note**  
Amazon RDS only takes DB snapshots if you have set the backup retention period for your DB instance to a number greater than 0. To change your backup retention period, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

After an upgrade is completed, you can't revert to the previous version of the database engine. If you want to return to the previous version, restore from the DB snapshot that was taken before the upgrade to create a new DB instance. 

During a minor or major version upgrade of SQL Server, the **Free Storage Space** and **Disk Queue Depth** metrics will display `-1`. After the upgrade is completed, both metrics will return to normal.

Before you upgrade your SQL Server instance, review the following information.

**Topics**
+ [Best practices before initiating an upgrade](#USER_UpgradeDBInstance.SQLServer.BestPractices)
+ [Multi-AZ considerations](#USER_UpgradeDBInstance.SQLServer.MAZ)
+ [Read replica considerations](#USER_UpgradeDBInstance.SQLServer.readreplica)
+ [Option group considerations](#USER_UpgradeDBInstance.SQLServer.OGPG.OG)
+ [Parameter group considerations](#USER_UpgradeDBInstance.SQLServer.OGPG.PG)

## Best practices before initiating an upgrade
<a name="USER_UpgradeDBInstance.SQLServer.BestPractices"></a>

Before starting the upgrade process, take the following preparatory steps to improve upgrade performance and minimize potential issues:

Timing and workload management  
+ Schedule upgrades during low transaction volume periods.
+ Minimize write operations during the upgrade window.
This allows Amazon RDS to complete upgrades faster by reducing the number of transaction log backup files that RDS needs to restore during secondary-to-primary pairing.

Transaction management  
+ Identify and monitor long-running transactions.
+ Ensure all critical transactions are committed before starting the upgrade.
+ Prevent long-running transactions during the upgrade window.
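To help identify long-running transactions before the upgrade window, you can join the transaction DMVs to their sessions. This is a minimal sketch; adjust the five-minute threshold to suit your workload:

```
-- Active transactions that have been open longer than 5 minutes
SELECT st.session_id,
       at.transaction_id,
       at.transaction_begin_time,
       DATEDIFF(MINUTE, at.transaction_begin_time, GETDATE()) AS minutes_open
FROM sys.dm_tran_active_transactions AS at
JOIN sys.dm_tran_session_transactions AS st
    ON at.transaction_id = st.transaction_id
WHERE DATEDIFF(MINUTE, at.transaction_begin_time, GETDATE()) > 5
ORDER BY at.transaction_begin_time;
```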

Log file optimization  
Review and optimize transaction log files:  
+ Shrink oversized log files.
+ Reduce high log consumption patterns.
+ Manage Virtual Log Files (VLFs).
+ Maintain adequate free space for normal operations.
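As a sketch of the log checks above: on SQL Server 2016 SP2 and later you can count virtual log files with the `sys.dm_db_log_info` function, and shrink an oversized log with `DBCC SHRINKFILE`. The logical file name `MyDatabase_log` is a placeholder:

```
-- Count virtual log files (VLFs) per database; very high counts slow recovery
SELECT d.name, COUNT(*) AS vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.name
ORDER BY vlf_count DESC;

-- Shrink an oversized transaction log to 1,024 MB (run in the target database)
DBCC SHRINKFILE (MyDatabase_log, 1024);
```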

## Multi-AZ considerations
<a name="USER_UpgradeDBInstance.SQLServer.MAZ"></a>

Amazon RDS supports Multi-AZ deployments for DB instances running Microsoft SQL Server by using SQL Server Database Mirroring (DBM) or Always On Availability Groups (AGs). For more information, see [Multi-AZ deployments for Amazon RDS for Microsoft SQL Server](USER_SQLServerMultiAZ.md).

In a Multi-AZ deployment (Mirroring/AlwaysOn), when an upgrade is requested, RDS follows a rolling upgrade strategy for the primary and secondary instances. Rolling upgrades ensure at least one instance is available for transactions while the secondary instance is upgraded. The outage is expected to only last the duration of a failover.

During the upgrade, RDS removes the secondary instance from the Multi-AZ configuration, performs an upgrade of the secondary instance, and restores any transaction log backups from the primary taken during the time it was disconnected. After all the log backups are restored, RDS joins the upgraded secondary to the primary. When all the databases are in a synchronized state, RDS performs a failover to the upgraded secondary instance. Once the failover is completed, RDS proceeds with upgrading the old primary instance, restores any transaction log backups, and pairs it with the new primary.

To minimize the failover duration, we recommend using the Always On AG availability group listener endpoint with client libraries that support the `MultiSubnetFailover` connection option in the connection string. When you use the availability group listener endpoint, failover times are typically less than 10 seconds. However, this duration doesn't include any additional crash recovery time.
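For example, a client connection string that targets the listener endpoint and opts into fast multi-subnet failover might look like the following. The listener DNS name here is a placeholder:

```
Server=tcp:my-ag-listener.example.internal,1433;Database=MyDatabase;MultiSubnetFailover=True;Encrypt=True;
```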

## Read replica considerations
<a name="USER_UpgradeDBInstance.SQLServer.readreplica"></a>

During a database version upgrade, Amazon RDS upgrades all of your read replicas along with the primary DB instance. Amazon RDS does not support database version upgrades on the read replicas separately. For more information on read replicas, see [Working with read replicas for Microsoft SQL Server in Amazon RDS](SQLServer.ReadReplicas.md).

When you perform a database version upgrade of the primary DB instance, all of its read replicas are also automatically upgraded. Amazon RDS upgrades all of the read replicas simultaneously before upgrading the primary DB instance. Read replicas might not be available until the database version upgrade on the primary DB instance is complete.

## Option group considerations
<a name="USER_UpgradeDBInstance.SQLServer.OGPG.OG"></a>

If your DB instance uses a custom DB option group, in some cases Amazon RDS can't automatically assign your DB instance a new option group. For example, when you upgrade to a new major version, you must specify a new option group. We recommend that you create a new option group, and add the same options to it as your existing custom option group.

For more information, see [Creating an option group](USER_WorkingWithOptionGroups.md#USER_WorkingWithOptionGroups.Create) or [Copying an option group](USER_WorkingWithOptionGroups.md#USER_WorkingWithOptionGroups.Copy).

## Parameter group considerations
<a name="USER_UpgradeDBInstance.SQLServer.OGPG.PG"></a>

If your DB instance uses a custom DB parameter group:
+ Amazon RDS automatically reboots the DB instance after an upgrade.
+ In some cases, RDS can't automatically assign a new parameter group to your DB instance.

  For example, when you upgrade to a new major version, you must specify a new parameter group. We recommend that you create a new parameter group, and configure the parameters as in your existing custom parameter group.

For more information, see [Creating a DB parameter group in Amazon RDS](USER_WorkingWithParamGroups.Creating.md) or [Copying a DB parameter group in Amazon RDS](USER_WorkingWithParamGroups.Copying.md).

# Testing an RDS for SQL Server upgrade
<a name="USER_UpgradeDBInstance.SQLServer.UpgradeTesting"></a>

Before you perform a major version upgrade on your DB instance, you should thoroughly test your database, and all applications that access the database, for compatibility with the new version. We recommend that you use the following procedure.

**To test a major version upgrade**

1. Review [Upgrade SQL Server](https://docs.microsoft.com/en-us/sql/database-engine/install-windows/upgrade-sql-server) in the Microsoft documentation for the new version of the database engine to see if there are compatibility issues that might affect your database or applications.

1. If your DB instance uses a custom option group, create a new option group compatible with the new version you are upgrading to. For more information, see [Option group considerations](USER_UpgradeDBInstance.SQLServer.Considerations.md#USER_UpgradeDBInstance.SQLServer.OGPG.OG).

1. If your DB instance uses a custom parameter group, create a new parameter group compatible with the new version you are upgrading to. For more information, see [Parameter group considerations](USER_UpgradeDBInstance.SQLServer.Considerations.md#USER_UpgradeDBInstance.SQLServer.OGPG.PG).

1. Create a DB snapshot of the DB instance to be upgraded. For more information, see [Creating a DB snapshot for a Single-AZ DB instance for Amazon RDS](USER_CreateSnapshot.md).

1. Restore the DB snapshot to create a new test DB instance. For more information, see [Restoring to a DB instance](USER_RestoreFromSnapshot.md).

1. Modify this new test DB instance to upgrade it to the new version, by using one of the following methods:
   + [Console](USER_UpgradeDBInstance.Upgrading.md#USER_UpgradeDBInstance.Upgrading.Manual.Console)
   + [AWS CLI](USER_UpgradeDBInstance.Upgrading.md#USER_UpgradeDBInstance.Upgrading.Manual.CLI)
   + [RDS API](USER_UpgradeDBInstance.Upgrading.md#USER_UpgradeDBInstance.Upgrading.Manual.API)

1. Evaluate the storage used by the upgraded instance to determine if the upgrade requires additional storage. 

1. Run as many of your quality assurance tests against the upgraded DB instance as needed to ensure that your database and application work correctly with the new version. Implement any new tests needed to evaluate the impact of any compatibility issues you identified in step 1. Test all stored procedures and functions. Direct test versions of your applications to the upgraded DB instance. 

1. If all tests pass, then perform the upgrade on your production DB instance. We recommend that you do not allow write operations to the DB instance until you confirm that everything is working correctly. 

## Upgrading a SQL Server DB instance
<a name="USER_UpgradeDBInstance.SQLServer.Upgrading"></a>

For information about manually or automatically upgrading a SQL Server DB instance, see the following:
+ [Upgrading a DB instance engine version](USER_UpgradeDBInstance.Upgrading.md)
+ [Best practices for upgrading SQL Server 2008 R2 to SQL Server 2016 on Amazon RDS for SQL Server](https://aws.amazon.com/blogs/database/best-practices-for-upgrading-sql-server-2008-r2-to-sql-server-2016-on-amazon-rds-for-sql-server/)

**Important**  
If you have any snapshots that are encrypted using AWS KMS, we recommend that you initiate an upgrade before support ends. 

## Upgrading deprecated DB instances before support ends
<a name="USER_UpgradeDBInstance.SQLServer.DeprecatedVersions"></a>

After a major version is deprecated, you can't install it on new DB instances. RDS will try to automatically upgrade all existing DB instances. 

If you need to restore a deprecated DB instance, you can do point-in-time recovery (PITR) or restore a snapshot. Doing this gives you temporary access to a DB instance that uses the version that is being deprecated. However, after a major version is fully deprecated, these DB instances will also be automatically upgraded to a supported version. 

# Working with storage in RDS for SQL Server
<a name="Appendix.SQLServer.CommonDBATasks.DatabaseStorage"></a>

With RDS for SQL Server, you can attach up to three additional volumes to your RDS for SQL Server instance, each mapped to a unique Windows drive letter. This allows you to distribute database files across multiple volumes beyond the default `D:` drive. When you add a storage volume, you get enhanced flexibility for database file management and storage optimization.

Benefits include:
+ **Flexible file distribution** – Distribute database data files and log files across multiple volumes for improved I/O performance.
+ **Storage optimization** – Use different storage types and configurations for different workload requirements.
+ **Scalability** – Add storage capacity without modifying existing volumes.

**Topics**
+ [Considerations for using additional storage volumes with RDS for SQL Server](#SQLServer.ASV.Considerations)
+ [Add, remove, or modify storage volumes with RDS for SQL Server](#SQLServer.ASV.Management)
+ [Restore operations for additional storage volumes with RDS for SQL Server](#SQLServer.ASV.Restore)
+ [Use cases for additional storage volumes with RDS for SQL Server](#SQLServer.ASV.UseCases)

## Considerations for using additional storage volumes with RDS for SQL Server
<a name="SQLServer.ASV.Considerations"></a>

Take note of the following features and limitations when using additional storage volumes with RDS for SQL Server:
+ You can only add storage volumes on SQL Server Standard Edition (SE), Enterprise Edition (EE), and Developer Edition (DEV-EE).
+ You can add up to 3 additional storage volumes per instance.
+ Volume names are automatically mapped to Windows drive letters as follows:
  + `rdsdbdata2` – `H:` drive
  + `rdsdbdata3` – `I:` drive
  + `rdsdbdata4` – `J:` drive
+ TempDB files continue to use the `T:` drive when using NVMe instance storage. SQL Server Audit files and Microsoft Business Intelligence (MSBI) files remain on the `D:` drive.
+ You can only add General Purpose SSD (gp3) and Provisioned IOPS SSD (io2) storage types.
+ The minimum storage size of additional storage volumes is the same as the limit set for the default `D:` drive. The maximum storage size for your DB instance is 256 TiB total across all volumes.
+ Adding storage volumes to instances with read replicas or to read replica instances isn't supported.
+ Adding storage volumes to instances enabled for cross-region automated backup isn't supported.
+ Configuring additional storage volumes for storage autoscaling isn't supported.
+ Moving files between volumes after creation isn't supported.
+ You can't delete the `D:` volume, but you can delete other storage volumes as long as they're empty.
+ Modifying the size of existing volumes during snapshot restore or point-in-time recovery (PITR) isn't supported. However, you can add new storage volumes during restore operations.

## Add, remove, or modify storage volumes with RDS for SQL Server
<a name="SQLServer.ASV.Management"></a>

You can add, modify, and remove additional storage volumes using the AWS CLI or AWS Management Console. All operations use the `modify-db-instance` API operation with the `additional-storage-volumes` parameter.

**Important**  
Adding or removing additional storage volumes creates a backup pending action and a point-in-time restore blackout window. This window closes when the backup workflow completes.

**Topics**
+ [Adding storage volumes](#SQLServer.ASV.Adding)
+ [Scaling additional storage volumes](#SQLServer.ASV.Scaling)
+ [Removing additional storage volumes](#SQLServer.ASV.Removing)

### Adding storage volumes
<a name="SQLServer.ASV.Adding"></a>

You can add up to three storage volumes beyond the default `D:` drive. To add a new storage volume to your RDS for SQL Server instance, use the `modify-db-instance` command with the `additional-storage-volumes` parameter.

The following example adds a new 4,000 GiB General Purpose SSD (gp3) volume named `rdsdbdata4`.

```
aws rds modify-db-instance \
  --db-instance-identifier my-sql-server-instance \
  --region us-east-1 \
  --additional-storage-volumes '[{"VolumeName":"rdsdbdata4","StorageType":"gp3","AllocatedStorage":4000}]' \
  --apply-immediately
```

### Scaling additional storage volumes
<a name="SQLServer.ASV.Scaling"></a>

You can modify any storage setting for your additional volumes except for storage size. The following example modifies the IOPS setting for the `rdsdbdata2` volume.

```
aws rds modify-db-instance \
  --db-instance-identifier my-sql-server-instance \
  --region us-east-1 \
  --additional-storage-volumes '[{"VolumeName":"rdsdbdata2","IOPS":4000}]' \
  --apply-immediately
```

### Removing additional storage volumes
<a name="SQLServer.ASV.Removing"></a>

You can't delete the `D:` volume, but you can delete other storage volumes when they're empty.

**Warning**  
Before you remove an additional storage volume, make sure that no database files are stored on the volume.
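One way to confirm that a volume is empty is to query `sys.master_files` for any files whose physical path is on that drive. For example, before removing `rdsdbdata4` (the `J:` drive):

```
-- List database files located on the J: drive; the result should be empty
-- before you remove the rdsdbdata4 volume
SELECT DB_NAME(database_id) AS database_name,
       name AS logical_name,
       physical_name
FROM sys.master_files
WHERE physical_name LIKE 'J:%';
```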

The following example removes the `rdsdbdata4` volume.

```
aws rds modify-db-instance \
  --db-instance-identifier my-sql-server-instance \
  --region us-east-1 \
  --additional-storage-volumes '[{"VolumeName":"rdsdbdata4","SetForDelete":true}]' \
  --apply-immediately
```

## Restore operations for additional storage volumes with RDS for SQL Server
<a name="SQLServer.ASV.Restore"></a>

When you restore your database, you can add storage volumes. You can also modify the storage settings of existing volumes.

**Topics**
+ [Snapshot restore](#SQLServer.ASV.SnapshotRestore)
+ [Point-in-time recovery](#SQLServer.ASV.PITR)
+ [Native database restore](#SQLServer.ASV.NativeRestore)

### Snapshot restore
<a name="SQLServer.ASV.SnapshotRestore"></a>

When restoring from a snapshot, you can add new additional storage volumes or modify the IOPS, throughput, and storage type settings of existing volumes.

The following example restores a DB instance from a snapshot and modifies the IOPS setting for the `rdsdbdata2` volume:

```
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier my-restored-instance \
  --db-snapshot-identifier my-snapshot \
  --region us-east-1 \
  --additional-storage-volumes '[{"VolumeName":"rdsdbdata2","IOPS":5000}]'
```

### Point-in-time recovery
<a name="SQLServer.ASV.PITR"></a>

During point-in-time recovery (PITR), you can add new additional storage volumes with custom configurations.

The following example performs PITR and adds a new 5,000 GiB General Purpose SSD (gp3) volume:

```
aws rds restore-db-instance-to-point-in-time \
  --source-db-instance-identifier my-source-instance \
  --target-db-instance-identifier my-pitr-instance \
  --use-latest-restorable-time \
  --region us-east-1 \
  --additional-storage-volumes '[{"VolumeName":"rdsdbdata4","StorageType":"gp3","AllocatedStorage":5000,"IOPS":5000,"StorageThroughput":200}]'
```

### Native database restore
<a name="SQLServer.ASV.NativeRestore"></a>

You can use the `rds_restore_database` stored procedure to restore databases to specific additional storage volumes. Two new parameters support volume selection:

**`data_file_volume`**  
Specifies the drive letter for database data files

**`log_file_volume`**  
Specifies the drive letter for database log files

The following example restores a database with data files on the `H:` drive and log files on the `I:` drive:

```
EXEC msdb.dbo.rds_restore_database    
    @restore_db_name='my_database',
    @s3_arn_to_restore_from='arn:aws:s3:::my-bucket/backup-file.bak',
    @data_file_volume='H:',
    @log_file_volume='I:';
```

If you don't specify volume parameters, or if you specify the `D:` drive for both parameters, the database files are restored to the default `D:` drive:

```
EXEC msdb.dbo.rds_restore_database    
    @restore_db_name='my_database',
    @s3_arn_to_restore_from='arn:aws:s3:::my-bucket/backup-file.bak';
```

## Use cases for additional storage volumes with RDS for SQL Server
<a name="SQLServer.ASV.UseCases"></a>

Additional storage volumes support various database management scenarios. The following sections describe common use cases and implementation approaches.

**Topics**
+ [Creating databases on additional storage volumes](#SQLServer.ASV.NewDatabase)
+ [Extending storage capacity](#SQLServer.ASV.ExtendStorage)
+ [Moving databases between volumes](#SQLServer.ASV.MoveDatabase)
+ [Archiving data to cost-effective storage](#SQLServer.ASV.ArchiveData)

### Creating databases on additional storage volumes
<a name="SQLServer.ASV.NewDatabase"></a>

You can create new databases directly on additional storage volumes using standard SQL Server `CREATE DATABASE` statements.

The following example creates a database with data files on the `H:` drive and log files on the `I:` drive:

```
CREATE DATABASE MyDatabase
ON (
    NAME = 'MyDatabase_Data',
    FILENAME = 'H:\rdsdbdata\data\MyDatabase_Data.mdf',
    SIZE = 100MB,
    FILEGROWTH = 10MB
)
LOG ON (
    NAME = 'MyDatabase_Log',
    FILENAME = 'I:\rdsdbdata\data\MyDatabase_Log.ldf',
    SIZE = 10MB,
    FILEGROWTH = 10%
);
```

### Extending storage capacity
<a name="SQLServer.ASV.ExtendStorage"></a>

When the default `D:` drive reaches its maximum capacity, you can add additional storage volumes, scale existing volumes, and create new data files or log files on the new volumes.

**To extend storage capacity**

1. Add a storage volume to your instance using the `modify-db-instance` command.

1. Add a new data file to the additional storage volume:

   ```
   ALTER DATABASE MyDatabase
   ADD FILE (
       NAME = 'MyDatabase_Data2',
       FILENAME = 'H:\rdsdbdata\data\MyDatabase_Data2.ndf',
       SIZE = 500MB,
       FILEGROWTH = 50MB
   );
   ```

### Moving databases between volumes
<a name="SQLServer.ASV.MoveDatabase"></a>

To move a database to a different volume, use the backup and restore approach with the `rds_backup_database` and `rds_restore_database` stored procedures. For more information, see [Using native backup and restore](SQLServer.Procedural.Importing.Native.Using.md).

**To move a database to a different volume**

1. Back up the database using `rds_backup_database`:

   ```
   EXEC msdb.dbo.rds_backup_database 
       @source_db_name='MyDatabase',
       @s3_arn_to_backup_to='arn:aws:s3:::my-bucket/database-backup.bak';
   ```

1. Restore the database to the target volume:

   ```
   EXEC msdb.dbo.rds_restore_database    
       @restore_db_name='MyDatabase_New',
       @s3_arn_to_restore_from='arn:aws:s3:::my-bucket/database-backup.bak',
       @data_file_volume='H:',
       @log_file_volume='I:';
   ```

1. Drop the database from your old drive to release the space. For more information, see [Dropping a database in an Amazon RDS for Microsoft SQL Server DB instance](Appendix.SQLServer.CommonDBATasks.DropMirrorDB.md).

### Archiving data to cost-effective storage
<a name="SQLServer.ASV.ArchiveData"></a>

For partitioned tables, you can archive older data to additional storage volumes with different performance characteristics.

**To archive partitioned data**

1. Add a storage volume with appropriate storage type and capacity.

1. Create a new filegroup on the additional storage volume:

   ```
   ALTER DATABASE MyDatabase
   ADD FILEGROUP ArchiveFileGroup;
   
   ALTER DATABASE MyDatabase
   ADD FILE (
       NAME = 'Archive_Data',
       FILENAME = 'H:\rdsdbdata\data\Archive_Data.ndf',
       SIZE = 1GB,
       FILEGROWTH = 100MB
   ) TO FILEGROUP ArchiveFileGroup;
   ```

1. Move partitions to the new filegroup using SQL Server partition management commands.
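As a sketch of that last step, assuming a partitioned table whose partition scheme and function are named `MyPartitionScheme` and `MyPartitionFunction` (both hypothetical names), you can direct the next partition to the archive filegroup and then split off a boundary for the older data:

```
-- Place the partition created by the next split on the archive filegroup
ALTER PARTITION SCHEME MyPartitionScheme
NEXT USED ArchiveFileGroup;

-- Split at a boundary value; the new partition lands on ArchiveFileGroup
ALTER PARTITION FUNCTION MyPartitionFunction()
SPLIT RANGE ('2020-01-01');
```

Depending on how existing rows fall relative to the new boundary, a split can physically move data, which is expensive on large tables. Many designs instead use `ALTER TABLE ... SWITCH` into a staging table built on the archive filegroup.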

# Importing and exporting SQL Server databases using native backup and restore
<a name="SQLServer.Procedural.Importing"></a>

Amazon RDS supports native backup and restore for Microsoft SQL Server databases using full backup files (.bak files). When you use RDS, you access files stored in Amazon S3 rather than using the local file system on the database server.

For example, you can create a full backup from your local server, store it on S3, and then restore it onto an existing Amazon RDS DB instance. You can also make backups from RDS, store them on S3, and then restore them wherever you want.

Native backup and restore is available in all AWS Regions for Single-AZ and Multi-AZ DB instances, including Multi-AZ DB instances with read replicas. Native backup and restore is available for all editions of Microsoft SQL Server supported on Amazon RDS.

The following diagram shows the supported scenarios.

![Native Backup and Restore Architecture](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/SQL-bak-file.png)


Using native .bak files to back up and restore databases is usually the fastest way to back up and restore databases. There are many additional advantages to using native backup and restore. For example, you can do the following:
+ Migrate databases to or from Amazon RDS.
+ Move databases between RDS for SQL Server DB instances.
+ Migrate data, schemas, stored procedures, triggers, and other database code inside .bak files.
+ Back up and restore single databases, instead of entire DB instances.
+ Create copies of databases for development, testing, training, and demonstrations.
+ Store and transfer backup files with Amazon S3, for an added layer of protection for disaster recovery.
+ Create native backups of databases that have Transparent Data Encryption (TDE) turned on, and restore those backups to on-premises databases. For more information, see [Support for Transparent Data Encryption in SQL Server](Appendix.SQLServer.Options.TDE.md).
+ Restore native backups of on-premises databases that have TDE turned on to RDS for SQL Server DB instances. For more information, see [Support for Transparent Data Encryption in SQL Server](Appendix.SQLServer.Options.TDE.md).

**Contents**
+ [Limitations and recommendations](#SQLServer.Procedural.Importing.Native.Limitations)
+ [Setting up for native backup and restore](SQLServer.Procedural.Importing.Native.Enabling.md)
  + [Manually creating an IAM role for native backup and restore](SQLServer.Procedural.Importing.Native.Enabling.md#SQLServer.Procedural.Importing.Native.Enabling.IAM)
+ [Using native backup and restore](SQLServer.Procedural.Importing.Native.Using.md)
  + [Backing up a database](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Using.Backup)
    + [Usage](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Backup.Syntax)
    + [Examples](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Backup.Examples)
  + [Restoring a database](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Using.Restore)
    + [Usage](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Restore.Syntax)
    + [Examples](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Restore.Examples)
  + [Restoring a log](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Restore.Log)
    + [Usage](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Restore.Log.Syntax)
    + [Examples](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Restore.Log.Examples)
  + [Finishing a database restore](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Finish.Restore)
    + [Usage](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Finish.Restore.Syntax)
  + [Working with partially restored databases](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Partially.Restored)
    + [Dropping a partially restored database](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Drop.Partially.Restored)
    + [Snapshot restore and point-in-time recovery behavior for partially restored databases](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Snapshot.Restore)
  + [Canceling a task](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Using.Cancel)
    + [Usage](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Cancel.Syntax)
  + [Tracking the status of tasks](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Tracking)
    + [Usage](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Tracking.Syntax)
    + [Examples](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Tracking.Examples)
    + [Response](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Tracking.Response)
+ [Compressing backup files](SQLServer.Procedural.Importing.Native.Compression.md)
+ [Troubleshooting](SQLServer.Procedural.Importing.Native.Troubleshooting.md)
+ [Importing and exporting SQL Server data using other methods](SQLServer.Procedural.Importing.Snapshots.md)
  + [Importing data into RDS for SQL Server by using a snapshot](SQLServer.Procedural.Importing.Snapshots.md#SQLServer.Procedural.Importing.Procedure)
    + [Import the data](SQLServer.Procedural.Importing.Snapshots.md#ImportData.SQLServer.Import)
      + [Generate and Publish Scripts Wizard](SQLServer.Procedural.Importing.Snapshots.md#ImportData.SQLServer.MgmtStudio.ScriptWizard)
      + [Import and Export Wizard](SQLServer.Procedural.Importing.Snapshots.md#ImportData.SQLServer.MgmtStudio.ImportExportWizard)
      + [Bulk copy](SQLServer.Procedural.Importing.Snapshots.md#ImportData.SQLServer.MgmtStudio.BulkCopy)
  + [Exporting data from RDS for SQL Server](SQLServer.Procedural.Importing.Snapshots.md#SQLServer.Procedural.Exporting)
    + [SQL Server Import and Export Wizard](SQLServer.Procedural.Importing.Snapshots.md#SQLServer.Procedural.Exporting.SSIEW)
    + [SQL Server Generate and Publish Scripts Wizard and bcp utility](SQLServer.Procedural.Importing.Snapshots.md#SQLServer.Procedural.Exporting.SSGPSW)
+ [Using BCP utility from Linux to import and export data](SQLServer.Procedural.Importing.BCP.Linux.md)
  + [Prerequisites](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Prerequisites)
  + [Installing SQL Server command-line tools on Linux](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Installing)
  + [Exporting data from RDS for SQL Server](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Exporting)
    + [Basic export syntax](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Exporting.Basic)
    + [Export example](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Exporting.Example)
  + [Importing data to RDS for SQL Server](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Importing)
    + [Basic import syntax](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Importing.Basic)
    + [Import example](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Importing.Example)
  + [Common BCP options](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Options)
  + [Best practices and considerations](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.BestPractices)
  + [Troubleshooting common issues](SQLServer.Procedural.Importing.BCP.Linux.md#SQLServer.Procedural.Importing.BCP.Linux.Troubleshooting)

## Limitations and recommendations
<a name="SQLServer.Procedural.Importing.Native.Limitations"></a>

The following are some limitations to using native backup and restore: 
+ You can't back up to, or restore from, an Amazon S3 bucket in a different AWS Region from your Amazon RDS DB instance.
+ You can't restore a database with the same name as an existing database. Database names are unique.
+ We strongly recommend that you don't restore backups from one time zone to a different time zone. If you restore backups from one time zone to a different time zone, you must audit your queries and applications for the effects of the time zone change.
+ RDS for Microsoft SQL Server has a size limit of 5 TB per file. For native backups of larger databases, you can use multifile backup.
+ The maximum database size that can be backed up to S3 depends on the available memory, CPU, I/O, and network resources on the DB instance. The larger the database, the more memory the backup agent consumes.
+ You can't back up to or restore from more than 10 backup files at the same time.
+ A differential backup is based on the last full backup. For differential backups to work, you can't take a snapshot between the last full backup and the differential backup. If you want a differential backup, but a manual or automated snapshot exists, then do another full backup before proceeding with the differential backup.
+ Differential and log restores aren't supported for databases with files that have their file_guid (unique identifier) set to `NULL`.
+ You can run up to two backup or restore tasks at the same time.
+ You can't perform native log backups from SQL Server on Amazon RDS.
+ RDS supports native restores of databases up to 64 TiB. Native restores of databases on SQL Server Express Edition are limited to 10 GB.
+ You can't do a native backup during the maintenance window, or any time Amazon RDS is in the process of taking a snapshot of the database. If a native backup task overlaps with the RDS daily backup window, the native backup task is canceled.
+ On Multi-AZ DB instances, you can only natively restore databases that are backed up in the full recovery model.
+ Calling the RDS procedures for native backup and restore within a transaction isn't supported.
+ Use a symmetric encryption AWS KMS key to encrypt your backups. Amazon RDS doesn't support asymmetric KMS keys. For more information, see [Creating symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html#create-symmetric-cmk) in the *AWS Key Management Service Developer Guide*.
+ Native backup files are encrypted with the specified KMS key using the "Encryption-Only" crypto mode. When you are restoring encrypted backup files, be aware that they were encrypted with the "Encryption-Only" crypto mode.
+ You can't restore a database that contains a FILESTREAM file group.
+ Amazon S3 server-side encryption with AWS KMS (SSE-KMS) is supported through your S3 bucket's default encryption configuration when you pass `@enable_bucket_default_encryption=1` to the backup stored procedure. By default, the restore supports the S3 object's server-side encryption.

  When you provide a KMS key to a stored procedure, any native backup and restores are encrypted and decrypted on the client-side with the KMS key. AWS stores the backups in the S3 bucket with SSE-S3 when `@enable_bucket_default_encryption=0` or with your S3 bucket's configured default encryption key when `@enable_bucket_default_encryption=1`.
+ When using S3 Access Points, the access point cannot be configured to use an RDS internal VPC.
+ For the highest performance, we recommend using directory buckets, or access points for directory buckets, if they're available in your AWS Region.

If your database can be offline while the backup file is created, copied, and restored, we recommend that you use native backup and restore to migrate it to RDS. If your on-premises database can't be offline, we recommend that you use the AWS Database Migration Service to migrate your database to Amazon RDS. For more information, see [What is AWS Database Migration Service?](https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html) 

Native backup and restore isn't intended to replace the data recovery capabilities of the cross-region snapshot copy feature. We recommend that you use snapshot copy to copy your database snapshot to another AWS Region for cross-region disaster recovery in Amazon RDS. For more information, see [Copying a DB snapshot for Amazon RDS](USER_CopySnapshot.md).

# Setting up for native backup and restore
<a name="SQLServer.Procedural.Importing.Native.Enabling"></a>

To set up for native backup and restore, you need three components:

1. An Amazon S3 bucket to store your backup files.

   You must have an S3 bucket to use for your backup files and then upload backups you want to migrate to RDS. If you already have an Amazon S3 bucket, you can use that. If you don't, you can [create a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingaBucket.html). Alternatively, you can choose to have a new bucket created for you when you add the `SQLSERVER_BACKUP_RESTORE` option by using the AWS Management Console.

   For information on using Amazon S3, see the [Amazon Simple Storage Service User Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/).

1. An AWS Identity and Access Management (IAM) role to access the bucket.

   If you already have an IAM role, you can use that. You can choose to have a new IAM role created for you when you add the `SQLSERVER_BACKUP_RESTORE` option by using the AWS Management Console. Alternatively, you can create a new one manually.

   If you want to create a new IAM role manually, take the approach discussed in the next section. Do the same if you want to attach trust relationships and permissions policies to an existing IAM role.

1. The `SQLSERVER_BACKUP_RESTORE` option added to an option group on your DB instance.

   To enable native backup and restore on your DB instance, you add the `SQLSERVER_BACKUP_RESTORE` option to an option group on your DB instance. For more information and instructions, see [Support for native backup and restore in SQL Server](Appendix.SQLServer.Options.BackupRestore.md).
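If you prefer to script the third step, the option can also be added programmatically. The following sketch builds the payload for boto3's `modify_option_group` call; the option group name and role ARN are hypothetical placeholders, and the choice to apply immediately is an illustrative assumption, not a recommendation.

```python
# Hypothetical sketch: build the ModifyOptionGroup payload that adds the
# SQLSERVER_BACKUP_RESTORE option with its IAM_ROLE_ARN setting.
def backup_restore_option_payload(option_group_name, iam_role_arn):
    """Return keyword arguments for boto3's rds.modify_option_group call."""
    return {
        "OptionGroupName": option_group_name,
        "OptionsToInclude": [
            {
                "OptionName": "SQLSERVER_BACKUP_RESTORE",
                "OptionSettings": [
                    {"Name": "IAM_ROLE_ARN", "Value": iam_role_arn},
                ],
            }
        ],
        "ApplyImmediately": True,  # assumption for this sketch; choose per your change policy
    }

payload = backup_restore_option_payload(
    "my-sqlserver-og",  # placeholder option group name
    "arn:aws:iam::123456789012:role/my-backup-restore-role",  # placeholder role
)
# A real invocation would then be:
#   import boto3
#   boto3.client("rds").modify_option_group(**payload)
```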

## Manually creating an IAM role for native backup and restore
<a name="SQLServer.Procedural.Importing.Native.Enabling.IAM"></a>

If you want to manually create a new IAM role to use with native backup and restore, you can do so. In this case, you create a role to delegate permissions from the Amazon RDS service to your Amazon S3 bucket. When you create an IAM role, you attach a trust relationship and a permissions policy. The trust relationship allows RDS to assume this role. The permissions policy defines the actions this role can perform. For more information about creating the role, see [Creating a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html).

For the native backup and restore feature, use trust relationships and permissions policies similar to the examples in this section. In the following example, we use the service principal name `rds.amazonaws.com` as an alias for all service accounts. In the other examples, we specify an Amazon Resource Name (ARN) to identify another account, user, or role that we're granting access to in the trust policy.

We recommend using the [`aws:SourceArn`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn) and [`aws:SourceAccount`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceaccount) global condition context keys in resource-based trust relationships to limit the service's permissions to a specific resource. This is the most effective way to protect against the [confused deputy problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html).

You might use both global condition context keys and have the `aws:SourceArn` value contain the account ID. In this case, the `aws:SourceAccount` value and the account in the `aws:SourceArn` value must use the same account ID when used in the same statement.
+ Use `aws:SourceArn` if you want cross-service access for a single resource.
+ Use `aws:SourceAccount` if you want to allow any resource in that account to be associated with the cross-service use.
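The consistency rule above can be sketched as a small check: when a statement uses both keys, the account ID embedded in each `aws:SourceArn` value must equal `aws:SourceAccount`. The helper below is illustrative only (it relies on the standard `arn:partition:service:region:account:...` layout), not an AWS API.

```python
# Illustrative check: the account ID in every aws:SourceArn value must match
# the aws:SourceAccount value when both keys appear in the same statement.
def source_keys_consistent(source_arns, source_account):
    for arn in source_arns:
        # Field 5 (0-indexed 4) of a standard ARN is the account ID.
        if arn.split(":")[4] != source_account:
            return False
    return True

assert source_keys_consistent(
    ["arn:aws:rds:us-east-1:123456789012:db:mydb",
     "arn:aws:rds:us-east-1:123456789012:og:myog"],
    "123456789012")
assert not source_keys_consistent(
    ["arn:aws:rds:us-east-1:111122223333:db:mydb"], "123456789012")
```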

In the trust relationship, make sure to use the `aws:SourceArn` global condition context key with the full ARN of the resources accessing the role. For native backup and restore, make sure to include both the DB option group and the DB instances, as shown in the following example.

**Example of trust relationship with global condition context key for native backup and restore**    

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "rds.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:SourceArn": [
                        "arn:aws:rds:Region:0123456789:db:db_instance_identifier",
                        "arn:aws:rds:Region:0123456789:og:option_group_name"
                    ],
                    "aws:SourceAccount": "0123456789"
                }
            }
        }
    ]
}
```

The following example uses an ARN to specify a resource. For more information on using ARNs, see [ Amazon resource names (ARNs)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). 

**Example of permissions policy for native backup and restore without encryption support**    

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement":
    [
        {
        "Effect": "Allow",
        "Action":
            [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        {
        "Effect": "Allow",
        "Action":
            [
                "s3:GetObjectAttributes",
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}
```

**Example permissions policy for native backup and restore with encryption support**  
If you want to encrypt your backup files, include an encryption key in your permissions policy. For more information about encryption keys, see [Getting started](https://docs.aws.amazon.com/kms/latest/developerguide/getting-started.html) in the *AWS Key Management Service Developer Guide*.  
You must use a symmetric encryption KMS key to encrypt your backups. Amazon RDS doesn't support asymmetric KMS keys. For more information, see [Creating symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html#create-symmetric-cmk) in the *AWS Key Management Service Developer Guide*.  
The IAM role must also be a key user and key administrator for the KMS key, that is, it must be specified in the key policy. For more information, see [Creating symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html#create-symmetric-cmk) in the *AWS Key Management Service Developer Guide*.  

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "AllowAccessToKey",
      "Effect": "Allow",
      "Action": [
        "kms:DescribeKey",
        "kms:GenerateDataKey",
        "kms:Encrypt",
        "kms:Decrypt"
      ],
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/key-id"
    },
    {
      "Sid": "AllowAccessToS3",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
    },
    {
      "Sid": "GetS3Info",
      "Effect": "Allow",
      "Action": [
        "s3:GetObjectAttributes",
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
    }
  ]
}
```

**Example permissions policy for native backup and restore using access points without encryption support**  
The actions required to use S3 access points are the same as for S3 buckets. The resource path is updated to match the S3 access point ARN pattern.  
Access points must be configured to use **Network origin: Internet**, because RDS doesn't publish private VPCs. S3 traffic from RDS instances still travels over private VPCs rather than the public internet.  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:us-east-1:111122223333:accesspoint/amzn-s3-demo-ap",
                "arn:aws:s3:::underlying-bucket"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectAttributes",
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": [
                "arn:aws:s3:us-east-1:111122223333:accesspoint/amzn-s3-demo-ap/*",
                "arn:aws:s3:::underlying-bucket/*"
            ]
        }
    ]
}
```

**Example permissions policy for native backup and restore using access points for directory buckets without encryption support**  
Directory buckets use a different, [session-based authorization mechanism](https://docs.aws.amazon.com//AmazonS3/latest/userguide/s3-express-authenticating-authorizing.html) than general purpose buckets, so the only permission required for native backup and restore is the bucket-level `s3express:CreateSession` permission. To configure object-level access, you must use [access points for directory buckets](https://docs.aws.amazon.com//AmazonS3/latest/userguide/access-points-directory-buckets-policies.html).  
Access points must be configured to use **Network origin: Internet**, because RDS doesn't publish private VPCs. S3 traffic from RDS instances still travels over private VPCs rather than the public internet.  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement":
    [
        {
        "Effect": "Allow",
        "Action": "s3express:CreateSession",
        "Resource": 
            [
                "arn:aws:s3express:us-east-1:111122223333:accesspoint/amzn-s3-demo-accesspoint--use1-az6--xa-s3",
                "arn:aws:s3express:us-east-1:111122223333:bucket/amzn-s3-demo-bucket--use1-az6--x-s3"
            ]
        }
    ]
}
```

# Using native backup and restore
<a name="SQLServer.Procedural.Importing.Native.Using"></a>

After you have enabled and configured native backup and restore, you can start using it. First, you connect to your Microsoft SQL Server database, and then you call an Amazon RDS stored procedure to do the work. For instructions on connecting to your database, see [Connecting to your Microsoft SQL Server DB instance](USER_ConnectToMicrosoftSQLServerInstance.md). 

Some of the stored procedures require that you provide an Amazon Resource Name (ARN) to your Amazon S3 bucket and file. The format for your ARN is `arn:aws:s3:::bucket_name/file_name.extension`. Amazon S3 doesn't require an account number or AWS Region in ARNs.
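The two ARN shapes used by these stored procedures can be sketched as simple string builders; the bucket, key, and KMS identifiers below are placeholders, and the formats themselves come from the text above.

```python
# Minimal helpers for the ARN formats this section uses.
def s3_backup_arn(bucket_name, file_name):
    # S3 ARNs carry no account ID or Region.
    return f"arn:aws:s3:::{bucket_name}/{file_name}"

def kms_key_arn(region, account_id, key_id):
    return f"arn:aws:kms:{region}:{account_id}:key/{key_id}"

assert s3_backup_arn("mybucket", "backup1.bak") == "arn:aws:s3:::mybucket/backup1.bak"
assert kms_key_arn("us-east-1", "123456789012", "key-id") == \
    "arn:aws:kms:us-east-1:123456789012:key/key-id"
```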

If you also provide an optional KMS key, the format for the ARN of the key is `arn:aws:kms:region:account-id:key/key-id`. For more information, see [ Amazon resource names (ARNs) and AWS service namespaces](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). You must use a symmetric encryption KMS key to encrypt your backups. Amazon RDS doesn't support asymmetric KMS keys. For more information, see [Creating symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html#create-symmetric-cmk) in the *AWS Key Management Service Developer Guide*.

**Note**  
Whether or not you use a KMS key, the native backup and restore tasks enable server-side Advanced Encryption Standard (AES) 256-bit encryption through SSE-S3 by default for files uploaded to S3. Passing in `@enable_bucket_default_encryption=1` to the backup stored procedure uses your S3 bucket's configured default encryption key.

For instructions on how to call each stored procedure, see the following topics:
+ [Backing up a database](#SQLServer.Procedural.Importing.Native.Using.Backup)
+ [Restoring a database](#SQLServer.Procedural.Importing.Native.Using.Restore)
+ [Restoring a log](#SQLServer.Procedural.Importing.Native.Restore.Log)
+ [Finishing a database restore](#SQLServer.Procedural.Importing.Native.Finish.Restore)
+ [Working with partially restored databases](#SQLServer.Procedural.Importing.Native.Partially.Restored)
+ [Canceling a task](#SQLServer.Procedural.Importing.Native.Using.Cancel)
+ [Tracking the status of tasks](#SQLServer.Procedural.Importing.Native.Tracking)

## Backing up a database
<a name="SQLServer.Procedural.Importing.Native.Using.Backup"></a>

To back up your database, use the `rds_backup_database` stored procedure.

**Note**  
You can't back up a database during the maintenance window, or while Amazon RDS is taking a snapshot. 

### Usage
<a name="SQLServer.Procedural.Importing.Native.Backup.Syntax"></a>

```
exec msdb.dbo.rds_backup_database
	@source_db_name='database_name',
	@s3_arn_to_backup_to='arn:aws:s3:::bucket_name/file_name.extension',
	[@kms_master_key_arn='arn:aws:kms:region:account-id:key/key-id'],
	[@overwrite_s3_backup_file=0|1],
	[@block_size=512|1024|2048|4096|8192|16384|32768|65536],
	[@max_transfer_size=n],
	[@buffer_count=n],
	[@type='DIFFERENTIAL|FULL'],
	[@number_of_files=n],
	[@enable_bucket_default_encryption=0|1];
```

The following parameters are required:
+ `@source_db_name` – The name of the database to back up.
+ `@s3_arn_to_backup_to` – The ARN indicating the Amazon S3 bucket, access point, directory bucket, or access point for directory bucket to use for the backup, plus the name of the backup file.

  The file can have any extension, but `.bak` is commonly used. Note that access point ARNs must use the format `arn:aws:s3:region:account-id:accesspoint/access-point-name/object/key`.

The following parameters are optional:
+ `@kms_master_key_arn` – The ARN for the symmetric encryption KMS key to use to encrypt the item.
  + You can't use the default encryption key. If you use the default key, the database won't be backed up.
  +  If you don't specify a KMS key identifier, the backup file won't be encrypted. For more information, see [Encrypting Amazon RDS resources](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html).
  + When you specify a KMS key, client-side encryption is used.
  + Amazon RDS doesn't support asymmetric KMS keys. For more information, see [Creating symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html#create-symmetric-cmk) in the *AWS Key Management Service Developer Guide*.
+ `@overwrite_s3_backup_file` – A value that indicates whether to overwrite an existing backup file.
  + `0` – Doesn't overwrite an existing file. This value is the default.

    Setting `@overwrite_s3_backup_file` to 0 returns an error if the file already exists.
  + `1` – Overwrites an existing file that has the specified name, even if it isn't a backup file.
+ `@type` – The type of backup.
  + `DIFFERENTIAL` – Makes a differential backup.
  + `FULL` – Makes a full backup. This value is the default.

  A differential backup is based on the last full backup. For differential backups to work, you can't take a snapshot between the last full backup and the differential backup. If you want a differential backup, but a snapshot exists, then do another full backup before proceeding with the differential backup.

  You can look for the last full backup or snapshot using the following example SQL query:

  ```
  select top 1
  database_name
  , 	backup_start_date
  , 	backup_finish_date
  from    msdb.dbo.backupset
  where   database_name='mydatabase'
  and     type = 'D'
  order by backup_start_date desc;
  ```
+ `@number_of_files` – The number of files into which the backup will be divided (chunked). The maximum number is 10.
  + Multifile backup is supported for both full and differential backups.
  + If you enter a value of 1 or omit the parameter, a single backup file is created.

  Provide the prefix that the files have in common, then suffix it with an asterisk (`*`). The asterisk can be anywhere in the *file_name* part of the S3 ARN. The asterisk is replaced by a series of alphanumeric strings in the generated files, starting with `1-of-number_of_files`.

  For example, if the file names in the S3 ARN are `backup*.bak` and you set `@number_of_files=4`, the backup files generated are `backup1-of-4.bak`, `backup2-of-4.bak`, `backup3-of-4.bak`, and `backup4-of-4.bak`.
  + If any of the file names already exists, and `@overwrite_s3_backup_file` is 0, an error is returned.
  + Multifile backups can have only one asterisk in the *file_name* part of the S3 ARN.
  + Single-file backups can have any number of asterisks in the *file_name* part of the S3 ARN. Asterisks aren't removed from the generated file name.
+ `@block_size` – The physical block size (in bytes) for the backup operation. Valid values are 512, 1024, 2048, 4096, 8192, 16384, 32768, and 65536.
+ `@max_transfer_size` – The upper limit of data volume (in bytes) transferred per I/O operation during the backup process. Valid values are multiples of 65536 bytes (64 KB), up to 4194304 bytes (4 MB).
+ `@buffer_count` – The total number of I/O buffers to be used for the backup process.
+ `@enable_bucket_default_encryption` – A value that indicates whether to use the S3 bucket's default encryption configuration for server-side encryption in S3. Directory buckets always use the bucket's default encryption configuration regardless of this setting.
  + `0` – Server-side encryption uses Advanced Encryption Standard (AES) 256-bit encryption through SSE-S3.
  + `1` – Server-side encryption uses your S3 bucket’s configured [default encryption](https://docs.aws.amazon.com//AmazonS3/latest/userguide/bucket-encryption.html). 
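The multifile naming rule described under `@number_of_files` can be sketched as follows. This is an illustration of the documented behavior, not RDS code: the single asterisk is replaced with `i-of-n` for each generated file, while a value of 1 leaves the name untouched, asterisk included.

```python
# Sketch of the multifile naming rule: the asterisk in the file name is
# replaced with "i-of-n" for each of the n generated backup files.
def expand_backup_files(file_name, number_of_files):
    if not 1 <= number_of_files <= 10:
        raise ValueError("@number_of_files must be between 1 and 10")
    if number_of_files == 1:
        # Asterisks aren't removed for single-file backups.
        return [file_name]
    if file_name.count("*") != 1:
        raise ValueError("multifile backups need exactly one asterisk")
    return [file_name.replace("*", f"{i}-of-{number_of_files}")
            for i in range(1, number_of_files + 1)]

assert expand_backup_files("backup*.bak", 4) == [
    "backup1-of-4.bak", "backup2-of-4.bak", "backup3-of-4.bak", "backup4-of-4.bak"]
assert expand_backup_files("backup*.bak", 1) == ["backup*.bak"]
```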

### Examples
<a name="SQLServer.Procedural.Importing.Native.Backup.Examples"></a>

**Example of differential backup**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup1.bak',
@overwrite_s3_backup_file=1,
@type='DIFFERENTIAL';
```

**Example of full backup with client-side encryption**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup1.bak',
@kms_master_key_arn='arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE',
@overwrite_s3_backup_file=1,
@type='FULL';
```

**Example of multifile backup**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@number_of_files=4;
```

**Example of multifile differential backup**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@type='DIFFERENTIAL',
@number_of_files=4;
```

**Example of multifile backup with encryption**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@kms_master_key_arn='arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE',
@number_of_files=4;
```

**Example of multifile backup with S3 overwrite**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@overwrite_s3_backup_file=1,
@number_of_files=4;
```

**Example of backup with block size**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@block_size=512;
```

**Example of multifile backup with `@max_transfer_size` and `@buffer_count`**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@number_of_files=4,
@max_transfer_size=4194304,
@buffer_count=10;
```

**Example of single-file backup with the `@number_of_files` parameter**  
This example generates a backup file named `backup*.bak`.  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@number_of_files=1;
```

**Example of full backup with server-side encryption**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:::mybucket/backup*.bak',
@overwrite_s3_backup_file=1,
@type='FULL',
@enable_bucket_default_encryption=1;
```

**Example of full backup using an access point**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3:us-east-1:111122223333:accesspoint/my-access-point/object/backup1.bak',
@overwrite_s3_backup_file=1,
@type='FULL';
```

**Example of full backup using an access point for a directory bucket**  

```
exec msdb.dbo.rds_backup_database
@source_db_name='mydatabase',
@s3_arn_to_backup_to='arn:aws:s3express:us-east-1:123456789012:accesspoint/my-access-point--use1-az6--xa-s3/object/backup1.bak',
@overwrite_s3_backup_file=1,
@type='FULL';
```

## Restoring a database
<a name="SQLServer.Procedural.Importing.Native.Using.Restore"></a>

To restore your database, call the `rds_restore_database` stored procedure. Amazon RDS creates an initial snapshot of the database after the restore task is complete and the database is open.

### Usage
<a name="SQLServer.Procedural.Importing.Native.Restore.Syntax"></a>

```
exec msdb.dbo.rds_restore_database
	@restore_db_name='database_name',
	@s3_arn_to_restore_from='arn:aws:s3:::bucket_name/file_name.extension',
	@with_norecovery=0|1,
	[@keep_cdc=0|1],
	[@data_file_volume='D:|H:|I:|J:'],
	[@log_file_volume='D:|H:|I:|J:'],
	[@kms_master_key_arn='arn:aws:kms:region:account-id:key/key-id'],
	[@block_size=512|1024|2048|4096|8192|16384|32768|65536],
	[@max_transfer_size=n],
	[@buffer_count=n],
	[@type='DIFFERENTIAL|FULL'];
```

The following parameters are required:
+ `@restore_db_name` – The name of the database to restore. Database names are unique. You can't restore a database with the same name as an existing database.
+ `@s3_arn_to_restore_from` – The ARN indicating the Amazon S3 prefix and names of the backup files used to restore the database.
  + For a single-file backup, provide the entire file name.
  + For a multifile backup, provide the prefix that the files have in common, then suffix that with an asterisk (`*`).
    + If using a directory bucket, the ARN must end with `/*` due to [differences for directory buckets](https://docs.aws.amazon.com//AmazonS3/latest/userguide/s3-express-differences.html).
  + If `@s3_arn_to_restore_from` is empty, the following error message is returned: S3 ARN prefix cannot be empty.

The following parameter is required for differential restores, but optional for full restores:
+ `@with_norecovery` – The recovery clause to use for the restore operation.
  + Set it to `0` to restore with RECOVERY. In this case, the database is online after the restore.
  + Set it to `1` to restore with NORECOVERY. In this case, the database remains in the RESTORING state after restore task completion. With this approach, you can do later differential restores.
  + For DIFFERENTIAL restores, specify `0` or `1`.
  + For `FULL` restores, this value defaults to `0`.
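The interaction between `@type` and `@with_norecovery` can be sketched as a small decision function. This is only an illustration of the rules above: RECOVERY (`0`) leaves the database online, NORECOVERY (`1`) leaves it in the RESTORING state, and FULL restores default to RECOVERY.

```python
# Sketch of the resulting database state for each restore configuration.
def state_after_restore(restore_type="FULL", with_norecovery=None):
    if with_norecovery is None:
        if restore_type == "DIFFERENTIAL":
            raise ValueError("@with_norecovery is required for DIFFERENTIAL restores")
        with_norecovery = 0  # FULL restores default to restoring with RECOVERY
    return "RESTORING" if with_norecovery else "ONLINE"

assert state_after_restore() == "ONLINE"
assert state_after_restore("FULL", 1) == "RESTORING"
assert state_after_restore("DIFFERENTIAL", 0) == "ONLINE"
```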

The following parameters are optional:
+ `@keep_cdc` – Indicates whether to retain the Change Data Capture (CDC) configuration on the restored database. Set to `1` to enable KEEP_CDC, or `0` to disable it. The default value is `0`.
+ `@data_file_volume` – Specifies the drive letter for database data files. The default value is `D:`.
+ `@log_file_volume` – Specifies the drive letter for database log files. The default value is `D:`.
+ `@kms_master_key_arn` – If you encrypted the backup file, the KMS key to use to decrypt the file.

  When you specify a KMS key, client-side encryption is used.
+ `@type` – The type of restore. Valid types are `DIFFERENTIAL` and `FULL`. The default value is `FULL`.
+ `@block_size` – The physical block size (in bytes) for the restore operation. Valid values are 512, 1024, 2048, 4096, 8192, 16384, 32768, and 65536.
+ `@max_transfer_size` – The upper limit of data volume (in bytes) transferred per I/O operation during the restore process. Valid values are multiples of 65536 bytes (64 KB), up to 4194304 bytes (4 MB).
+ `@buffer_count` – The total number of I/O buffers to be used for the restore process.
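The constraints on the tuning parameters above (and their counterparts in the backup procedure) can be sketched as a validation helper. The limits come from this section; the helper itself is illustrative, not part of RDS.

```python
# Illustrative validation of the tuning parameters: allowed block sizes,
# transfer sizes in 64 KB multiples up to 4 MB, and a positive buffer count.
VALID_BLOCK_SIZES = {512, 1024, 2048, 4096, 8192, 16384, 32768, 65536}

def validate_tuning(block_size=None, max_transfer_size=None, buffer_count=None):
    if block_size is not None and block_size not in VALID_BLOCK_SIZES:
        raise ValueError("invalid @block_size")
    if max_transfer_size is not None:
        if max_transfer_size % 65536 != 0 or not 65536 <= max_transfer_size <= 4194304:
            raise ValueError("@max_transfer_size must be a 64 KB multiple up to 4 MB")
    if buffer_count is not None and buffer_count < 1:
        raise ValueError("@buffer_count must be positive")
    return True

assert validate_tuning(block_size=512, max_transfer_size=4194304, buffer_count=10)
```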

**Note**  
For differential restores, either the database must be in the RESTORING state or a task must already exist that restores with NORECOVERY.  
You can't restore later differential backups while the database is online.  
You can't submit a restore task for a database that already has a pending restore task with RECOVERY.  
Full restores with both NORECOVERY and KEEP_CDC aren't supported.  
Native restores aren't supported on instances that have cross-region read replicas.  
For supported configurations, restoring a database on a Multi-AZ instance with read replicas is similar to restoring a database on a Multi-AZ instance. You don't have to take any additional actions to restore a database on a replica.

### Examples
<a name="SQLServer.Procedural.Importing.Native.Restore.Examples"></a>

**Example of single-file restore**  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak';
```

**Example of multifile restore**  
To avoid errors when restoring multiple files, make sure that all the backup files have the same prefix, and that no other files use that prefix.  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup*';
```

**Example of full database restore with RECOVERY**  
The following three examples perform the same task: a full restore with RECOVERY.  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak';
```

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='FULL';
```

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='FULL',
@with_norecovery=0;
```

**Example of full database restore with encryption**  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@kms_master_key_arn='arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE';
```

**Example of restore with block size**  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@block_size=512;
```

**Example of multifile restore with `@max_transfer_size` and `@buffer_count`**  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup*',
@max_transfer_size=4194304,
@buffer_count=10;
```

**Example of full database restore with NORECOVERY**  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='FULL',
@with_norecovery=1;
```

**Example of differential restore with NORECOVERY**  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='DIFFERENTIAL',
@with_norecovery=1;
```

**Example of differential restore with RECOVERY**  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@type='DIFFERENTIAL',
@with_norecovery=0;
```

**Example of full database restore with RECOVERY using an access point**  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:us-east-1:111122223333:accesspoint/my-access-point/object/backup1.bak',
@with_norecovery=0;
```

**Example of full database restore with KEEP_CDC**  

```
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@keep_cdc=1;
```

## Restoring a log
<a name="SQLServer.Procedural.Importing.Native.Restore.Log"></a>

To restore your log, call the `rds_restore_log` stored procedure.

### Usage
<a name="SQLServer.Procedural.Importing.Native.Restore.Log.Syntax"></a>

```
exec msdb.dbo.rds_restore_log 
	@restore_db_name='database_name',
	@s3_arn_to_restore_from='arn:aws:s3:::bucket_name/log_file_name.extension',
	[@kms_master_key_arn='arn:aws:kms:region:account-id:key/key-id'],
	[@with_norecovery=0|1],
	[@keep_cdc=0|1],
	[@stopat='datetime'],
	[@block_size=512|1024|2048|4096|8192|16384|32768|65536],
        [@max_transfer_size=n],
        [@buffer_count=n];
```

The following parameters are required:
+ `@restore_db_name` – The name of the database whose log to restore.
+ `@s3_arn_to_restore_from` – The ARN indicating the Amazon S3 prefix and name of the log file used to restore the log. The file can have any extension, but `.trn` is usually used.

  If `@s3_arn_to_restore_from` is empty, the following error message is returned: S3 ARN prefix cannot be empty.

The following parameters are optional:
+ `@keep_cdc` – Indicates whether to retain Change Data Capture (CDC) configuration on the restored database. Set to 1 to enable KEEP_CDC, 0 to disable. The default value is 0.
+ `@kms_master_key_arn` – If you encrypted the log, the KMS key to use to decrypt the log.
+ `@with_norecovery` – The recovery clause to use for the restore operation. This value defaults to `1`.
  + Set it to `0` to restore with RECOVERY. In this case, the database is online after the restore. You can't restore further log backups while the database is online.
  + Set it to `1` to restore with NORECOVERY. In this case, the database remains in the RESTORING state after restore task completion. With this approach, you can do later log restores.
+ `@stopat` – A value that specifies that the database is restored to its state at the date and time specified (in datetime format). Only transaction log records written before the specified date and time are applied to the database.

  If this parameter isn't specified (it is NULL), the complete log is restored.
+ `@block_size` – The physical block size, in bytes, for the restore operation. Valid values are 512, 1024, 2048, 4096, 8192, 16384, 32768, and 65536.
+ `@max_transfer_size` – The maximum amount of data, in bytes, transferred per I/O operation during the restore process. Valid values are multiples of 65536 bytes (64 KB), up to 4194304 bytes (4 MB). 
+ `@buffer_count` – The total number of I/O buffers to use for the restore process.

**Note**  
For log restores, either the database must be in the RESTORING state or a task must already exist that restores with NORECOVERY.  
You can't restore log backups while the database is online.  
You can't submit a log restore task on a database that already has a pending restore task with RECOVERY.

### Examples
<a name="SQLServer.Procedural.Importing.Native.Restore.Log.Examples"></a>

**Example of log restore**  

```
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn';
```

**Example of log restore with encryption**  

```
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@kms_master_key_arn='arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE';
```

**Example of log restore with NORECOVERY**  
The following two examples perform the same task: a log restore with NORECOVERY.  

```
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@with_norecovery=1;
```

```
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn';
```

**Example of restore with block size**  

```
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@block_size=512;
```

**Example of log restore with RECOVERY**  

```
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@with_norecovery=0;
```

**Example of log restore with STOPAT clause**  

```
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@with_norecovery=0,
@stopat='2019-12-01 03:57:09';
```

**Example of log restore with KEEP_CDC**  

```
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@keep_cdc=1;
```

## Finishing a database restore
<a name="SQLServer.Procedural.Importing.Native.Finish.Restore"></a>

If the last restore task on the database was performed using `@with_norecovery=1`, the database is now in the RESTORING state. Open this database for normal operation by using the `rds_finish_restore` stored procedure.

### Usage
<a name="SQLServer.Procedural.Importing.Native.Finish.Restore.Syntax"></a>

```
exec msdb.dbo.rds_finish_restore @db_name='database_name';
```

**Note**  
To use this approach, the database must be in the RESTORING state without any pending restore tasks.  
To finish restoring the database, use the master login. Or use the user login that most recently restored the database or log with NORECOVERY.
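
As an illustration, the following sequence applies a full backup and then a log backup, both with NORECOVERY, and finally brings the database online. The file names are placeholders, and each task must complete before you submit the next one (you can check progress with `rds_task_status`).

```
-- Restore the full backup, leaving the database in the RESTORING state
exec msdb.dbo.rds_restore_database
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/backup1.bak',
@with_norecovery=1;

-- After the previous task completes, apply a log backup, still with NORECOVERY
exec msdb.dbo.rds_restore_log
@restore_db_name='mydatabase',
@s3_arn_to_restore_from='arn:aws:s3:::mybucket/mylog.trn',
@with_norecovery=1;

-- Finally, bring the database online
exec msdb.dbo.rds_finish_restore @db_name='mydatabase';
```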

## Working with partially restored databases
<a name="SQLServer.Procedural.Importing.Native.Partially.Restored"></a>

### Dropping a partially restored database
<a name="SQLServer.Procedural.Importing.Native.Drop.Partially.Restored"></a>

To drop a partially restored database (left in the RESTORING state), use the `rds_drop_database` stored procedure.

```
exec msdb.dbo.rds_drop_database @db_name='database_name';
```

**Note**  
You can't submit a DROP database request for a database that already has a pending restore or finish restore task.  
To drop the database, use the master login. Or use the user login that most recently restored the database or log with NORECOVERY.

### Snapshot restore and point-in-time recovery behavior for partially restored databases
<a name="SQLServer.Procedural.Importing.Native.Snapshot.Restore"></a>

Partially restored databases in the source instance (left in the RESTORING state) are dropped from the target instance during snapshot restore and point-in-time recovery.

## Canceling a task
<a name="SQLServer.Procedural.Importing.Native.Using.Cancel"></a>

To cancel a backup or restore task, call the `rds_cancel_task` stored procedure.

**Note**  
You can't cancel a FINISH_RESTORE task.

### Usage
<a name="SQLServer.Procedural.Importing.Native.Cancel.Syntax"></a>

```
exec msdb.dbo.rds_cancel_task @task_id=ID_number;
```

The following parameter is required:
+ `@task_id` – The ID of the task to cancel. You can get the task ID by calling `rds_task_status`. 
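
For example, you might first look up the ID of the in-progress task for a database and then cancel it. The task ID `5` here is illustrative; use the ID returned for your own task.

```
-- Find the task ID for the database's pending task
exec msdb.dbo.rds_task_status @db_name='mydatabase';

-- Cancel that task
exec msdb.dbo.rds_cancel_task @task_id=5;
```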

## Tracking the status of tasks
<a name="SQLServer.Procedural.Importing.Native.Tracking"></a>

To track the status of your backup and restore tasks, call the `rds_task_status` stored procedure. If you don't provide any parameters, the stored procedure returns the status of all tasks. The status for tasks is updated approximately every two minutes. Task history is retained for 36 days.

### Usage
<a name="SQLServer.Procedural.Importing.Native.Tracking.Syntax"></a>

```
exec msdb.dbo.rds_task_status
	[@db_name='database_name'],
	[@task_id=ID_number];
```

The following parameters are optional: 
+ `@db_name` – The name of the database to show the task status for.
+ `@task_id` – The ID of the task to show the task status for.

### Examples
<a name="SQLServer.Procedural.Importing.Native.Tracking.Examples"></a>

**Example of listing the status for a specific task**  

```
exec msdb.dbo.rds_task_status @task_id=5;
```

**Example of listing the status for a specific database and task**  

```
exec msdb.dbo.rds_task_status
@db_name='my_database',
@task_id=5;
```

**Example of listing all tasks and their statuses on a specific database**  

```
exec msdb.dbo.rds_task_status @db_name='my_database';
```

**Example of listing all tasks and their statuses on the current instance**  

```
exec msdb.dbo.rds_task_status;
```

### Response
<a name="SQLServer.Procedural.Importing.Native.Tracking.Response"></a>

The `rds_task_status` stored procedure returns the following columns.


****  

| Column | Description | 
| --- | --- | 
| `task_id` |  The ID of the task.   | 
| `task_type` |  Task type depending on the input parameters, as follows: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.Native.Using.html) Amazon RDS creates an initial snapshot of the database after it is open on completion of the following restore tasks: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.Native.Using.html)  | 
| `database_name` |  The name of the database that the task is associated with.   | 
| `% complete` |  The progress of the task as a percent value.   | 
| `duration (mins)` |  The amount of time spent on the task, in minutes.   | 
| `lifecycle` |  The status of the task. The possible statuses are the following:  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.Native.Using.html)  | 
| `task_info` |  Additional information about the task.  If an error occurs while backing up or restoring a database, this column contains information about the error. For a list of possible errors, and mitigation strategies, see [Troubleshooting](SQLServer.Procedural.Importing.Native.Troubleshooting.md).   | 
| `last_updated` |  The date and time that the task status was last updated. The status is updated after every 5 percent of progress.  | 
| `created_at` | The date and time that the task was created. | 
| `S3_object_arn` | The ARN indicating the Amazon S3 prefix and the name of the file that is being backed up or restored. | 
| `overwrite_s3_backup_file` |  The value of the `@overwrite_s3_backup_file` parameter specified when calling a backup task. For more information, see [Backing up a database](#SQLServer.Procedural.Importing.Native.Using.Backup).  | 
| `KMS_master_key_arn` | The ARN of the KMS key used for encryption (for backup) and decryption (for restore). | 
| `filepath` | Not applicable to native backup and restore tasks. | 
| `overwrite_file` | Not applicable to native backup and restore tasks. | 

# Compressing backup files
<a name="SQLServer.Procedural.Importing.Native.Compression"></a>

To save space in your Amazon S3 bucket, you can compress your backup files. For more information about compressing backup files, see [Backup compression](https://msdn.microsoft.com/en-us/library/bb964719.aspx) in the Microsoft documentation. 

Compressing your backup files is supported for the following database editions: 
+ Microsoft SQL Server Enterprise Edition 
+ Microsoft SQL Server Standard Edition 

To verify the compression option for your backup files, run the following code:

```
exec rdsadmin.dbo.rds_show_configuration 'S3 backup compression';
```

To turn on compression for your backup files, run the following code:

```
exec rdsadmin.dbo.rds_set_configuration 'S3 backup compression', 'true';
```

To turn off compression for your backup files, run the following code: 

```
exec rdsadmin.dbo.rds_set_configuration 'S3 backup compression', 'false';
```

# Troubleshooting
<a name="SQLServer.Procedural.Importing.Native.Troubleshooting"></a>

The following are issues you might encounter when you use native backup and restore.


****  

| Issue | Troubleshooting suggestions | 
| --- | --- | 
|  Database backup/restore option is not enabled yet or is in the process of being enabled. Please try again later.  |  Make sure that you have added the `SQLSERVER_BACKUP_RESTORE` option to the DB option group associated with your DB instance. For more information, see [Adding the native backup and restore option](Appendix.SQLServer.Options.BackupRestore.md#Appendix.SQLServer.Options.BackupRestore.Add).  | 
|  The EXECUTE permission was denied on the object '*rds\_backup\_database*', database 'msdb', schema 'dbo'.  |  Make sure that you are using the master user when executing the stored procedure. If you encounter this error even after logging in as the master user, the master user's permissions might be misaligned. To reset the master user, use the AWS Management Console. See [Resetting the db\_owner role membership for master user for Amazon RDS for SQL Server](Appendix.SQLServer.CommonDBATasks.ResetPassword.md).   | 
|  The EXECUTE permission was denied on the object '*rds\_restore\_database*', database 'msdb', schema 'dbo'.  |  Make sure that you are using the master user when executing the stored procedure. If you encounter this error even after logging in as the master user, the master user's permissions might be misaligned. To reset the master user, use the AWS Management Console. See [Resetting the db\_owner role membership for master user for Amazon RDS for SQL Server](Appendix.SQLServer.CommonDBATasks.ResetPassword.md).   | 
|  Access Denied  | The backup or restore process can't access the backup file. This is usually caused by issues like the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.Native.Troubleshooting.html)  | 
|  BACKUP DATABASE WITH COMPRESSION isn't supported on <edition\_name> Edition  |  Compressing your backup files is only supported for Microsoft SQL Server Enterprise Edition and Standard Edition. For more information, see [Compressing backup files](SQLServer.Procedural.Importing.Native.Compression.md).   | 
|  Key <ARN> does not exist  |  You attempted to restore an encrypted backup, but didn't provide a valid encryption key. Check your encryption key and retry. For more information, see [Restoring a database](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Using.Restore).   | 
|  Please reissue task with correct type and overwrite property  |  If you attempt to back up your database and provide the name of a file that already exists, but set the overwrite property to false, the save operation fails. To fix this error, either provide the name of a file that doesn't already exist, or set the overwrite property to true. For more information, see [Backing up a database](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Using.Backup). It's also possible that you intended to restore your database, but called the `rds_backup_database` stored procedure accidentally. In that case, call the `rds_restore_database` stored procedure instead. For more information, see [Restoring a database](SQLServer.Procedural.Importing.Native.Using.md#SQLServer.Procedural.Importing.Native.Using.Restore). If you intended to restore your database and called the `rds_restore_database` stored procedure, make sure that you provided the name of a valid backup file. For more information, see [Using native backup and restore](SQLServer.Procedural.Importing.Native.Using.md).  | 
|  Please specify a bucket that is in the same region as RDS instance  |  You can't back up to, or restore from, an Amazon S3 bucket in a different AWS Region from your Amazon RDS DB instance. You can use Amazon S3 replication to copy the backup file to the correct AWS Region. For more information, see [Cross-Region replication](https://docs.aws.amazon.com/AmazonS3/latest/userguide/crr.html) in the Amazon S3 documentation.  | 
|  The specified bucket does not exist  | Verify that you have provided the correct ARN for your bucket and file, in the correct format.  For more information, see [Using native backup and restore](SQLServer.Procedural.Importing.Native.Using.md).  | 
|  User <ARN> is not authorized to perform <kms action> on resource <ARN>  |  You requested an encrypted operation, but didn't provide correct AWS KMS permissions. Verify that you have the correct permissions, or add them.  For more information, see [Setting up for native backup and restore](SQLServer.Procedural.Importing.Native.Enabling.md).  | 
|  The Restore task is unable to restore from more than 10 backup file(s). Please reduce the number of files matched and try again.  |  Reduce the number of files that you're trying to restore from. You can make each individual file larger if necessary.  | 
|  Database '*database\_name*' already exists. Two databases that differ only by case or accent are not allowed. Choose a different database name.  |  You can't restore a database with the same name as an existing database. Database names are unique.  | 

# Importing and exporting SQL Server data using other methods
<a name="SQLServer.Procedural.Importing.Snapshots"></a>

Following, you can find information about using snapshots to import your Microsoft SQL Server data to Amazon RDS. You can also find information about using snapshots to export your data from an RDS DB instance running SQL Server. 

If your scenario supports it, it's easier to move data in and out of Amazon RDS by using the native backup and restore functionality. For more information, see [Importing and exporting SQL Server databases using native backup and restore](SQLServer.Procedural.Importing.md). 

**Note**  
Amazon RDS for Microsoft SQL Server doesn't support importing data into the `msdb` database. 

## Importing data into RDS for SQL Server by using a snapshot
<a name="SQLServer.Procedural.Importing.Procedure"></a>

**To import data into a SQL Server DB instance by using a snapshot**

1. Create a DB instance. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).

1. Stop applications from accessing the destination DB instance. 

   If you prevent access to your DB instance while you are importing data, data transfer is faster. Additionally, you don't need to worry about conflicts while data is being loaded if other applications cannot write to the DB instance at the same time. If something goes wrong and you have to roll back to an earlier database snapshot, the only changes that you lose are the imported data. You can import this data again after you resolve the issue. 

   For information about controlling access to your DB instance, see [Controlling access with security groups](Overview.RDSSecurityGroups.md). 

1. Create a snapshot of the target database. 

   If the target database is already populated with data, we recommend that you take a snapshot of the database before you import the data. If something goes wrong with the data import or you want to discard the changes, you can restore the database to its previous state by using the snapshot. For information about database snapshots, see [Creating a DB snapshot for a Single-AZ DB instance for Amazon RDS](USER_CreateSnapshot.md). 
**Note**  
When you take a database snapshot, I/O operations to the database are suspended for a moment (milliseconds) while the backup is in progress. 

1. Disable automated backups on the target database. 

   Disabling automated backups on the target DB instance improves performance while you are importing your data because Amazon RDS doesn't log transactions when automatic backups are disabled. However, there are some things to consider. Automated backups are required to perform a point-in-time recovery. Thus, you can't restore the database to a specific point in time while you are importing data. Additionally, any automated backups that were created on the DB instance are erased unless you choose to retain them. 

   Choosing to retain the automated backups can help protect you against accidental deletion of data. Amazon RDS also saves the database instance properties along with each automated backup to make it easy to recover. Using this option lets you restore a deleted database instance to a specified point in time within the backup retention period even after deleting it. Automated backups are automatically deleted at the end of the specified backup window, just as they are for an active database instance. 

   You can also use previous snapshots to recover the database, and any snapshots that you have taken remain available. For information about automated backups, see [Introduction to backups](USER_WorkingWithAutomatedBackups.md). 

1. Disable foreign key constraints, if applicable. 

    If you need to disable foreign key constraints, you can do so with the following script. 

   ```
   --Disable foreign keys on all tables
       DECLARE @table_name SYSNAME;
       DECLARE @cmd NVARCHAR(MAX);
       DECLARE table_cursor CURSOR FOR SELECT name FROM sys.tables;
       
       OPEN table_cursor;
       FETCH NEXT FROM table_cursor INTO @table_name;
       
       WHILE @@FETCH_STATUS = 0 BEGIN
         SELECT @cmd = 'ALTER TABLE '+QUOTENAME(@table_name)+' NOCHECK CONSTRAINT ALL';
         EXEC (@cmd);
         FETCH NEXT FROM table_cursor INTO @table_name;
       END
       
       CLOSE table_cursor;
       DEALLOCATE table_cursor;
       
       GO
   ```

1. Drop indexes, if applicable. 

1. Disable triggers, if applicable. 

    If you need to disable triggers, you can do so with the following script. 

   ```
   --Disable triggers on all tables
       DECLARE @enable BIT = 0;
       DECLARE @trigger SYSNAME;
       DECLARE @table SYSNAME;
       DECLARE @cmd NVARCHAR(MAX);
       DECLARE trigger_cursor CURSOR FOR SELECT trigger_object.name trigger_name,
        table_object.name table_name
       FROM sysobjects trigger_object
       JOIN sysobjects table_object ON trigger_object.parent_obj = table_object.id
       WHERE trigger_object.type = 'TR';
       
       OPEN trigger_cursor;
       FETCH NEXT FROM trigger_cursor INTO @trigger, @table;
       
       WHILE @@FETCH_STATUS = 0 BEGIN
         IF @enable = 1
            SET @cmd = 'ENABLE ';
         ELSE
            SET @cmd = 'DISABLE ';
       
         SET @cmd = @cmd + ' TRIGGER dbo.'+QUOTENAME(@trigger)+' ON dbo.'+QUOTENAME(@table)+' ';
         EXEC (@cmd);
         FETCH NEXT FROM trigger_cursor INTO @trigger, @table;
       END
       
       CLOSE trigger_cursor;
       DEALLOCATE trigger_cursor;
       
       GO
   ```

1. Query the source SQL Server instance for any logins that you want to import to the destination DB instance. 

   SQL Server stores logins and passwords in the `master` database. Because Amazon RDS doesn't grant access to the `master` database, you cannot directly import logins and passwords into your destination DB instance. Instead, you must query the `master` database on the source SQL Server instance to generate a data definition language (DDL) file. This file should include all logins and passwords that you want to add to the destination DB instance. This file also should include role memberships and permissions that you want to transfer. 

   For information about querying the `master` database, see [ Transfer logins and passwords between instances of SQL Server](https://learn.microsoft.com/en-us/troubleshoot/sql/database-engine/security/transfer-logins-passwords-between-instances) in the Microsoft Knowledge Base.

   The output of the script is another script that you can run on the destination DB instance. The script in the Knowledge Base article has the following code: 

   ```
   p.type IN 
   ```

   Every place `p.type` appears, use the following code instead: 

   ```
   p.type = 'S' 
   ```
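
For illustration only, the change might look like the following in the script's `WHERE` clause. The exact list of types in the original predicate varies by script version.

```
-- Before (illustrative): scripts SQL Server and Windows logins
-- WHERE p.type IN ( 'S', 'G', 'U' )

-- After: scripts only SQL Server authentication logins
WHERE p.type = 'S'
```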

1. Import the data using the method in [Import the data](#ImportData.SQLServer.Import). 

1. Grant applications access to the target DB instance. 

   When your data import is complete, you can grant access to the DB instance to those applications that you blocked during the import. For information about controlling access to your DB instance, see [Controlling access with security groups](Overview.RDSSecurityGroups.md). 

1. Enable automated backups on the target DB instance. 

   For information about automated backups, see [Introduction to backups](USER_WorkingWithAutomatedBackups.md). 

1. Enable foreign key constraints. 

    If you disabled foreign key constraints earlier, you can now enable them with the following script. 

   ```
   --Enable foreign keys on all tables
       DECLARE @table_name SYSNAME;
       DECLARE @cmd NVARCHAR(MAX);
       DECLARE table_cursor CURSOR FOR SELECT name FROM sys.tables;
       
       OPEN table_cursor;
       FETCH NEXT FROM table_cursor INTO @table_name;
       
       WHILE @@FETCH_STATUS = 0 BEGIN
         SELECT @cmd = 'ALTER TABLE '+QUOTENAME(@table_name)+' CHECK CONSTRAINT ALL';
         EXEC (@cmd);
         FETCH NEXT FROM table_cursor INTO @table_name;
       END
       
       CLOSE table_cursor;
       DEALLOCATE table_cursor;
   ```

1. Enable indexes, if applicable.

1. Enable triggers, if applicable.

    If you disabled triggers earlier, you can now enable them with the following script. 

   ```
   --Enable triggers on all tables
       DECLARE @enable BIT = 1;
       DECLARE @trigger SYSNAME;
       DECLARE @table SYSNAME;
       DECLARE @cmd NVARCHAR(MAX);
       DECLARE trigger_cursor CURSOR FOR SELECT trigger_object.name trigger_name,
        table_object.name table_name
       FROM sysobjects trigger_object
       JOIN sysobjects table_object ON trigger_object.parent_obj = table_object.id
       WHERE trigger_object.type = 'TR';
       
       OPEN trigger_cursor;
       FETCH NEXT FROM trigger_cursor INTO @trigger, @table;
       
       WHILE @@FETCH_STATUS = 0 BEGIN
         IF @enable = 1
            SET @cmd = 'ENABLE ';
         ELSE
            SET @cmd = 'DISABLE ';
       
         SET @cmd = @cmd + ' TRIGGER dbo.'+QUOTENAME(@trigger)+' ON dbo.'+QUOTENAME(@table)+' ';
         EXEC (@cmd);
         FETCH NEXT FROM trigger_cursor INTO @trigger, @table;
       END
       
       CLOSE trigger_cursor;
       DEALLOCATE trigger_cursor;
   ```

### Import the data
<a name="ImportData.SQLServer.Import"></a>

Microsoft SQL Server Management Studio is a graphical SQL Server client that is included in all Microsoft SQL Server editions except the Express Edition. SQL Server Management Studio Express is available from Microsoft as a free download. To find this download, see [the Microsoft website](https://www.microsoft.com/en-us/download). 

**Note**  
SQL Server Management Studio is available only as a Windows-based application.

SQL Server Management Studio includes the following tools, which are useful in importing data to a SQL Server DB instance: 
+ Generate and Publish Scripts Wizard
+ Import and Export Wizard
+ Bulk copy

#### Generate and Publish Scripts Wizard
<a name="ImportData.SQLServer.MgmtStudio.ScriptWizard"></a>

The Generate and Publish Scripts Wizard creates a script that contains the schema of a database, the data itself, or both. You can generate a script for a database in your local SQL Server deployment. You can then run the script to transfer the information that it contains to an Amazon RDS DB instance. 

**Note**  
For databases of 1 GiB or larger, it's more efficient to script only the database schema. You then use the Import and Export Wizard or the bulk copy feature of SQL Server to transfer the data.

For detailed information about the Generate and Publish Scripts Wizard, see the [Microsoft SQL Server documentation](http://msdn.microsoft.com/en-us/library/ms178078%28v=sql.105%29.aspx). 

In the wizard, pay particular attention to the advanced options on the **Set Scripting Options** page to ensure that everything you want your script to include is selected. For example, by default, database triggers are not included in the script.

When the script is generated and saved, you can use SQL Server Management Studio to connect to your DB instance and then run the script.

#### Import and Export Wizard
<a name="ImportData.SQLServer.MgmtStudio.ImportExportWizard"></a>

The Import and Export Wizard creates a special Integration Services package, which you can use to copy data from your local SQL Server database to the destination DB instance. The wizard can filter which tables and even which tuples within a table are copied to the destination DB instance.

**Note**  
The Import and Export Wizard works well for large datasets, but it might not be the fastest way to remotely export data from your local deployment. For an even faster way, consider the SQL Server bulk copy feature.

For detailed information about the Import and Export Wizard, see the [ Microsoft SQL Server documentation](http://msdn.microsoft.com/en-us/library/ms140052%28v=sql.105%29.aspx).

In the wizard, on the **Choose a Destination** page, do the following:
+ For **Server Name**, type the name of the endpoint for your DB instance.
+ For the server authentication mode, choose **Use SQL Server Authentication**.
+ For **User name** and **Password**, type the credentials for the master user that you created for the DB instance.

#### Bulk copy
<a name="ImportData.SQLServer.MgmtStudio.BulkCopy"></a>

The SQL Server bulk copy feature is an efficient means of copying data from a source database to your DB instance. Bulk copy writes the data that you specify to a data file, such as an ASCII file. You can then run bulk copy again to write the contents of the file to the destination DB instance. 

This section uses the **bcp** utility, which is included with all editions of SQL Server. For detailed information about bulk import and export operations, see [the Microsoft SQL Server documentation](http://msdn.microsoft.com/en-us/library/ms187042%28v=sql.105%29.aspx). 

**Note**  
Before you use bulk copy, you must first import your database schema to the destination DB instance. The Generate and Publish Scripts Wizard, described earlier in this topic, is an excellent tool for this purpose. 

The following command connects to the local SQL Server instance. It generates a tab-delimited file of a specified table in the C:\ root directory of your existing SQL Server deployment. The table is specified by its fully qualified name, and the text file has the same name as the table that is being copied. 

```
bcp dbname.schema_name.table_name out C:\table_name.txt -n -S localhost -U username -P password -b 10000 
```

The preceding code includes the following options:
+ `-n` specifies that the bulk copy uses the native data types of the data to be copied.
+ `-S` specifies the SQL Server instance that the *bcp* utility connects to.
+ `-U` specifies the user name of the account to log in to the SQL Server instance.
+ `-P` specifies the password for the user specified by `-U`.
+ `-b` specifies the number of rows per batch of imported data.

**Note**  
There might be other parameters that are important to your import situation. For example, you might need the `-E` parameter that pertains to identity values. For more information, see the full description of the command line syntax for the **bcp** utility in [the Microsoft SQL Server documentation](http://msdn.microsoft.com/en-us/library/ms162802%28v=sql.105%29.aspx). 

For example, suppose that a database named `store` that uses the default schema, `dbo`, contains a table named `customers`. The user account `admin`, with the password `insecure`, copies the `customers` table to a file named `customers.txt`, processing 10,000 rows per batch. 

```
bcp store.dbo.customers out C:\customers.txt -n -S localhost -U admin -P insecure -b 10000 
```

After you generate the data file, you can upload the data to your DB instance by using a similar command. Beforehand, create the database and schema on the target DB instance. Then use the `in` argument to specify an input file instead of `out` to specify an output file. Instead of using localhost to specify the local SQL Server instance, specify the endpoint of your DB instance. If you use a port other than 1433, specify that too. The user name and password are the master user and password for your DB instance. The syntax is as follows. 

```
bcp dbname.schema_name.table_name 
					in C:\table_name.txt -n -S endpoint,port -U master_user_name -P master_user_password -b 10000
```

To continue the previous example, suppose that the master user name is `admin`, and the password is `insecure`. The endpoint for the DB instance is `rds.ckz2kqd4qsn1.us-east-1.rds.amazonaws.com`, and you use port 4080. The command is as follows. 

```
bcp store.dbo.customers in C:\customers.txt -n -S rds.ckz2kqd4qsn1.us-east-1.rds.amazonaws.com,4080 -U admin -P insecure -b 10000 
```

**Note**  
Specify a password other than the prompt shown here as a security best practice.

## Exporting data from RDS for SQL Server
<a name="SQLServer.Procedural.Exporting"></a>

You can choose one of the following options to export data from an RDS for SQL Server DB instance:
+ **Native database backup using a full backup file (.bak)** – Using .bak files to back up databases is heavily optimized, and is usually the fastest way to export data. For more information, see [Importing and exporting SQL Server databases using native backup and restore](SQLServer.Procedural.Importing.md). 
+ **SQL Server Import and Export Wizard** – For more information, see [SQL Server Import and Export Wizard](#SQLServer.Procedural.Exporting.SSIEW). 
+ **SQL Server Generate and Publish Scripts Wizard and bcp utility** – For more information, see [SQL Server Generate and Publish Scripts Wizard and bcp utility](#SQLServer.Procedural.Exporting.SSGPSW). 

### SQL Server Import and Export Wizard
<a name="SQLServer.Procedural.Exporting.SSIEW"></a>

You can use the SQL Server Import and Export Wizard to copy one or more tables, views, or queries from your RDS for SQL Server DB instance to another data store. This choice is best if the target data store is not SQL Server. For more information, see [ SQL Server Import and Export Wizard](http://msdn.microsoft.com/en-us/library/ms141209%28v=sql.110%29.aspx) in the SQL Server documentation. 

The SQL Server Import and Export Wizard is available as part of Microsoft SQL Server Management Studio. This graphical SQL Server client is included in all Microsoft SQL Server editions except the Express Edition. SQL Server Management Studio is available only as a Windows-based application. SQL Server Management Studio Express is available from Microsoft as a free download. To find this download, see [the Microsoft website](http://www.microsoft.com/en-us/search/Results.aspx?q=sql%20server%20management%20studio). 

**To use the SQL Server Import and Export Wizard to export data**

1. In SQL Server Management Studio, connect to your RDS for SQL Server DB instance. For details on how to do this, see [Connecting to your Microsoft SQL Server DB instance](USER_ConnectToMicrosoftSQLServerInstance.md). 

1. In **Object Explorer**, expand **Databases**, open the context (right-click) menu for the source database, choose **Tasks**, and then choose **Export Data**. The wizard appears. 

1. On the **Choose a Data Source** page, do the following:

   1. For **Data source**, choose **SQL Server Native Client 11.0**. 

   1. Verify that the **Server name** box shows the endpoint of your RDS for SQL Server DB instance.

   1. Select **Use SQL Server Authentication**. For **User name** and **Password**, type the master user name and password of your DB instance.

   1. Verify that the **Database** box shows the database from which you want to export data.

   1. Choose **Next**.

1. On the **Choose a Destination** page, do the following:

   1. For **Destination**, choose **SQL Server Native Client 11.0**. 
**Note**  
Other target data sources are available. These include .NET Framework data providers, OLE DB providers, SQL Server Native Client providers, ADO.NET providers, Microsoft Office Excel, Microsoft Office Access, and the Flat File source. If you choose to target one of these data sources, skip the remainder of step 4. For details on the connection information to provide next, see [Choose a destination](http://msdn.microsoft.com/en-us/library/ms178430%28v=sql.110%29.aspx) in the SQL Server documentation. 

   1. For **Server name**, type the server name of the target SQL Server DB instance. 

   1. Choose the appropriate authentication type. Type a user name and password if necessary. 

   1. For **Database**, choose the name of the target database, or choose **New** to create a new database to contain the exported data. 

      If you choose **New**, see [Create database](http://msdn.microsoft.com/en-us/library/ms183323%28v=sql.110%29.aspx) in the SQL Server documentation for details on the database information to provide.

   1. Choose **Next**.

1. On the **Table Copy or Query** page, choose **Copy data from one or more tables or views** or **Write a query to specify the data to transfer**. Choose **Next**. 

1. If you chose **Write a query to specify the data to transfer**, you see the **Provide a Source Query** page. Type or paste in a SQL query, and then choose **Parse** to verify it. Once the query validates, choose **Next**. 

1. On the **Select Source Tables and Views** page, do the following:

   1. Select the tables and views that you want to export, or verify that the query you provided is selected.

   1. Choose **Edit Mappings** and specify database and column mapping information. For more information, see [Column mappings](http://msdn.microsoft.com/en-us/library/ms189660%28v=sql.110%29.aspx) in the SQL Server documentation. 

   1. (Optional) To see a preview of data to be exported, select the table, view, or query, and then choose **Preview**.

   1. Choose **Next**.

1. On the **Run Package** page, verify that **Run immediately** is selected. Choose **Next**. 

1. On the **Complete the Wizard** page, verify that the data export details are as you expect. Choose **Finish**. 

1. On the **The execution was successful** page, choose **Close**. 

### SQL Server Generate and Publish Scripts Wizard and bcp utility
<a name="SQLServer.Procedural.Exporting.SSGPSW"></a>

You can use the SQL Server Generate and Publish Scripts Wizard to create scripts for an entire database or just selected objects. You can run these scripts on a target SQL Server DB instance to recreate the scripted objects. You can then use the bcp utility to bulk export the data for the selected objects to the target DB instance. This choice is best if you want to move a whole database (including objects other than tables) or large quantities of data between two SQL Server DB instances. For a full description of the bcp command-line syntax, see [bcp utility](http://msdn.microsoft.com/en-us/library/ms162802%28v=sql.110%29.aspx) in the Microsoft SQL Server documentation. 

The SQL Server Generate and Publish Scripts Wizard is available as part of Microsoft SQL Server Management Studio. This graphical SQL Server client is included in all Microsoft SQL Server editions except the Express Edition. SQL Server Management Studio is available only as a Windows-based application. SQL Server Management Studio Express is available from Microsoft as a [free download](http://www.microsoft.com/en-us/search/Results.aspx?q=sql%20server%20management%20studio). 

**To use the SQL Server Generate and Publish Scripts Wizard and the bcp utility to export data**

1. In SQL Server Management Studio, connect to your RDS for SQL Server DB instance. For details on how to do this, see [Connecting to your Microsoft SQL Server DB instance](USER_ConnectToMicrosoftSQLServerInstance.md). 

1. In **Object Explorer**, expand the **Databases** node and select the database you want to script. 

1. Follow the instructions in [Generate and publish scripts Wizard](http://msdn.microsoft.com/en-us/library/bb895179%28v=sql.110%29.aspx) in the SQL Server documentation to create a script file.

1. In SQL Server Management Studio, connect to your target SQL Server DB instance.

1. With the target SQL Server DB instance selected in **Object Explorer**, on the **File** menu, choose **Open**, choose **File**, and then open the script file. 

1. If you have scripted the entire database, review the CREATE DATABASE statement in the script. Make sure that the database is being created in the location and with the parameters that you want. For more information, see [CREATE DATABASE](http://msdn.microsoft.com/en-us/library/ms176061%28v=sql.110%29.aspx) in the SQL Server documentation. 

1. If you are creating database users in the script, check to see if server logins exist on the target DB instance for those users. If not, create logins for those users; the scripted commands to create the database users fail otherwise. For more information, see [Create a login](http://msdn.microsoft.com/en-us/library/aa337562%28v=sql.110%29.aspx) in the SQL Server documentation.

1. Choose **Execute** on the SQL Editor menu to run the script file and create the database objects. When the script finishes, verify that all database objects exist as expected.

1. Use the bcp utility to export data from the RDS for SQL Server DB instance into files. Open a command prompt and type the following command.

   ```
   bcp database_name.schema_name.table_name out data_file -n -S aws_rds_sql_endpoint -U username -P password
   ```

   The preceding code includes the following options:
   + *table\_name* is the name of one of the tables that you've recreated in the target database and now want to populate with data. 
   + *data\_file* is the full path and name of the data file to be created.
   + `-n` specifies that the bulk copy uses the native data types of the data to be copied.
   + `-S` specifies the SQL Server DB instance to export from.
   + `-U` specifies the user name to use when connecting to the SQL Server DB instance.
   + `-P` specifies the password for the user specified by `-U`.

   The following shows an example command. 

   ```
   bcp world.dbo.city out C:\Users\JohnDoe\city.dat -n -S sql-jdoe.1234abcd.us-west-2.rds.amazonaws.com,1433 -U JohnDoe -P ClearTextPassword
   ```

   Repeat this step until you have data files for all of the tables you want to export. 
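   Rather than repeating the command by hand, you can script the exports. The following sketch builds one **bcp** export command per table and prints the commands for review instead of running them; the table list, endpoint, and credentials are hypothetical stand-ins.

   ```
   #!/bin/sh
   # Sketch: export several tables in one pass. The table list, endpoint, and
   # credentials below are hypothetical stand-ins; replace them with your own.
   TABLES="city country countrylanguage"
   ENDPOINT="sql-jdoe.1234abcd.us-west-2.rds.amazonaws.com,1433"
   CMDS=""
   for T in $TABLES; do
     # Build one bcp export command per table and collect it; printed, not run.
     CMDS="$CMDS bcp world.dbo.$T out /tmp/$T.dat -n -S $ENDPOINT -U JohnDoe -P ClearTextPassword ;"
   done
   echo "$CMDS"
   ```

   Keeping the `echo` lets you inspect the generated commands first; piping the printed output to `sh` would run them.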

1. Prepare your target DB instance for bulk import of data by following the instructions at [Basic guidelines for bulk importing data](http://msdn.microsoft.com/en-us/library/ms189989%28v=sql.110%29.aspx) in the SQL Server documentation. 

1. Decide on a bulk import method to use after considering performance and other concerns discussed in [About bulk import and bulk export operations](http://msdn.microsoft.com/en-us/library/ms187042%28v=sql.105%29.aspx) in the SQL Server documentation. 

1. Bulk import the data from the data files that you created using the bcp utility. To do so, follow the instructions at either [Import and export bulk data by using the bcp utility](http://msdn.microsoft.com/en-us/library/aa337544%28v=sql.110%29.aspx) or [Import bulk data by using BULK INSERT or OPENROWSET(BULK...)](http://msdn.microsoft.com/en-us/library/ms175915%28v=sql.110%29.aspx) in the SQL Server documentation, depending on what you decided in step 11. 

# Using BCP utility from Linux to import and export data
<a name="SQLServer.Procedural.Importing.BCP.Linux"></a>

The BCP (Bulk Copy Program) utility provides an efficient way to transfer large amounts of data between your RDS for SQL Server DB instance and data files. You can use BCP from Linux environments to perform bulk data operations, making it useful for data migration, ETL processes, and regular data transfers.

BCP supports both importing data from files into SQL Server tables and exporting data from SQL Server tables to files. This is particularly effective for transferring structured data in various formats including delimited text files.

## Prerequisites
<a name="SQLServer.Procedural.Importing.BCP.Linux.Prerequisites"></a>

Before using BCP with your RDS for SQL Server DB instance from Linux, ensure you have the following:
+ A Linux environment with network connectivity to your RDS for SQL Server DB instance
+ Microsoft SQL Server command-line tools installed on your Linux system, including:
  + sqlcmd - SQL Server command-line query tool
  + bcp - Bulk Copy Program utility
+ Valid credentials for your RDS for SQL Server DB instance
+ Network access configured through security groups to allow connections on the SQL Server port (typically 1433)
+ Appropriate database permissions for the operations you want to perform

## Installing SQL Server command-line tools on Linux
<a name="SQLServer.Procedural.Importing.BCP.Linux.Installing"></a>

To use BCP from Linux, you need to install the Microsoft SQL Server command-line tools. For detailed installation instructions for your specific Linux distribution, see the following Microsoft documentation:
+ [Install sqlcmd and bcp, the SQL Server command-line tools, on Linux](https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-tools)
+ [bcp utility](https://docs.microsoft.com/en-us/sql/tools/bcp-utility) - Complete reference for the BCP utility

After installation, ensure the tools are available in your PATH by running:

```
bcp -v
sqlcmd -?
```

## Exporting data from RDS for SQL Server
<a name="SQLServer.Procedural.Importing.BCP.Linux.Exporting"></a>

You can use BCP to export data from your RDS for SQL Server DB instance to files on your Linux system. This is useful for creating backups, data analysis, or preparing data for migration.

### Basic export syntax
<a name="SQLServer.Procedural.Importing.BCP.Linux.Exporting.Basic"></a>

The basic syntax for exporting data using BCP is:

```
bcp database.schema.table out output_file -S server_name -U username -P password [options]
```

Where:
+ `database.schema.table` - The fully qualified table name
+ `output_file` - The path and name of the output file
+ `server_name` - Your RDS for SQL Server endpoint
+ `username` - Your database username
+ `password` - Your database password

### Export example
<a name="SQLServer.Procedural.Importing.BCP.Linux.Exporting.Example"></a>

The following example exports data from a table named `customers` in the `sales` database:

```
bcp sales.dbo.customers out /home/user/customers.txt \
    -S mydb.cluster-abc123.us-east-1.rds.amazonaws.com \
    -U admin \
    -P mypassword \
    -c \
    -t "|" \
    -r "\n"
```

This command:
+ Exports data from the `customers` table
+ Saves the output to `/home/user/customers.txt`
+ Uses character format (`-c`)
+ Uses the pipe character (`|`) as the field delimiter (`-t "|"`)
+ Uses newline as the row delimiter (`-r "\n"`)
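After an export like the one above, a quick row count and delimiter check catches formatting problems early. In this sketch, a small stand-in file takes the place of the real exported file:

```
#!/bin/sh
# Stand-in for the exported file; substitute the real path from your export.
FILE=/tmp/customers.txt
printf '1|Alice|Seattle\n2|Bob|Boston\n' > "$FILE"

ROWS=$(wc -l < "$FILE")          # one exported row per line
echo "rows exported: $ROWS"
grep -q '|' "$FILE" && echo "pipe-delimited: yes"   # delimiter sanity check
```

Comparing the row count against `SELECT COUNT(*)` on the source table is a simple end-to-end check.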

## Importing data to RDS for SQL Server
<a name="SQLServer.Procedural.Importing.BCP.Linux.Importing"></a>

You can use BCP to import data from files on your Linux system into your RDS for SQL Server DB instance. This is useful for data migration, loading test data, or regular data updates.

### Basic import syntax
<a name="SQLServer.Procedural.Importing.BCP.Linux.Importing.Basic"></a>

The basic syntax for importing data using BCP is:

```
bcp database.schema.table in input_file -S server_name -U username -P password [options]
```

Where:
+ `database.schema.table` - The fully qualified destination table name
+ `input_file` - The path and name of the input file
+ `server_name` - Your RDS for SQL Server endpoint
+ `username` - Your database username
+ `password` - Your database password

### Import example
<a name="SQLServer.Procedural.Importing.BCP.Linux.Importing.Example"></a>

The following example imports data from a file into a table named `customers`:

```
bcp sales.dbo.customers in /home/user/customers.txt \
    -S mydb.cluster-abc123.us-east-1.rds.amazonaws.com \
    -U admin \
    -P mypassword \
    -c \
    -t "|" \
    -r "\n" \
    -b 1000
```

This command:
+ Imports data into the `customers` table
+ Reads data from `/home/user/customers.txt`
+ Uses character format (`-c`)
+ Uses the pipe character (`|`) as the field delimiter (`-t "|"`)
+ Uses newline as the row delimiter (`-r "\n"`)
+ Processes data in batches of 1000 rows (`-b 1000`)

## Common BCP options
<a name="SQLServer.Procedural.Importing.BCP.Linux.Options"></a>

BCP provides numerous options to control data formatting and transfer behavior. The following table describes commonly used options:


| Option | Description | 
| --- | --- | 
| -c | Uses character data type for all columns | 
| -n | Uses native database data types | 
| -t | Specifies the field delimiter (default is tab) | 
| -r | Specifies the row delimiter (default is newline) | 
| -b | Specifies the batch size for bulk operations | 
| -F | Specifies the first row to export or import | 
| -L | Specifies the last row to export or import | 
| -e | Specifies an error file to capture rejected rows | 
| -f | Specifies a format file for data formatting | 
| -q | Uses quoted identifiers for object names | 
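Several of these options combine naturally. For example, the following hypothetical command would import only the first 1,000 rows of a file while capturing any rejected rows in an error file. The command is built as a string and printed here as a sketch, so you can review it before running it; the endpoint, credentials, and paths are illustrative stand-ins.

```
#!/bin/sh
# Hypothetical import: rows 1-1000 only, with an error file for rejected rows.
# Endpoint, credentials, and paths are illustrative stand-ins.
CMD='bcp sales.dbo.customers in /home/user/customers.txt -c -t "|" -F 1 -L 1000 -e /home/user/customers.err -S mydb.cluster-abc123.us-east-1.rds.amazonaws.com -U admin -P mypassword'
echo "$CMD"
```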

## Best practices and considerations
<a name="SQLServer.Procedural.Importing.BCP.Linux.BestPractices"></a>

When using BCP with RDS for SQL Server from Linux, consider the following best practices:
+ **Use batch processing** – For large datasets, use the `-b` option to process data in batches. This improves performance and allows for better error recovery.
+ **Handle errors gracefully** – Use the `-e` option to capture error information and rejected rows in a separate file for analysis.
+ **Choose appropriate data formats** – Use character format (`-c`) for cross-platform compatibility or native format (`-n`) for better performance when both source and destination are SQL Server.
+ **Secure your credentials** – Avoid putting passwords directly in command lines. Consider using environment variables or configuration files with appropriate permissions.
+ **Test with small datasets** – Before processing large amounts of data, test your BCP commands with smaller datasets to verify formatting and connectivity.
+ **Monitor network connectivity** – Ensure stable network connections, especially for large data transfers. Consider using tools like `screen` or `tmux` for long-running operations.
+ **Validate data integrity** – After data transfer, verify row counts and sample data to ensure the operation completed successfully.
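As one way to apply the credential advice above, the password can live in an environment variable rather than in the typed command line. `BCP_PASSWORD` is a variable name invented for this sketch, and the printed command masks the password; a real run would pass `"$BCP_PASSWORD"` where the mask appears.

```
#!/bin/sh
# Sketch: keep the password out of the typed command line and shell history.
# BCP_PASSWORD is a hypothetical variable name; set it in your session first.
BCP_PASSWORD="${BCP_PASSWORD:-example-only}"   # fallback keeps the sketch runnable
# The printed command shows a masked password; substitute "$BCP_PASSWORD"
# for the mask when you actually run bcp.
CMD="bcp sales.dbo.customers out /tmp/customers.txt -c -S mydb.cluster-abc123.us-east-1.rds.amazonaws.com -U admin -P ***"
echo "$CMD"
```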

## Troubleshooting common issues
<a name="SQLServer.Procedural.Importing.BCP.Linux.Troubleshooting"></a>

The following table describes common issues you might encounter when using BCP from Linux and their solutions:


| Issue | Solution | 
| --- | --- | 
| Connection timeout or network errors | Verify your Amazon RDS endpoint, security group settings, and network connectivity. Make sure the SQL Server port (typically 1433) is accessible from your Linux system. | 
| Authentication failures | Verify your username and password. Make sure the database user has appropriate permissions for the operations you're performing. | 
| Data format errors | Check your field and row delimiters. Make sure the data format matches what BCP expects. Use format files for complex data structures. | 
| Permission denied errors | Make sure your database user has INSERT permissions for imports or SELECT permissions for exports on the target tables. | 
| Large file handling issues | Use batch processing with the -b option. Consider splitting large files into smaller chunks for better performance and error recovery. | 
| Character encoding problems | Ensure your data files use compatible character encoding. Use the -c option for character format or specify appropriate code pages. | 
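For the large-file issue in the table above, the standard `split` utility breaks a data file into fixed-size chunks that you can import one at a time. A tiny stand-in file substitutes for real data in this sketch:

```
#!/bin/sh
# Split a data file into 2-line chunks; use a much larger -l value (for
# example, 100000) for real imports. The input file here is a small stand-in.
printf 'a|1\nb|2\nc|3\nd|4\n' > /tmp/bigfile.txt
split -l 2 /tmp/bigfile.txt /tmp/bigfile.part.
ls /tmp/bigfile.part.*
```

You can then run one `bcp ... in` command per chunk, which limits the amount of work lost if a batch fails.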

# Working with read replicas for Microsoft SQL Server in Amazon RDS
<a name="SQLServer.ReadReplicas"></a>

You usually use read replicas to configure replication between Amazon RDS DB instances. For general information about read replicas, see [Working with DB instance read replicas](USER_ReadRepl.md). 

In this section, you can find specific information about working with read replicas on Amazon RDS for SQL Server.
+ [Synchronizing database users and objects with a SQL Server read replica](SQLServer.ReadReplicas.ObjectSynchronization.md)
+ [Troubleshooting a SQL Server read replica problem](SQLServer.ReadReplicas.Troubleshooting.md)

## Configuring read replicas for SQL Server
<a name="SQLServer.ReadReplicas.Configuration"></a>

Before a DB instance can serve as a source instance for replication, you must enable automatic backups on the source DB instance by setting the backup retention period to a value other than 0. You can't create a read replica from a source DB instance that has automatic backups disabled.
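With the AWS CLI, enabling automatic backups might look like the following. The instance identifier and retention value are illustrative, and the command is built and printed as a sketch rather than executed, so you can review it first.

```
#!/bin/sh
# Sketch: set a nonzero backup retention period so the DB instance can serve
# as a replication source. Identifier and retention value are illustrative.
CMD="aws rds modify-db-instance --db-instance-identifier mysqlserver-source --backup-retention-period 7 --apply-immediately"
echo "$CMD"
```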

Creating a SQL Server read replica doesn't require an outage for the primary DB instance. Amazon RDS sets the necessary parameters and permissions for the source DB instance and the read replica without any service interruption. A snapshot is taken of the source DB instance, and this snapshot becomes the read replica. No outage occurs when you delete a read replica. 

You can create up to 15 read replicas from one source DB instance. For replication to operate effectively, we recommend that you configure each read replica with the same amount of compute and storage resources as the source DB instance. If you scale the source DB instance, also scale the read replicas.

The SQL Server DB engine version of the source DB instance and all of its read replicas must be the same. Amazon RDS upgrades the primary immediately after upgrading the read replicas, regardless of the maintenance window. For more information about upgrading the DB engine version, see [Upgrades of the Microsoft SQL Server DB engine](USER_UpgradeDBInstance.SQLServer.md).

For a read replica to receive and apply changes from the source, it should have sufficient compute and storage resources. If a read replica reaches compute, network, or storage resource capacity, the read replica stops receiving or applying changes from its source. You can modify the storage and CPU resources of a read replica independently from its source and other read replicas. 

For more information about how to create a read replica, see [Creating a read replica](USER_ReadRepl.Create.md).
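With the AWS CLI, creating a read replica might look like the following sketch. Both identifiers are illustrative stand-ins, and the command is printed for review rather than executed.

```
#!/bin/sh
# Sketch: create a read replica from a source DB instance. Both identifiers
# are illustrative stand-ins.
CMD="aws rds create-db-read-replica --db-instance-identifier mysqlserver-replica --source-db-instance-identifier mysqlserver-source"
echo "$CMD"
```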

## Read replica limitations with SQL Server
<a name="SQLServer.ReadReplicas.Limitations"></a>

The following limitations apply to SQL Server read replicas on Amazon RDS:
+ Read replicas are only available on the SQL Server Enterprise Edition (EE) engine.
+ Read replicas are available for SQL Server versions 2016–2022.
+ You can create up to 15 read replicas from one source DB instance. Replication might lag when your source DB instance has more than 5 read replicas.
+ Read replicas are only available for DB instances running on DB instance classes with four or more vCPUs.
+ A read replica supports up to 100 databases depending on the instance class type and availability mode. You must create databases on the source DB instance to automatically replicate them to the read replicas. You can't choose individual databases to replicate. For more information, see [Limitations for Microsoft SQL Server DB instances](CHAP_SQLServer.md#SQLServer.Concepts.General.FeatureSupport.Limits).
+ You can't drop a database from a read replica. To drop a database, drop it from the source DB instance with the `rds_drop_database` stored procedure. For more information, see [Dropping a database in an Amazon RDS for Microsoft SQL Server DB instance](Appendix.SQLServer.CommonDBATasks.DropMirrorDB.md).
+ If the source DB instance uses Transparent Data Encryption (TDE) to encrypt data, the read replica also automatically configures TDE.

  If the source DB instance uses a KMS key to encrypt data, read replicas in the same Region use the same KMS key. For cross-Region read replicas, you must specify a KMS key from the read replica's Region when creating the read replica. You can't change the KMS key for a read replica.
+ Read replicas have the same time zone and collation as the source DB instance, regardless of the Availability Zone they're created in.
+ The following aren't supported on Amazon RDS for SQL Server:
  + Backup retention of read replicas
  + Point-in-time recovery from read replicas
  + Manual snapshots of read replicas
  + Multi-AZ read replicas
  + Creating read replicas of read replicas
  + Synchronization of user logins to read replicas
+ Amazon RDS for SQL Server doesn't intervene to mitigate high replica lag between a source DB instance and its read replicas. Make sure that the source DB instance and its read replicas are sized properly, in terms of computing power and storage, to suit their operational load.
+ You can replicate between the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions, but not into or out of AWS GovCloud (US) Regions.

## Option considerations for RDS for SQL Server replicas
<a name="SQLServer.ReadReplicas.limitations.options"></a>

Before you create an RDS for SQL Server replica, consider the following requirements, restrictions, and recommendations:
+ If your SQL Server replica is in the same Region as its source DB instance, make sure that it belongs to the same option group as the source DB instance. Modifications to the source option group or source option group membership propagate to replicas. These changes are applied to the replicas immediately after they are applied to the source DB instance, regardless of the replica's maintenance window.

  For more information about option groups, see [Working with option groups](USER_WorkingWithOptionGroups.md).
+ When you create a SQL Server cross-Region replica, Amazon RDS creates a dedicated option group for it.

  You can't remove a SQL Server cross-Region replica from its dedicated option group. No other DB instances can use the dedicated option group for a SQL Server cross-Region replica.

  The following options are replicated options. To add a replicated option to a SQL Server cross-Region replica, add the option to the source DB instance's option group. The option is then also installed on all of the source DB instance's replicas.
  + `TDE`

  The following options are non-replicated options. You can add or remove non-replicated options from a dedicated option group.
  + `MSDTC`
  + `SQLSERVER_AUDIT`
  + To enable the `SQLSERVER_AUDIT` option on a cross-Region read replica, add the `SQLSERVER_AUDIT` option to both the dedicated option group of the cross-Region read replica and the source instance's option group. Adding the `SQLSERVER_AUDIT` option on the source instance of a SQL Server cross-Region read replica lets you create server-level audit objects and server-level audit specifications on each of the cross-Region read replicas of the source instance. To allow the cross-Region read replicas to upload completed audit logs to an Amazon S3 bucket, add the `SQLSERVER_AUDIT` option to the dedicated option group and configure the option settings. The Amazon S3 bucket that you use as a target for audit files must be in the same Region as the cross-Region read replica. You can modify the `SQLSERVER_AUDIT` option settings for each cross-Region read replica independently, so each one can access an Amazon S3 bucket in its respective Region.

  The following options are not supported for read replicas.
  + `SSRS`
  + `SSAS`
  + `SSIS`

  The following options are partially supported for cross-Region read replicas.
  + `SQLSERVER_BACKUP_RESTORE`
  + The source DB instance of a SQL Server cross-Region replica can have the `SQLSERVER_BACKUP_RESTORE` option, but you can't perform native restores on the source DB instance until you delete all of its cross-Region replicas. Any existing native restore tasks are canceled during the creation of a cross-Region replica. You can't add the `SQLSERVER_BACKUP_RESTORE` option to a dedicated option group.

    For more information on native backup and restore, see [Importing and exporting SQL Server databases using native backup and restore](SQLServer.Procedural.Importing.md).

  When you promote a SQL Server cross-Region read replica, the promoted replica behaves the same as other SQL Server DB instances, including the management of its options. For more information about option groups, see [Working with option groups](USER_WorkingWithOptionGroups.md).

# Synchronizing database users and objects with a SQL Server read replica
<a name="SQLServer.ReadReplicas.ObjectSynchronization"></a>

Any logins, custom server roles, SQL Server Agent jobs, or other server-level objects that exist in the primary DB instance when you create a read replica are present in the newly created read replica. However, server-level objects that are created in the primary DB instance after the read replica is created aren't automatically replicated; you must create them manually in the read replica.

Database users are automatically replicated from the primary DB instance to the read replica. Because the read replica database is in read-only mode, the security identifier (SID) of a database user can't be updated in the database. Therefore, when you create SQL logins in the read replica, make sure that the SID of each login matches the SID of the corresponding SQL login in the primary DB instance. If you don't synchronize the SIDs of the SQL logins, the logins can't access the database in the read replica. Windows Active Directory (AD) authenticated logins don't have this issue, because SQL Server obtains the SID from Active Directory.

**To synchronize a SQL login from the primary DB instance to the read replica**

1. Connect to the primary DB instance.

1. Create a new SQL login in the primary DB instance.

   ```
   USE [master]
   GO
   CREATE LOGIN TestLogin1
   WITH PASSWORD = 'REPLACE WITH PASSWORD';
   ```
**Note**  
Specify a password other than the prompt shown here as a security best practice.

1. Create a new database user for the SQL login in the database.

   ```
   USE [REPLACE WITH YOUR DB NAME]
   GO
   CREATE USER TestLogin1 FOR LOGIN TestLogin1;
   GO
   ```

1. Check the SID of the newly created SQL login in primary DB instance.

   ```
   SELECT name, sid FROM sys.server_principals WHERE name =  'TestLogin1';
   ```

1. Connect to the read replica. Create the new SQL login.

   ```
   CREATE LOGIN TestLogin1 WITH PASSWORD = 'REPLACE WITH PASSWORD', SID=REPLACE WITH sid FROM STEP #4;
   ```

**Alternatively, if you have access to the read replica database, you can fix the orphaned user as follows:**

1. Connect to the read replica.

1. Identify the orphaned users in the database.

   ```
   USE [REPLACE WITH YOUR DB NAME]
   GO
   EXEC sp_change_users_login 'Report';
   GO
   ```

1. Create a new SQL login for the orphaned database user.

   ```
   CREATE LOGIN TestLogin1 WITH PASSWORD = 'REPLACE WITH PASSWORD', SID=REPLACE WITH sid FROM STEP #2;
   ```

   Example:

   ```
   CREATE LOGIN TestLogin1 WITH PASSWORD = 'TestPa$$word#1', SID=0x1A2B3C4D5E6F708190A1B2C3D4E5F607;
   ```
**Note**  
Specify a password other than the prompt shown here as a security best practice.
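
In both procedures, the SID is selected as a varbinary value but must be supplied to `CREATE LOGIN` as a `0x`-prefixed hex literal. The following Python sketch shows one way to format a SID fetched through a database driver; `sid_to_tsql_literal` is a hypothetical helper name, and the sample SID is made up.

```python
def sid_to_tsql_literal(sid_bytes):
    # sys.server_principals.sid is varbinary; database drivers return
    # it as a bytes object. Format it as the 0x... literal that
    # CREATE LOGIN ... SID = 0x... expects.
    return "0x" + sid_bytes.hex().upper()

# A made-up 16-byte SID, the size SQL Server assigns to SQL logins:
sid = bytes.fromhex("1a2b3c4d5e6f708190a1b2c3d4e5f607")
print(sid_to_tsql_literal(sid))  # 0x1A2B3C4D5E6F708190A1B2C3D4E5F607
```

You can then interpolate the returned literal into the `CREATE LOGIN` statement that you run on the read replica.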

# Troubleshooting a SQL Server read replica problem
<a name="SQLServer.ReadReplicas.Troubleshooting"></a>

You can monitor replication lag in Amazon CloudWatch by viewing the Amazon RDS `ReplicaLag` metric. For information about replication lag time, see [Monitoring read replication](USER_ReadRepl.Monitoring.md).

If replication lag is too long, you can use the following query to get information about the lag.

```
SELECT AR.replica_server_name
     , DB_NAME (ARS.database_id) 'database_name'
     , AR.availability_mode_desc
     , ARS.synchronization_health_desc
     , ARS.last_hardened_lsn
     , ARS.last_redone_lsn
     , ARS.secondary_lag_seconds
FROM sys.dm_hadr_database_replica_states ARS
INNER JOIN sys.availability_replicas AR ON ARS.replica_id = AR.replica_id
--WHERE DB_NAME(ARS.database_id) = 'database_name'
ORDER BY AR.replica_server_name;
```
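
If you collect the query's result set programmatically, you can flag replicas whose lag exceeds a threshold. The following Python sketch assumes each row has been loaded into a dict keyed by the query's column names; the helper name and the 300-second threshold are illustrative, not RDS defaults.

```python
def replicas_over_lag_threshold(rows, max_lag_seconds=300):
    # rows: dicts with at least replica_server_name and
    # secondary_lag_seconds (which the query can return as NULL/None).
    return [
        row["replica_server_name"]
        for row in rows
        if (row["secondary_lag_seconds"] or 0) > max_lag_seconds
    ]

rows = [
    {"replica_server_name": "replica-1", "secondary_lag_seconds": 600},
    {"replica_server_name": "replica-2", "secondary_lag_seconds": 10},
    {"replica_server_name": "replica-3", "secondary_lag_seconds": None},
]
print(replicas_over_lag_threshold(rows))  # ['replica-1']
```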

# Multi-AZ deployments for Amazon RDS for Microsoft SQL Server
<a name="USER_SQLServerMultiAZ"></a>

Multi-AZ deployments provide increased availability, data durability, and fault tolerance for DB instances. In the event of planned database maintenance or unplanned service disruption, Amazon RDS automatically fails over to the up-to-date secondary DB instance. This functionality lets database operations resume quickly without manual intervention. The primary and standby instances use the same endpoint, whose physical network address transitions to the secondary replica as part of the failover process. You don't have to reconfigure your application when a failover occurs.

Amazon RDS supports Multi-AZ deployments for Microsoft SQL Server by using SQL Server Database Mirroring (DBM), Always On Availability Groups (AGs), or block level replication. Amazon RDS monitors and maintains the health of your Multi-AZ deployment. If problems occur, RDS automatically repairs unhealthy DB instances, reestablishes synchronization, and initiates failovers. Failover occurs only if the standby and primary are fully in sync. You don't have to manage anything.

When you set up SQL Server Multi-AZ, RDS automatically configures all databases on the instance to use DBM, AGs, or block level replication. Amazon RDS handles the primary, the witness, and the secondary DB instance for you when you configure DBM or AGs. For block level replication, RDS handles the primary and the secondary DB instances. Because configuration is automatic, RDS selects DBM, Always On AGs, or block level replication based on the version of SQL Server that you deploy.

Amazon RDS supports Multi-AZ with Always On AGs for the following SQL Server versions and editions:
+ SQL Server 2022:
  + Standard Edition
  + Enterprise Edition
+ SQL Server 2019:
  + Standard Edition 15.00.4073.23 and higher
  + Enterprise Edition
+ SQL Server 2017:
  + Standard Edition 14.00.3401.7 and higher
  + Enterprise Edition 14.00.3049.1 and higher
+ SQL Server 2016: Enterprise Edition 13.00.5216.0 and higher

Amazon RDS supports Multi-AZ with DBM for the following SQL Server versions and editions, except for the versions noted previously:
+ SQL Server 2019: Standard Edition 15.00.4043.16
+ SQL Server 2017: Standard and Enterprise Editions
+ SQL Server 2016: Standard and Enterprise Editions 

Amazon RDS supports Multi-AZ with block level replication for SQL Server 2022 Web Edition 16.00.4215.2 and above.

**Note**  
Only new DB instances created with 16.00.4215.2 or higher support Multi-AZ deployments with block level replication. The following restrictions apply for existing SQL Server 2022 Web Edition instances:  
For existing instances on version 16.00.4215.2, you must restore a snapshot to a new instance with the same or higher minor version to enable block level replication.
SQL Server 2022 Web instances with an older minor version can be upgraded to minor version 16.00.4215.2 or higher to enable block level replication.

You can use the following SQL query to determine whether your SQL Server DB instance is Single-AZ, Multi-AZ with DBM, or Multi-AZ with Always On AGs. This query does not apply for Multi-AZ deployments on SQL Server Web Edition.

```
SELECT CASE WHEN dm.mirroring_state_desc IS NOT NULL THEN 'Multi-AZ (Mirroring)'
    WHEN dhdrs.group_database_id IS NOT NULL THEN 'Multi-AZ (AlwaysOn)'
    ELSE 'Single-AZ'
    END 'high_availability'
FROM sys.databases sd
LEFT JOIN sys.database_mirroring dm ON sd.database_id = dm.database_id
LEFT JOIN sys.dm_hadr_database_replica_states dhdrs ON sd.database_id = dhdrs.database_id AND dhdrs.is_local = 1
WHERE DB_NAME(sd.database_id) = 'rdsadmin';
```

The output resembles the following:

```
high_availability
Multi-AZ (AlwaysOn)
```
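
If you fetch the raw `mirroring_state_desc` and `group_database_id` columns instead of running the full query, you can mirror its `CASE` expression in application code. This Python sketch reproduces the same three-way classification; `classify_ha` is a hypothetical helper.

```python
def classify_ha(mirroring_state_desc, group_database_id):
    # Mirrors the CASE expression in the query above: mirroring state
    # wins, then Always On membership, otherwise Single-AZ.
    if mirroring_state_desc is not None:
        return "Multi-AZ (Mirroring)"
    if group_database_id is not None:
        return "Multi-AZ (AlwaysOn)"
    return "Single-AZ"
```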

## Adding Multi-AZ to a Microsoft SQL Server DB instance
<a name="USER_SQLServerMultiAZ.Adding"></a>

When you create a new SQL Server DB instance using the AWS Management Console, you can add Multi-AZ with Database Mirroring (DBM), Always On AGs, or block level replication. You do so by choosing **Yes (Mirroring / Always On / Block Level Replication)** from **Multi-AZ deployment**. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).

When you modify an existing SQL Server DB instance using the console, you can add Multi-AZ with DBM, AGs, or block level replication by choosing **Yes (Mirroring / Always On / Block Level Replication)** from **Multi-AZ deployment** on the **Modify DB instance** page. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

**Note**  
If your DB instance runs SQL Server 2016 or 2017 Enterprise Edition with Database Mirroring (DBM), not Always On Availability Groups (AGs), and has in-memory optimization enabled, disable in-memory optimization before you add Multi-AZ.   
If your DB instance is running AGs or block level replication for SQL Server Web Edition, this step isn't required. 

## Removing Multi-AZ from a Microsoft SQL Server DB instance
<a name="USER_SQLServerMultiAZ.Removing"></a>

When you modify an existing SQL Server DB instance using the AWS Management Console, you can remove Multi-AZ with DBM, AGs, or block level replication. You can do this by choosing **No (Mirroring / Always On / Block Level Replication)** from **Multi-AZ deployment** on the **Modify DB instance** page. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

# Microsoft SQL Server Multi-AZ deployment limitations, notes, and recommendations
<a name="USER_SQLServerMultiAZ.Recommendations"></a>

The following are some limitations when working with Multi-AZ deployments on RDS for SQL Server DB instances:
+ Cross-Region Multi-AZ isn't supported.
+ Stopping an RDS for SQL Server DB instance in a Multi-AZ deployment isn't supported.
+ You can't configure the secondary DB instance to accept database read activity.
+ Multi-AZ with Always On Availability Groups (AGs) supports in-memory optimization.
+ Multi-AZ with Always On Availability Groups (AGs) doesn't support Kerberos authentication for the availability group listener. This is because the listener has no Service Principal Name (SPN).
+ Multi-AZ with block level replication is currently only supported for SQL Server Web Edition instances.
+ You can't rename a database on a SQL Server DB instance that is in a SQL Server Multi-AZ deployment. If you need to rename a database on such an instance, first turn off Multi-AZ for the DB instance, then rename the database. Finally, turn Multi-AZ back on for the DB instance. 
+ You can only restore Multi-AZ DB instances that are backed up using the full recovery model.
+ Multi-AZ deployments have a limit of 10,000 SQL Server Agent jobs.

  If you need a higher limit, request an increase by contacting AWS Support. Open the [AWS Support Center](https://console.aws.amazon.com/support/home#/) page, sign in if necessary, and choose **Create case**. Choose **Service limit increase**. Complete and submit the form.
+ You can't have an offline database on a SQL Server DB instance that is in a SQL Server Multi-AZ deployment.
+ RDS for SQL Server doesn't replicate MSDB database permissions to the secondary instance. If you need these permissions on the secondary instance, you must recreate them manually.
+ Volume metrics are not available for the secondary host of the instance using block level replication.

The following are some notes about working with Multi-AZ deployments on RDS for SQL Server DB instances:
+ Amazon RDS exposes the Always On AGs [availability group listener endpoint](https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/listeners-client-connectivity-application-failover). The endpoint is visible in the console, and is returned by the `DescribeDBInstances` API operation as an entry in the endpoints field.
+ Amazon RDS supports [availability group multisubnet failovers](https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/listeners-client-connectivity-application-failover).
+ To use SQL Server Multi-AZ with a SQL Server DB instance in a virtual private cloud (VPC), first create a DB subnet group that has subnets in at least two distinct Availability Zones. Then assign the DB subnet group to the primary replica of the SQL Server DB instance. 
+ When a DB instance is modified to be a Multi-AZ deployment, during the modification it has a status of **modifying**. Amazon RDS creates the standby, and makes a backup of the primary DB instance. After the process is complete, the status of the primary DB instance becomes **available**.
+ Multi-AZ deployments maintain all databases on the same node. If a database on the primary host fails over, all your SQL Server databases fail over as one atomic unit to your standby host. Amazon RDS provisions a new healthy host, and replaces the unhealthy host.
+ Multi-AZ with DBM, AGs, or block level replication supports a single standby replica.
+ Users, logins, and permissions are automatically replicated for you on the secondary. You don't need to recreate them. User-defined server roles are replicated in DB instances that use Always On AGs or block level replication for Multi-AZ deployments. 
+ In Multi-AZ deployments, RDS for SQL Server creates SQL Server logins to allow Always On AGs or Database Mirroring. RDS creates logins with the following pattern, `db_<dbiResourceId>_node1_login`, `db_<dbiResourceId>_node2_login`, and `db_<dbiResourceId>_witness_login`.
+ RDS for SQL Server creates a SQL Server login to allow access to read replicas. RDS creates a login with the following pattern, `db_<readreplica_dbiResourceId>_node_login`.
+ In Multi-AZ deployments, SQL Server Agent jobs are replicated from the primary host to the secondary host when the job replication feature is turned on. For more information, see [Turning on SQL Server Agent job replication](Appendix.SQLServer.CommonDBATasks.Agent.md#SQLServerAgent.Replicate).
+ You might observe elevated latencies compared to a standard DB instance deployment (in a single Availability Zone) because of the synchronous data replication.
+ Failover times are affected by the time it takes to complete the recovery process. Large transactions increase the failover time.
+ In SQL Server Multi-AZ deployments, reboot with failover reboots only the primary DB instance. After the failover, the primary DB instance becomes the new secondary DB instance. Parameters might not be updated for Multi-AZ instances. For reboot without failover, both the primary and secondary DB instances reboot, and parameters are updated after the reboot. If the DB instance is unresponsive, we recommend reboot without failover.

The following are some recommendations for working with Multi-AZ deployments on RDS for Microsoft SQL Server DB instances:
+ For databases used in production or preproduction, we recommend the following options:
  + Multi-AZ deployments for high availability
  + "Provisioned IOPS" for fast, consistent performance
  + "Memory optimized" rather than "General purpose"
+ You can't select the Availability Zone (AZ) for the secondary instance, so when you deploy application hosts, take this into account. Your database might fail over to another AZ, and the application hosts might not be in the same AZ as the database. For this reason, we recommend that you balance your application hosts across all AZs in the given AWS Region.
+ For best performance, don't enable Database Mirroring, Always On AGs, or block level replication during a large data load operation. If you want your data load to be as fast as possible, finish loading data before you convert your DB instance to a Multi-AZ deployment. 
+ Applications that access the SQL Server databases should have exception handling that catches connection errors. The following code sample shows a try/catch block that catches a communication error. In this example, the `break` statement exits the `while` loop if the connection is successful, but retries up to 10 times if an exception is thrown.

  ```
  // Requires a SQL client library such as System.Data.SqlClient
  // (or Microsoft.Data.SqlClient in newer projects).
  using System;
  using System.Data.SqlClient;
  using System.Threading;

  int RetryMaxAttempts = 10;
  int RetryIntervalPeriodInSeconds = 1;
  int iRetryCount = 0;
  while (iRetryCount < RetryMaxAttempts)
  {
     // DatabaseConnString is your application's connection string.
     using (SqlConnection connection = new SqlConnection(DatabaseConnString))
     {
        using (SqlCommand command = connection.CreateCommand())
        {
           command.CommandText = "INSERT INTO SOME_TABLE VALUES ('SomeValue');";
           try
           {
              connection.Open();
              command.ExecuteNonQuery();
              break;   // Success; exit the retry loop.
           }
           catch (Exception ex) 
           {
              Logger(ex.Message);   // Log the error, then retry.
              iRetryCount++;
           }
           finally {
              connection.Close();
           }
        }
     }
     Thread.Sleep(RetryIntervalPeriodInSeconds * 1000);
  }
  ```
+ Don't use the `Set Partner Off` command when working with Multi-AZ instances using DBM or AGs. This command is not supported on instances using block level replication. For example, don't do the following. 

  ```
  --Don't do this
  ALTER DATABASE db1 SET PARTNER off
  ```
+ Don't set the recovery mode to `simple`. For example, don't do the following. 

  ```
  --Don't do this
  ALTER DATABASE db1 SET RECOVERY simple
  ```
+ Don't use the `DEFAULT_DATABASE` parameter when creating new logins on Multi-AZ DB instances unless using block level replication for high availability, because these settings can't be applied to the standby mirror. For example, don't do the following. 

  ```
  --Don't do this
  CREATE LOGIN [test_dba] WITH PASSWORD=foo, DEFAULT_DATABASE=[db2]
  ```

  Also, don't do the following.

  ```
  --Don't do this
  ALTER LOGIN [test_dba] WITH DEFAULT_DATABASE=[db3]
  ```

# Determining the location of the secondary
<a name="USER_SQLServerMultiAZ.Location"></a>

You can determine the location of the secondary replica by using the AWS Management Console. You need to know the location of the secondary if you are setting up your primary DB instance in a VPC. 

![\[Secondary AZ\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/SQLSvr-MultiAZ.png)


You can also view the Availability Zone of the secondary using the AWS CLI command `describe-db-instances` or RDS API operation `DescribeDBInstances`. The output shows the secondary AZ where the standby mirror is located. 
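
For example, the secondary AZ appears in the `SecondaryAvailabilityZone` field of the `describe-db-instances` JSON output. The following Python sketch extracts it from output that you've already captured; the function name and sample document are illustrative.

```python
import json

def secondary_az(describe_db_instances_json):
    # Returns SecondaryAvailabilityZone for the first instance in the
    # output; the field is present only for Multi-AZ deployments.
    doc = json.loads(describe_db_instances_json)
    return doc["DBInstances"][0].get("SecondaryAvailabilityZone")

sample = """{"DBInstances": [{"DBInstanceIdentifier": "mydb",
  "MultiAZ": true,
  "AvailabilityZone": "us-east-1a",
  "SecondaryAvailabilityZone": "us-east-1b"}]}"""
print(secondary_az(sample))  # us-east-1b
```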

# Migrating from Database Mirroring to Always On Availability Groups
<a name="USER_SQLServerMultiAZ.Migration"></a>

In version 14.00.3049.1 of Microsoft SQL Server Enterprise Edition, Always On Availability Groups (AGs) are enabled by default.

To migrate from Database Mirroring (DBM) to AGs, first check your version. If you are using a DB instance with a version prior to Enterprise Edition 13.00.5216.0, modify the instance to patch it to 13.00.5216.0 or later. If you are using a DB instance with a version prior to Enterprise Edition 14.00.3049.1, modify the instance to patch it to 14.00.3049.1 or later.

If you want to upgrade a mirrored DB instance to use AGs, run the upgrade first, modify the instance to remove Multi-AZ, and then modify it again to add Multi-AZ. This converts your instance to use Always On AGs.

# Additional features for Microsoft SQL Server on Amazon RDS
<a name="User.SQLServer.AdditionalFeatures"></a>

In the following sections, you can find information about augmenting Amazon RDS instances running the Microsoft SQL Server DB engine.

**Topics**
+ [Using Password Policy for SQL Server logins on RDS for SQL Server](SQLServer.Concepts.General.PasswordPolicy.Using.md)
+ [Integrating an Amazon RDS for SQL Server DB instance with Amazon S3](User.SQLServer.Options.S3-integration.md)
+ [Using Database Mail on Amazon RDS for SQL Server](SQLServer.DBMail.md)
+ [Instance store support for the tempdb database on Amazon RDS for SQL Server](SQLServer.InstanceStore.md)
+ [Using extended events with Amazon RDS for Microsoft SQL Server](SQLServer.ExtendedEvents.md)
+ [Access to transaction log backups with RDS for SQL Server](USER.SQLServer.AddlFeat.TransactionLogAccess.md)

# Using Password Policy for SQL Server logins on RDS for SQL Server
<a name="SQLServer.Concepts.General.PasswordPolicy.Using"></a>

Amazon RDS allows you to set the password policy for your Amazon RDS DB instance running Microsoft SQL Server. Use this to set complexity, length, and lockout requirements for logins that use SQL Server Authentication to authenticate to your DB instance.

## Key terms
<a name="SQLServer.Concepts.General.PasswordPolicy.Using.KT"></a>

**Login**  
In SQL Server, a server-level principal that can authenticate to a database instance is referred to as a **login**. Other database engines might refer to this principal as a *user*. In RDS for SQL Server, a login can authenticate using SQL Server Authentication or Windows Authentication.

**SQL Server login**  
A login that uses a username and password to authenticate using SQL Server Authentication is a SQL Server login. The password policy you configure through DB parameters only applies to SQL Server logins.

**Windows login**  
A login that is based on a Windows principal and authenticates using Windows Authentication is a Windows login. You can configure the password policy for your Windows logins in Active Directory. For more information, see [Working with Active Directory with RDS for SQL Server](User.SQLServer.ActiveDirectoryWindowsAuth.md).

## Enabling and disabling policy for each login
<a name="SQLServer.Concepts.General.PasswordPolicy.EnableDisable"></a>

 Each SQL Server login has flags for `CHECK_POLICY` and `CHECK_EXPIRATION`. By default, new logins are created with `CHECK_POLICY` set to `ON` and `CHECK_EXPIRATION` set to `OFF`. 

If `CHECK_POLICY` is enabled for a login, RDS for SQL Server validates the password against the complexity and minimum length requirements. Lockout policies also apply. An example T-SQL statement to enable `CHECK_POLICY` and `CHECK_EXPIRATION`: 

```
ALTER LOGIN [master_user] WITH CHECK_POLICY = ON, CHECK_EXPIRATION = ON;
```

If `CHECK_EXPIRATION` is enabled, passwords are subject to password age policies. The T-SQL statement to check if `CHECK_POLICY` and `CHECK_EXPIRATION` are set:

```
SELECT name, is_policy_checked, is_expiration_checked FROM sys.sql_logins;
```

## Password policy parameters
<a name="SQLServer.Concepts.General.PasswordPolicy.PWDPolicyParams"></a>

All password policy parameters are dynamic and do not require DB reboot to take effect. The following table lists the DB parameters you can set to modify the password policy for SQL Server logins:


| DB parameter | Description | Allowed Values | Default Value | 
| --- | --- | --- | --- | 
| `rds.password_complexity_enabled` | Password complexity requirements must be satisfied when creating or changing passwords for SQL Server logins. For the constraints that must be met, see [the AWS documentation website](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Concepts.General.PasswordPolicy.Using.html). | 0, 1 | 0 | 
| `rds.password_min_length` | The minimum number of characters required in a password for a SQL Server login. | 0-14 | 0 | 
| `rds.password_min_age` | The minimum number of days a SQL Server login password must be used before the user can change it. Passwords can be changed immediately when set to 0. | 0-998 | 0 | 
| `rds.password_max_age` | The maximum number of days a SQL Server login password can be used, after which the user must change it. Passwords never expire when set to 0. | 0-999 | 42 | 
| `rds.password_lockout_threshold` | The number of consecutive failed login attempts that cause a SQL Server login to become locked out. | 0-999 | 0 | 
| `rds.password_lockout_duration` | The number of minutes a locked-out SQL Server login must wait before being unlocked. | 1-60 | 10 | 
| `rds.password_lockout_reset_counter_after` | The number of minutes that must elapse after a failed login attempt before the failed login attempt counter is reset to 0. | 1-60 | 10 | 

**Note**  
For more information about SQL Server password policy, see [Password Policy](https://learn.microsoft.com/en-us/sql/relational-databases/security/password-policy).   
The password complexity and minimum length policies also apply to DB users in contained databases. For more information, see [Contained Databases](https://learn.microsoft.com/en-us/sql/relational-databases/databases/contained-databases).

The following constraints apply to the password policy parameters:
+ The `rds.password_min_age` parameter must be less than the `rds.password_max_age` parameter, unless `rds.password_max_age` is set to 0.
+ The `rds.password_lockout_reset_counter_after` parameter must be less than or equal to the `rds.password_lockout_duration` parameter.
+ If `rds.password_lockout_threshold` is set to 0, `rds.password_lockout_duration` and `rds.password_lockout_reset_counter_after` do not apply.
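
These constraints can be checked before you apply a parameter group change. The following Python sketch encodes them; `validate_password_policy` is a hypothetical client-side helper, and the fallback defaults match the table above.

```python
def validate_password_policy(params):
    # params: mapping of DB parameter name to integer value; unset
    # parameters fall back to the defaults from the table above.
    errors = []
    min_age = params.get("rds.password_min_age", 0)
    max_age = params.get("rds.password_max_age", 42)
    if max_age != 0 and min_age >= max_age:
        errors.append("rds.password_min_age must be less than "
                      "rds.password_max_age")
    reset_after = params.get("rds.password_lockout_reset_counter_after", 10)
    duration = params.get("rds.password_lockout_duration", 10)
    if reset_after > duration:
        errors.append("rds.password_lockout_reset_counter_after must be "
                      "less than or equal to rds.password_lockout_duration")
    return errors
```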

### Considerations for existing logins
<a name="SQLServer.Concepts.General.PasswordPolicy.ExistingLogins"></a>

After modifying the password policy on an instance, existing passwords for logins are **not** retroactively evaluated against the new password complexity and length requirements. Only new passwords are validated against the new policy. 

SQL Server **does** evaluate existing passwords for age requirements.

It is possible for passwords to expire immediately once a password policy is modified. For example, if a login has `CHECK_EXPIRATION` enabled and its password was last changed 100 days ago and you set the `rds.password_max_age` parameter to 5 days, the password immediately expires and the login needs to change their password at their next attempt to log in.
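
The expiry arithmetic in this example can be sketched as follows; `password_expired` is an illustrative helper, not an RDS API.

```python
from datetime import date, timedelta

def password_expired(last_changed, max_age_days, today):
    # A login with CHECK_EXPIRATION = ON expires once its password is
    # older than rds.password_max_age days; 0 means passwords never expire.
    if max_age_days == 0:
        return False
    return (today - last_changed).days > max_age_days

# The scenario above: changed 100 days ago, max age lowered to 5 days.
today = date(2024, 1, 1)
print(password_expired(today - timedelta(days=100), 5, today))  # True
```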

**Note**  
RDS for SQL Server doesn't support password history policies. History policies prevent logins from reusing previously used passwords.

### Considerations for Multi-AZ deployments
<a name="SQLServer.Concepts.General.PasswordPolicy.MAZPasswords"></a>

The failed login attempt counter and lockout state for Multi-AZ instances don't replicate between nodes. If a login is locked out when a Multi-AZ instance fails over, the login might already be unlocked on the new node.

# Password considerations for the master login
<a name="SQLServer.Concepts.General.PasswordPolicy.MasterLogin"></a>

When you create an RDS for SQL Server DB instance, the master user password is not evaluated against the password policy. A new master user password is also not evaluated against the password policy when you perform operations on the master user, specifically when you set `MasterUserPassword` in the `ModifyDBInstance` command. In both cases, you can set a password for the master user that doesn't satisfy your password policy, and the operation still succeeds. If the policy isn't satisfied, RDS attempts to raise an RDS event with the recommendation to set a strong password. Take care to use only strong passwords for the master user. 

RDS attempts to generate the following event messages when the master user password does not meet the password policy requirements:
+ The master user was created, but the password doesn't meet the minimum length requirement of your password policy. Consider using a stronger password.
+ The master user was created, but the password doesn't meet the complexity requirement of your password policy. Consider using a stronger password.
+ The master user password was reset, but the password doesn't meet the minimum length requirement of your password policy. Consider using a stronger password.
+ The master user password was reset, but the password doesn't meet the complexity requirement of your password policy. Consider using a stronger password.

By default, the master user is created with `CHECK_POLICY` and `CHECK_EXPIRATION` set to `OFF`. To apply the password policy to the master user, you must manually enable these flags for the master user after DB instance creation. After you enable these flags, modify the master user password directly in SQL Server (for example, by using T-SQL statements or SSMS) to validate the new password against the password policy.

**Note**  
If the master user gets locked out, you can unlock the user by resetting the master user password using the `ModifyDBInstance` command.

## Modifying the master user password
<a name="SQLServer.Concepts.General.PasswordPolicy.MasterLogin.Reset"></a>

You can modify the master user password by using the [ModifyDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) command.

**Note**  
When you reset the master user password, RDS resets various permissions for the master user and the master user might lose certain permissions. Resetting the master user password also unlocks the master user, if it was locked out.

RDS validates the new master user password and attempts to emit an RDS event if the password does not satisfy the policy. RDS sets the password even if it does not satisfy the password policy. 

# Integrating an Amazon RDS for SQL Server DB instance with Amazon S3
<a name="User.SQLServer.Options.S3-integration"></a>

You can transfer files between a DB instance running Amazon RDS for SQL Server and an Amazon S3 bucket. By doing this, you can use Amazon S3 with SQL Server features such as BULK INSERT. For example, you can download .csv, .xml, .txt, and other files from Amazon S3 to the DB instance host and import the data from `D:\S3\` into the database. All files are stored in `D:\S3\` on the DB instance.

The following limitations apply:

**Note**  
Traffic between the RDS host and S3 routes through VPC endpoints in RDS internal VPCs for all SQL Server features that use S3. This traffic doesn't use the RDS instance endpoint ENI. S3 bucket policies can't restrict RDS traffic by networking conditions.
+ Files in the `D:\S3` folder are deleted on the standby replica after a failover on Multi-AZ instances. For more information, see [Multi-AZ limitations for S3 integration](#S3-MAZ).
+ The DB instance and the S3 bucket must be in the same AWS Region.
+ If you run more than one S3 integration task at a time, the tasks run sequentially, not in parallel.
**Note**  
S3 integration tasks share the same queue as native backup and restore tasks. At maximum, you can have only two tasks in progress at any time in this queue. Therefore, two running native backup and restore tasks will block any S3 integration tasks.
+ You must re-enable the S3 integration feature on restored instances. S3 integration isn't propagated from the source instance to the restored instance. Files in `D:\S3` are deleted on a restored instance.
+ Downloading to the DB instance is limited to 100 files. In other words, there can't be more than 100 files in `D:\S3\`.
+ Only files without file extensions or with the following file extensions are supported for download: .abf, .asdatabase, .bcp, .configsettings, .csv, .dat, .deploymentoptions, .deploymenttargets, .fmt, .info, .ispac, .lst, .tbl, .txt, .xml, and .xmla.
+ The S3 bucket must have the same owner as the related AWS Identity and Access Management (IAM) role. Therefore, cross-account S3 integration isn't supported.
+ The S3 bucket can't be open to the public.
+ The file size for uploads from RDS to S3 is limited to 50 GB per file.
+ The file size for downloads from S3 to RDS is limited to the maximum supported by S3.
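
The download allowlist above can be checked client-side before you queue a download task. A Python sketch, assuming the extension list in this section (the helper name is hypothetical):

```python
import os

# Extensions eligible for download from S3 to D:\S3\, plus files with
# no extension, per the limitations listed above.
ALLOWED_EXTENSIONS = {
    "", ".abf", ".asdatabase", ".bcp", ".configsettings", ".csv", ".dat",
    ".deploymentoptions", ".deploymenttargets", ".fmt", ".info", ".ispac",
    ".lst", ".tbl", ".txt", ".xml", ".xmla",
}

def download_allowed(filename):
    # splitext returns ("name", ".ext"); the extension is "" for
    # files without one, which the allowlist also permits.
    return os.path.splitext(filename)[1].lower() in ALLOWED_EXTENSIONS

print(download_allowed("import_data.CSV"))  # True
print(download_allowed("payload.exe"))      # False
```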

**Topics**
+ [Prerequisites for integrating RDS for SQL Server with S3](Appendix.SQLServer.Options.S3-integration.preparing.md)
+ [Enabling RDS for SQL Server integration with S3](Appendix.SQLServer.Options.S3-integration.enabling.md)
+ [Transferring files between RDS for SQL Server and Amazon S3](Appendix.SQLServer.Options.S3-integration.using.md)
+ [Listing files on the RDS DB instance](Appendix.SQLServer.Options.S3-integration.using.listing-files.md)
+ [Deleting files on the RDS DB instance](Appendix.SQLServer.Options.S3-integration.using.deleting-files.md)
+ [Monitoring the status of a file transfer task](Appendix.SQLServer.Options.S3-integration.using.monitortasks.md)
+ [Canceling a task](Appendix.SQLServer.Options.S3-integration.canceltasks.md)
+ [Multi-AZ limitations for S3 integration](#S3-MAZ)
+ [Disabling RDS for SQL Server integration with S3](Appendix.SQLServer.Options.S3-integration.disabling.md)

For more information on working with files in Amazon S3, see [Getting started with Amazon Simple Storage Service](https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3).

# Prerequisites for integrating RDS for SQL Server with S3
<a name="Appendix.SQLServer.Options.S3-integration.preparing"></a>

Before you begin, find or create the S3 bucket that you want to use. Also, add permissions so that the RDS DB instance can access the S3 bucket. To configure this access, you create both an IAM policy and an IAM role.

## Console
<a name="Appendix.SQLServer.Options.S3-integration.preparing.console"></a>

**To create an IAM policy for access to Amazon S3**

1. In the [IAM Management Console](https://console.aws.amazon.com/iam/home?#home), choose **Policies** in the navigation pane.

1. Create a new policy, and use the **Visual editor** tab for the following steps.

1. For **Service**, enter **S3** and then choose the **S3** service.

1. For **Actions**, choose the following to grant the access that your DB instance requires:
   + `ListAllMyBuckets` – required
   + `ListBucket` – required
   + `GetBucketAcl` – required
   + `GetBucketLocation` – required
   + `GetObject` – required for downloading files from S3 to `D:\S3\`
   + `PutObject` – required for uploading files from `D:\S3\` to S3
   + `ListMultipartUploadParts` – required for uploading files from `D:\S3\` to S3
   + `AbortMultipartUpload` – required for uploading files from `D:\S3\` to S3

1. For **Resources**, the options that display depend on which actions you choose in the previous step. You might see options for **bucket**, **object**, or both. For each of these, add the appropriate Amazon Resource Name (ARN).

   For **bucket**, add the ARN for the bucket that you want to use. For example, if your bucket is named *amzn-s3-demo-bucket*, set the ARN to `arn:aws:s3:::amzn-s3-demo-bucket`.

   For **object**, enter the ARN for the bucket and then choose one of the following:
   + To grant access to all files in the specified bucket, choose **Any** for both **Bucket name** and **Object name**.
   + To grant access to specific files or folders in the bucket, provide ARNs for the specific buckets and objects that you want SQL Server to access. 

1. Follow the instructions in the console until you finish creating the policy.

   The preceding is an abbreviated guide to setting up a policy. For more detailed instructions on creating IAM policies, see [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide.*

**To create an IAM role that uses the IAM policy from the previous procedure**

1. In the [IAM Management Console](https://console.aws.amazon.com/iam/home?#home), choose **Roles** in the navigation pane.

1. Create a new IAM role, and choose the following options as they appear in the console:
   + **AWS service**
   + **RDS**
   + **RDS – Add Role to Database**

   Then choose **Next:Permissions** at the bottom.

1. For **Attach permissions policies**, enter the name of the IAM policy that you previously created. Then choose the policy from the list.

1. Follow the instructions in the console until you finish creating the role.

   The preceding is an abbreviated guide to setting up a role. If you want more detailed instructions on creating roles, see [IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) in the *IAM User Guide.*

## AWS CLI
<a name="Appendix.SQLServer.Options.S3-integration.preparing.CLI"></a>

To grant Amazon RDS access to an Amazon S3 bucket, use the following process:

1. Create an IAM policy that grants Amazon RDS access to an S3 bucket.

1. Create an IAM role that Amazon RDS can assume on your behalf to access your S3 buckets.

   For more information, see [Creating a role to delegate permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.

1. Attach the IAM policy that you created to the IAM role that you created.

**To create the IAM policy**

Include the appropriate actions to grant the access your DB instance requires:
+ `ListAllMyBuckets` – required
+ `ListBucket` – required
+ `GetBucketAcl` – required
+ `GetBucketLocation` – required
+ `GetObject` – required for downloading files from S3 to `D:\S3\`
+ `PutObject` – required for uploading files from `D:\S3\` to S3
+ `ListMultipartUploadParts` – required for uploading files from `D:\S3\` to S3
+ `AbortMultipartUpload` – required for uploading files from `D:\S3\` to S3

1. The following AWS CLI command creates an IAM policy named `rds-s3-integration-policy` with these options. It grants access to a bucket named *amzn-s3-demo-bucket*.  
**Example**  

   For Linux, macOS, or Unix:

   ```
   aws iam create-policy \
   	 --policy-name rds-s3-integration-policy \
   	 --policy-document '{
   	        "Version": "2012-10-17",
   	        "Statement": [
   	            {
   	                "Effect": "Allow",
   	                "Action": "s3:ListAllMyBuckets",
   	                "Resource": "*"
   	            },
   	            {
   	                "Effect": "Allow",
   	                "Action": [
   	                    "s3:ListBucket",
   	                    "s3:GetBucketAcl",
   	                    "s3:GetBucketLocation"
   	                ],
   	                "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
   	            },
   	            {
   	                "Effect": "Allow",
   	                "Action": [
   	                    "s3:GetObject",
   	                    "s3:PutObject",
   	                    "s3:ListMultipartUploadParts",
   	                    "s3:AbortMultipartUpload"
   	                ],
   	                "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/key_prefix/*"
   	            }
   	        ]
   	    }'
   ```

   For Windows:

   For Windows, use the caret (`^`) line-continuation character instead of the backslash (`\`). Also, in the Windows command prompt, you must escape all double quotes in inline JSON with a `\`. To avoid having to escape the quotes, you can save the JSON to a file and pass the file in as a parameter. 

   First, create the `policy.json` file with the following permission policy:

------
#### [ JSON ]

****  

   ```
   {
    "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "s3:ListAllMyBuckets",
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:ListBucket",
               "s3:GetBucketAcl",
                   "s3:GetBucketLocation"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:GetObject",
                   "s3:PutObject",
                   "s3:ListMultipartUploadParts",
                   "s3:AbortMultipartUpload"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/key_prefix/*"
           }
       ]
   }
   ```

------

   Then use the following command to create the policy:

   ```
   aws iam create-policy ^
        --policy-name rds-s3-integration-policy ^
        --policy-document file://file_path/policy.json
   ```

1. After the policy is created, note the Amazon Resource Name (ARN) of the policy. You need the ARN for a later step.

**To create the IAM role**
+ The following AWS CLI command creates the `rds-s3-integration-role` IAM role for this purpose.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws iam create-role \
  	   --role-name rds-s3-integration-role \
  	   --assume-role-policy-document '{
  	     "Version": "2012-10-17",
  	     "Statement": [
  	       {
  	         "Effect": "Allow",
  	         "Principal": {
  	            "Service": "rds.amazonaws.com"
  	          },
  	         "Action": "sts:AssumeRole"
  	       }
  	     ]
  	   }'
  ```

  For Windows:

  For Windows, use the caret (`^`) line-continuation character instead of the backslash (`\`). Also, in the Windows command prompt, you must escape all double quotes in inline JSON with a `\`. To avoid having to escape the quotes, you can save the JSON to a file and pass the file in as a parameter. 

  First, create the `assume_role_policy.json` file with the following policy:

------
#### [ JSON ]

****  

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Principal": {
                  "Service": [
                      "rds.amazonaws.com"
                  ]
              },
              "Action": "sts:AssumeRole"
          }
      ]
  }
  ```

------

  Then use the following command to create the IAM role:

  ```
  aws iam create-role ^
       --role-name rds-s3-integration-role ^
       --assume-role-policy-document file://file_path/assume_role_policy.json
  ```  
**Example of using the global condition context key to create the IAM role**  

  We recommend using the [`aws:SourceArn`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn) and [`aws:SourceAccount`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceaccount) global condition context keys in resource-based policies to limit the service's permissions to a specific resource. This is the most effective way to protect against the [confused deputy problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html).

  You might use both global condition context keys and have the `aws:SourceArn` value contain the account ID. In this case, the `aws:SourceAccount` value and the account in the `aws:SourceArn` value must use the same account ID when used in the same policy statement.
  + Use `aws:SourceArn` if you want cross-service access for a single resource.
  + Use `aws:SourceAccount` if you want to allow any resource in that account to be associated with the cross-service use.

  In the policy, make sure to use the `aws:SourceArn` global condition context key with the full Amazon Resource Name (ARN) of the resources accessing the role. For S3 integration, make sure to include the DB instance ARNs, as shown in the following example.

  For Linux, macOS, or Unix:

  ```
  aws iam create-role \
  	   --role-name rds-s3-integration-role \
  	   --assume-role-policy-document '{
  	     "Version": "2012-10-17",
  	     "Statement": [
  	       {
  	         "Effect": "Allow",
  	         "Principal": {
  	            "Service": "rds.amazonaws.com"
  	          },
  	         "Action": "sts:AssumeRole",
                  "Condition": {
                      "StringEquals": {
                          "aws:SourceArn":"arn:aws:rds:Region:my_account_ID:db:db_instance_identifier"
                      }
                  }
  	       }
  	     ]
  	   }'
  ```

  For Windows:

  Add the global condition context key to `assume_role_policy.json`.

------
#### [ JSON ]

****  

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Principal": {
                  "Service": [
                      "rds.amazonaws.com"
                  ]
              },
              "Action": "sts:AssumeRole",
              "Condition": {
                  "StringEquals": {
                      "aws:SourceArn":"arn:aws:rds:Region:my_account_ID:db:db_instance_identifier"
                  }
              }
          }
      ]
  }
  ```

------
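If you script this setup, the trust policy with the condition key can be generated rather than hand-edited. The following Python sketch is illustrative only (the helper name is not part of any AWS SDK) and assumes the standard `rds.amazonaws.com` service principal:

```python
import json

def build_rds_trust_policy(region, account_id, db_instance_identifier):
    """Build the assume-role (trust) policy shown above, scoped to a
    single DB instance with the aws:SourceArn condition key."""
    source_arn = f"arn:aws:rds:{region}:{account_id}:db:{db_instance_identifier}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"Service": "rds.amazonaws.com"},
                "Action": "sts:AssumeRole",
                "Condition": {"StringEquals": {"aws:SourceArn": source_arn}},
            }
        ],
    }

# Serialize for use as assume_role_policy.json with `aws iam create-role`.
print(json.dumps(build_rds_trust_policy("us-east-1", "123456789012", "mydbinstance"), indent=4))
```

Writing the generated document to a file and passing it with `file://` avoids the quote-escaping issues described earlier for Windows.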

**To attach the IAM policy to the IAM role**
+ The following AWS CLI command attaches the policy to the role named `rds-s3-integration-role`. Replace `your-policy-arn` with the policy ARN that you noted in a previous step.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws iam attach-role-policy \
  	   --policy-arn your-policy-arn \
  	   --role-name rds-s3-integration-role
  ```

  For Windows:

  ```
  aws iam attach-role-policy ^
  	   --policy-arn your-policy-arn ^
  	   --role-name rds-s3-integration-role
  ```

# Enabling RDS for SQL Server integration with S3
<a name="Appendix.SQLServer.Options.S3-integration.enabling"></a>

In the following section, you can find how to enable Amazon S3 integration with Amazon RDS for SQL Server. To work with S3 integration, associate your DB instance with the IAM role that you created previously, specifying `S3_INTEGRATION` as the feature name.

**Note**  
To add an IAM role to a DB instance, the status of the DB instance must be **available**.

## Console
<a name="Appendix.SQLServer.Options.S3-integration.enabling.console"></a>

**To associate your IAM role with your DB instance**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose the RDS for SQL Server DB instance name to display its details.

1. On the **Connectivity & security** tab, in the **Manage IAM roles** section, choose the IAM role to add for **Add IAM roles to this instance**.

1. For **Feature**, choose **S3_INTEGRATION**.  
![\[Add the S3_INTEGRATION role\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/ora-s3-integration-role.png)

1. Choose **Add role**.

## AWS CLI
<a name="Appendix.SQLServer.Options.S3-integration.enabling.cli"></a>

**To add the IAM role to the RDS for SQL Server DB instance**
+ The following AWS CLI command adds your IAM role to an RDS for SQL Server DB instance named `mydbinstance`.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds add-role-to-db-instance \
  	   --db-instance-identifier mydbinstance \
  	   --feature-name S3_INTEGRATION \
  	   --role-arn your-role-arn
  ```

  For Windows:

  ```
  aws rds add-role-to-db-instance ^
  	   --db-instance-identifier mydbinstance ^
  	   --feature-name S3_INTEGRATION ^
  	   --role-arn your-role-arn
  ```

  Replace `your-role-arn` with the role ARN that you noted in a previous step. `S3_INTEGRATION` must be specified for the `--feature-name` option.

# Transferring files between RDS for SQL Server and Amazon S3
<a name="Appendix.SQLServer.Options.S3-integration.using"></a>

You can use Amazon RDS stored procedures to download and upload files between Amazon S3 and your RDS DB instance. You can also use Amazon RDS stored procedures to list and delete files on the RDS instance.

The files that you download from and upload to S3 are stored in the `D:\S3` folder. This is the only folder that you can use to access your files. You can organize your files into subfolders, which are created for you when you include the destination folder during download.

Some of the stored procedures require that you provide an Amazon Resource Name (ARN) to your S3 bucket and file. The format for your ARN is `arn:aws:s3:::amzn-s3-demo-bucket/file_name`. Amazon S3 doesn't require an account number or AWS Region in ARNs.
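Because S3 ARNs omit the account number and Region, constructing one is plain string concatenation. A minimal sketch (the helper name is illustrative):

```python
def s3_file_arn(bucket, key):
    """Build the S3 file ARN that the RDS stored procedures expect.
    S3 ARNs contain no account ID and no AWS Region."""
    return f"arn:aws:s3:::{bucket}/{key}"

print(s3_file_arn("amzn-s3-demo-bucket", "mydata.csv"))
# arn:aws:s3:::amzn-s3-demo-bucket/mydata.csv
```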

S3 integration tasks run sequentially and share the same queue as native backup and restore tasks. At maximum, you can have only two tasks in progress at any time in this queue. It can take up to five minutes for the task to begin processing.
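Because tasks are queued and can take up to five minutes to begin, client code typically polls the task status rather than assuming immediate completion. A hedged Python sketch, where `get_lifecycle` is any caller-supplied function that returns the task's `lifecycle` value (for example, by querying `msdb.dbo.rds_fn_task_status`), and the terminal states are an assumption for illustration:

```python
import time

def wait_for_task(get_lifecycle, task_id, poll_seconds=30, max_polls=20):
    """Poll an S3 integration task until it reaches an assumed terminal
    state or the poll budget runs out. Returns the final lifecycle value."""
    for _ in range(max_polls):
        lifecycle = get_lifecycle(task_id)
        if lifecycle in ("SUCCESS", "ERROR", "CANCELLED"):
            return lifecycle
        time.sleep(poll_seconds)
    raise TimeoutError(f"task {task_id} still running after {max_polls} polls")
```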

## Downloading files from an Amazon S3 bucket to a SQL Server DB instance
<a name="Appendix.SQLServer.Options.S3-integration.using.download"></a>

To download files from an S3 bucket to an RDS for SQL Server DB instance, use the Amazon RDS stored procedure `msdb.dbo.rds_download_from_s3` with the following parameters.


| Parameter name | Data type | Default | Required | Description | 
| --- | --- | --- | --- | --- | 
|  `@s3_arn_of_file`  |  NVARCHAR  |  –  |  Required  |  The S3 ARN of the file to download, for example: `arn:aws:s3:::amzn-s3-demo-bucket/mydata.csv`  | 
|  `@rds_file_path`  |  NVARCHAR  |  –  |  Optional  |  The file path for the RDS instance. If not specified, the file path is `D:\S3\<filename in s3>`. RDS supports absolute paths and relative paths. If you want to create a subfolder, include it in the file path.  | 
|  `@overwrite_file`  |  INT  |  0  |  Optional  | Overwrite the existing file: `0` = don't overwrite, `1` = overwrite | 

You can download files without a file extension and files with the following file extensions: .bcp, .csv, .dat, .fmt, .info, .lst, .tbl, .txt, and .xml.

**Note**  
Files with the .ispac file extension are supported for download when SQL Server Integration Services is enabled. For more information on enabling SSIS, see [SQL Server Integration Services](Appendix.SQLServer.Options.SSIS.md).  
Files with the following file extensions are supported for download when SQL Server Analysis Services is enabled: .abf, .asdatabase, .configsettings, .deploymentoptions, .deploymenttargets, and .xmla. For more information on enabling SSAS, see [SQL Server Analysis Services](Appendix.SQLServer.Options.SSAS.md).
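Before queueing a download, you can validate the target file name against this allow list on the client side. A minimal sketch covering only the always-allowed extensions above (the SSIS and SSAS extensions would extend the set when those features are enabled):

```python
import os

# File extensions that are always allowed for download.
# A file with no extension is also allowed.
ALLOWED_EXTENSIONS = {".bcp", ".csv", ".dat", ".fmt", ".info",
                      ".lst", ".tbl", ".txt", ".xml"}

def is_downloadable(filename):
    """Return True if the file has no extension or an always-allowed one."""
    ext = os.path.splitext(filename)[1].lower()
    return ext == "" or ext in ALLOWED_EXTENSIONS
```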

The following example shows the stored procedure to download files from S3. 

```
exec msdb.dbo.rds_download_from_s3
	    @s3_arn_of_file='arn:aws:s3:::amzn-s3-demo-bucket/bulk_data.csv',
	    @rds_file_path='D:\S3\seed_data\data.csv',
	    @overwrite_file=1;
```

The example `rds_download_from_s3` operation creates a folder named `seed_data` in `D:\S3\`, if the folder doesn't exist yet. Then the example downloads the source file `bulk_data.csv` from S3 to a new file named `data.csv` on the DB instance. If the file previously existed, it's overwritten because the `@overwrite_file` parameter is set to `1`.

## Uploading files from a SQL Server DB instance to an Amazon S3 bucket
<a name="Appendix.SQLServer.Options.S3-integration.using.upload"></a>

To upload files from an RDS for SQL Server DB instance to an S3 bucket, use the Amazon RDS stored procedure `msdb.dbo.rds_upload_to_s3` with the following parameters.


| Parameter name | Data type | Default | Required | Description | 
| --- | --- | --- | --- | --- | 
|  `@s3_arn_of_file`  |  NVARCHAR  |  –  |  Required  |  The S3 ARN of the file to be created in S3, for example: `arn:aws:s3:::amzn-s3-demo-bucket/mydata.csv`  | 
|  `@rds_file_path`  |  NVARCHAR  |  –  |  Required  | The file path of the file to upload to S3. Absolute and relative paths are supported. | 
|  `@overwrite_file`  |  INT  |  –  |  Optional  |  Overwrite the existing file: `0` = don't overwrite, `1` = overwrite  | 

The following example uploads the file named `data.csv` from the specified location in `D:\S3\seed_data\` to a file `new_data.csv` in the S3 bucket specified by the ARN.

```
exec msdb.dbo.rds_upload_to_s3 
		@rds_file_path='D:\S3\seed_data\data.csv',
		@s3_arn_of_file='arn:aws:s3:::amzn-s3-demo-bucket/new_data.csv',
		@overwrite_file=1;
```

If the file previously existed in S3, it's overwritten because the `@overwrite_file` parameter is set to `1`.

# Listing files on the RDS DB instance
<a name="Appendix.SQLServer.Options.S3-integration.using.listing-files"></a>

To list the files available on the DB instance, use both a stored procedure and a function. First, run the following stored procedure to gather file details from the files in `D:\S3\`. 

```
exec msdb.dbo.rds_gather_file_details;
```

The stored procedure returns the ID of the task. Like other tasks, this stored procedure runs asynchronously. As soon as the status of the task is `SUCCESS`, you can use the task ID in the `rds_fn_list_file_details` function to list the existing files and directories in `D:\S3\`, as shown following.

```
SELECT * FROM msdb.dbo.rds_fn_list_file_details(TASK_ID);
```

The `rds_fn_list_file_details` function returns a table with the following columns.


| Output parameter | Description | 
| --- | --- | 
| `filepath` | Absolute path of the file (for example, `D:\S3\mydata.csv`) | 
| `size_in_bytes` | File size (in bytes) | 
| `last_modified_utc` | Last modification date and time in UTC format | 
| `is_directory` | Option that indicates whether the item is a directory (true/false) | 

# Deleting files on the RDS DB instance
<a name="Appendix.SQLServer.Options.S3-integration.using.deleting-files"></a>

To delete the files available on the DB instance, use the Amazon RDS stored procedure `msdb.dbo.rds_delete_from_filesystem` with the following parameters. 


| Parameter name | Data type | Default | Required | Description | 
| --- | --- | --- | --- | --- | 
|  `@rds_file_path`  |  NVARCHAR  |  –  |  Required  | The file path of the file to delete. Absolute and relative paths are supported.  | 
|  `@force_delete`  |  INT  | 0 |  Optional  |  To delete a directory, include this flag and set it to `1`. This parameter is ignored when you delete a file.  | 

To delete a directory, the `@rds_file_path` must end with a backslash (`\`) and `@force_delete` must be set to `1`.
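The two rules above (a trailing backslash for directories, and `@force_delete` set to `1`) can be checked before calling the stored procedure. A hedged Python sketch, with an illustrative helper name:

```python
def validate_delete_request(rds_file_path, force_delete=0):
    """Check a deletion request against the documented rules: a directory
    path ends in a backslash and requires force_delete=1. Returns whether
    the target is a 'file' or a 'directory'."""
    is_directory = rds_file_path.endswith("\\")
    if is_directory and force_delete != 1:
        raise ValueError("deleting a directory requires @force_delete = 1")
    return "directory" if is_directory else "file"
```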

The following example deletes the file `D:\S3\delete_me.txt`.

```
exec msdb.dbo.rds_delete_from_filesystem
    @rds_file_path='D:\S3\delete_me.txt';
```

The following example deletes the directory `D:\S3\example_folder\`.

```
exec msdb.dbo.rds_delete_from_filesystem
    @rds_file_path='D:\S3\example_folder\',
    @force_delete=1;
```

# Monitoring the status of a file transfer task
<a name="Appendix.SQLServer.Options.S3-integration.using.monitortasks"></a>

To track the status of your S3 integration task, call the `rds_fn_task_status` function. It takes two parameters. The first parameter should always be `NULL` because it doesn't apply to S3 integration. The second parameter accepts a task ID.

To see a list of all tasks, set the first parameter to `NULL` and the second parameter to `0`, as shown in the following example.

```
SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,0);
```

To get a specific task, set the first parameter to `NULL` and the second parameter to the task ID, as shown in the following example.

```
SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,42);
```

The `rds_fn_task_status` function returns the following information.


|  Output parameter  |  Description  | 
| --- | --- | 
|  `task_id`  |  The ID of the task.  | 
|  `task_type`  |  For S3 integration, tasks can have the following task types: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.Options.S3-integration.using.monitortasks.html)  | 
|  `database_name`  | Not applicable to S3 integration tasks. | 
|  `% complete`  |  The progress of the task as a percentage.  | 
|  `duration(mins)`  |  The amount of time spent on the task, in minutes.  | 
|  `lifecycle`  |  The status of the task. Possible statuses are the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.SQLServer.Options.S3-integration.using.monitortasks.html)  | 
|  `task_info`  |  Additional information about the task. If an error occurs during processing, this column contains information about the error.   | 
|  `last_updated`  |  The date and time that the task status was last updated.   | 
|  `created_at`  |  The date and time that the task was created.  | 
|  `S3_object_arn`  |  The ARN of the S3 object downloaded from or uploaded to.  | 
|  `overwrite_S3_backup_file`  |  Not applicable to S3 integration tasks.  | 
|  `KMS_master_key_arn`  |  Not applicable to S3 integration tasks.  | 
|  `filepath`  |  The file path on the RDS DB instance.  | 
|  `overwrite_file`  |  An option that indicates if an existing file is overwritten.  | 
|  `task_metadata`  |  Not applicable to S3 integration tasks.  | 

# Canceling a task
<a name="Appendix.SQLServer.Options.S3-integration.canceltasks"></a>

To cancel S3 integration tasks, use the `msdb.dbo.rds_cancel_task` stored procedure with the `task_id` parameter. You can't cancel delete and list tasks that are in progress. The following example shows a request to cancel a task. 

```
exec msdb.dbo.rds_cancel_task @task_id = 1234;
```

To get an overview of all tasks and their task IDs, use the `rds_fn_task_status` function as described in [Monitoring the status of a file transfer task](Appendix.SQLServer.Options.S3-integration.using.monitortasks.md).

## Multi-AZ limitations for S3 integration
<a name="S3-MAZ"></a>

On Multi-AZ instances, files in the `D:\S3` folder are deleted on the standby replica after a failover. A failover can be planned, for example, during DB instance modifications such as changing the instance class or upgrading the engine version. Or a failover can be unplanned, during an outage of the primary.

**Note**  
We don't recommend using the `D:\S3` folder for file storage. The best practice is to upload created files to Amazon S3 to make them durable, and download files when you need to import data.

To determine the last failover time, you can use the `msdb.dbo.rds_failover_time` stored procedure. For more information, see [Determining the last failover time for Amazon RDS for SQL Server](Appendix.SQLServer.CommonDBATasks.LastFailover.md).

**Example of no recent failover**  
This example shows the output when there is no recent failover in the error logs. No failover has happened since 2020-04-29 23:59:00.01.  
Therefore, all files downloaded after that time that haven't been deleted using the `rds_delete_from_filesystem` stored procedure are still accessible on the current host. Files downloaded before that time might also be available.  


| `errorlog_available_from` | `recent_failover_time` | 
| --- | --- | 
|  2020-04-29 23:59:00.0100000  |  null  | 

**Example of recent failover**  
This example shows the output when there is a failover in the error logs. The most recent failover was at 2020-05-05 18:57:51.89.  
All files downloaded after that time that haven't been deleted using the `rds_delete_from_filesystem` stored procedure are still accessible on the current host.  


| `errorlog_available_from` | `recent_failover_time` | 
| --- | --- | 
|  2020-04-29 23:59:00.0100000  |  2020-05-05 18:57:51.8900000  | 
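The reasoning in both examples reduces to a timestamp comparison: a file downloaded after the most recent failover is expected to still be on the current host. A hedged sketch of that check (a simplification; files downloaded before the error-log window might also still exist):

```python
from datetime import datetime

def file_survived_failover(download_time, recent_failover_time):
    """Return True if a file downloaded at download_time is expected to
    still be on the current host: either no failover is recorded in the
    error logs, or the download happened after the most recent failover."""
    if recent_failover_time is None:
        return True
    return download_time > recent_failover_time

downloaded = datetime(2020, 5, 6, 9, 0, 0)
failover = datetime(2020, 5, 5, 18, 57, 51)
print(file_survived_failover(downloaded, failover))  # True
```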

# Disabling RDS for SQL Server integration with S3
<a name="Appendix.SQLServer.Options.S3-integration.disabling"></a>

Following, you can find how to disable Amazon S3 integration with Amazon RDS for SQL Server. Files in `D:\S3\` aren't deleted when you disable S3 integration.

**Note**  
To remove an IAM role from a DB instance, the status of the DB instance must be `available`.

## Console
<a name="Appendix.SQLServer.Options.S3-integration.disabling.console"></a>

**To disassociate your IAM role from your DB instance**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose the RDS for SQL Server DB instance name to display its details.

1. On the **Connectivity & security** tab, in the **Manage IAM roles** section, choose the IAM role to remove.

1. Choose **Delete**.

## AWS CLI
<a name="Appendix.SQLServer.Options.S3-integration.disabling.cli"></a>

**To remove the IAM role from the RDS for SQL Server DB instance**
+ The following AWS CLI command removes the IAM role from an RDS for SQL Server DB instance named `mydbinstance`.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds remove-role-from-db-instance \
  	   --db-instance-identifier mydbinstance \
  	   --feature-name S3_INTEGRATION \
  	   --role-arn your-role-arn
  ```

  For Windows:

  ```
  aws rds remove-role-from-db-instance ^
  	   --db-instance-identifier mydbinstance ^
  	   --feature-name S3_INTEGRATION ^
  	   --role-arn your-role-arn
  ```

  Replace `your-role-arn` with the appropriate IAM role ARN for the `--feature-name` option.

# Using Database Mail on Amazon RDS for SQL Server
<a name="SQLServer.DBMail"></a>

You can use Database Mail to send email messages to users from your Amazon RDS for SQL Server DB instance. The messages can contain files and query results. Database Mail includes the following components:
+ **Configuration and security objects** – These objects create profiles and accounts, and are stored in the `msdb` database.
+ **Messaging objects** – These objects include the [sp\_send\_dbmail](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-send-dbmail-transact-sql) stored procedure used to send messages, and data structures that hold information about messages. They're stored in the `msdb` database.
+ **Logging and auditing objects** – Database Mail writes logging information to the `msdb` database and the Microsoft Windows application event log.
+ **Database Mail executable** – `DatabaseMail.exe` reads from a queue in the `msdb` database and sends email messages.

RDS supports Database Mail for all SQL Server versions on the Web, Standard, and Enterprise Editions.

## Limitations
<a name="SQLServer.DBMail.Limitations"></a>

The following limitations apply to using Database Mail on your SQL Server DB instance:
+ Database Mail isn't supported for SQL Server Express Edition.
+ Modifying Database Mail configuration parameters isn't supported. To see the preset (default) values, use the [sysmail\_help\_configure\_sp](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-help-configure-sp-transact-sql) stored procedure.
+ File attachments aren't fully supported. For more information, see [Working with file attachments](#SQLServer.DBMail.Files).
+ The maximum file attachment size is 1 MB.
+ Database Mail requires additional configuration on Multi-AZ DB instances. For more information, see [Considerations for Multi-AZ deployments](#SQLServer.DBMail.MAZ).
+ Configuring SQL Server Agent to send email messages to predefined operators isn't supported.

# Enabling Database Mail
<a name="SQLServer.DBMail.Enable"></a>

Use the following process to enable Database Mail for your DB instance:

1. Create a new parameter group.

1. Modify the parameter group to set the `database mail xps` parameter to 1.

1. Associate the parameter group with the DB instance.
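If you automate these steps, they map onto three RDS API calls. The following Python sketch builds the request parameters for each step using boto3-style names (the group name and family match the Standard Edition 2016 examples used below); pass each kwargs dict to the corresponding boto3 RDS client method:

```python
def dbmail_setup_requests(group_name, db_instance_identifier,
                          family="sqlserver-se-13.0"):
    """Build the three RDS API requests that enable Database Mail:
    create a parameter group, set 'database mail xps' to 1, and
    associate the group with the DB instance."""
    return [
        ("create_db_parameter_group", {
            "DBParameterGroupName": group_name,
            "DBParameterGroupFamily": family,
            "Description": "Database Mail XPs",
        }),
        ("modify_db_parameter_group", {
            "DBParameterGroupName": group_name,
            "Parameters": [{
                "ParameterName": "database mail xps",
                "ParameterValue": "1",
                "ApplyMethod": "immediate",
            }],
        }),
        ("modify_db_instance", {
            "DBInstanceIdentifier": db_instance_identifier,
            "DBParameterGroupName": group_name,
        }),
    ]
```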

## Creating the parameter group for Database Mail
<a name="DBMail.CreateParamGroup"></a>

Create a parameter group for the `database mail xps` parameter that corresponds to the SQL Server edition and version of your DB instance.

**Note**  
You can also modify an existing parameter group. Follow the procedure in [Modifying the parameter that enables Database Mail](#DBMail.ModifyParamGroup).

### Console
<a name="DBMail.CreateParamGroup.Console"></a>

The following example creates a parameter group for SQL Server Standard Edition 2016.

**To create the parameter group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Parameter groups**.

1. Choose **Create parameter group**.

1. In the **Create parameter group** pane, do the following:

   1. For **Parameter group family**, choose **sqlserver-se-13.0**.

   1. For **Group name**, enter an identifier for the parameter group, such as **dbmail-sqlserver-se-13**.

   1. For **Description**, enter **Database Mail XPs**.

1. Choose **Create**.

### CLI
<a name="DBMail.CreateParamGroup.CLI"></a>

The following example creates a parameter group for SQL Server Standard Edition 2016.

**To create the parameter group**
+ Use one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-db-parameter-group \
      --db-parameter-group-name dbmail-sqlserver-se-13 \
      --db-parameter-group-family "sqlserver-se-13.0" \
      --description "Database Mail XPs"
  ```

  For Windows:

  ```
  aws rds create-db-parameter-group ^
      --db-parameter-group-name dbmail-sqlserver-se-13 ^
      --db-parameter-group-family "sqlserver-se-13.0" ^
      --description "Database Mail XPs"
  ```

## Modifying the parameter that enables Database Mail
<a name="DBMail.ModifyParamGroup"></a>

Modify the `database mail xps` parameter in the parameter group that corresponds to the SQL Server edition and version of your DB instance.

To enable Database Mail, set the `database mail xps` parameter to 1.

### Console
<a name="DBMail.ModifyParamGroup.Console"></a>

The following example modifies the parameter group that you created for SQL Server Standard Edition 2016.

**To modify the parameter group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Parameter groups**.

1. Choose the parameter group, such as **dbmail-sqlserver-se-13**.

1. Under **Parameters**, filter the parameter list for **mail**.

1. Choose **database mail xps**.

1. Choose **Edit parameters**.

1. Enter **1**.

1. Choose **Save changes**.

### CLI
<a name="DBMail.ModifyParamGroup.CLI"></a>

The following example modifies the parameter group that you created for SQL Server Standard Edition 2016.

**To modify the parameter group**
+ Use one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-parameter-group \
      --db-parameter-group-name dbmail-sqlserver-se-13 \
      --parameters "ParameterName='database mail xps',ParameterValue=1,ApplyMethod=immediate"
  ```

  For Windows:

  ```
  aws rds modify-db-parameter-group ^
      --db-parameter-group-name dbmail-sqlserver-se-13 ^
      --parameters "ParameterName='database mail xps',ParameterValue=1,ApplyMethod=immediate"
  ```

## Associating the parameter group with the DB instance
<a name="DBMail.AssocParamGroup"></a>

You can use the AWS Management Console or the AWS CLI to associate the Database Mail parameter group with the DB instance.

### Console
<a name="DBMail.AssocParamGroup.Console"></a>

You can associate the Database Mail parameter group with a new or existing DB instance.
+ For a new DB instance, associate it when you launch the instance. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).
+ For an existing DB instance, associate it by modifying the instance. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

### CLI
<a name="DBMail.AssocParamGroup.CLI"></a>

You can associate the Database Mail parameter group with a new or existing DB instance.

**To create a DB instance with the Database Mail parameter group**
+ Specify the same DB engine type and major version as you used when creating the parameter group.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-db-instance \
      --db-instance-identifier mydbinstance \
      --db-instance-class db.m5.2xlarge \
      --engine sqlserver-se \
      --engine-version 13.00.5426.0.v1 \
      --allocated-storage 100 \
      --manage-master-user-password \
      --master-username admin \
      --storage-type gp2 \
      --license-model li \
      --db-parameter-group-name dbmail-sqlserver-se-13
  ```

  For Windows:

  ```
  aws rds create-db-instance ^
      --db-instance-identifier mydbinstance ^
      --db-instance-class db.m5.2xlarge ^
      --engine sqlserver-se ^
      --engine-version 13.00.5426.0.v1 ^
      --allocated-storage 100 ^
      --manage-master-user-password ^
      --master-username admin ^
      --storage-type gp2 ^
      --license-model li ^
      --db-parameter-group-name dbmail-sqlserver-se-13
  ```

**To modify a DB instance and associate the Database Mail parameter group**
+ Use one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-instance \
      --db-instance-identifier mydbinstance \
      --db-parameter-group-name dbmail-sqlserver-se-13 \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds modify-db-instance ^
      --db-instance-identifier mydbinstance ^
      --db-parameter-group-name dbmail-sqlserver-se-13 ^
      --apply-immediately
  ```

# Configuring Database Mail
<a name="SQLServer.DBMail.Configure"></a>

You perform the following tasks to configure Database Mail:

1. Create the Database Mail profile.

1. Create the Database Mail account.

1. Add the Database Mail account to the Database Mail profile.

1. Add users to the Database Mail profile.

**Note**  
To configure Database Mail, make sure that you have `execute` permission on the stored procedures in the `msdb` database.

## Creating the Database Mail profile
<a name="SQLServer.DBMail.Configure.Profile"></a>

To create the Database Mail profile, you use the [sysmail_add_profile_sp](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-add-profile-sp-transact-sql) stored procedure. The following example creates a profile named `Notifications`.

**To create the profile**
+ Use the following SQL statement.

  ```
  USE msdb
  GO
  
  EXECUTE msdb.dbo.sysmail_add_profile_sp  
      @profile_name         = 'Notifications',  
      @description          = 'Profile used for sending outgoing notifications using Amazon SES.';
  GO
  ```

## Creating the Database Mail account
<a name="SQLServer.DBMail.Configure.Account"></a>

To create the Database Mail account, you use the [sysmail_add_account_sp](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-add-account-sp-transact-sql) stored procedure. The following example creates an account named `SES` on an RDS for SQL Server DB instance in a private VPC, using Amazon Simple Email Service (Amazon SES).

Using Amazon SES requires the following parameters:
+ `@email_address` – An Amazon SES verified identity. For more information, see [Verified identities in Amazon SES](https://docs.aws.amazon.com/ses/latest/dg/verify-addresses-and-domains.html).
+ `@mailserver_name` – An Amazon SES SMTP endpoint. For more information, see [Connecting to an Amazon SES SMTP endpoint](https://docs.aws.amazon.com/ses/latest/dg/smtp-connect.html).
+ `@username` – An Amazon SES SMTP user name. For more information, see [Obtaining Amazon SES SMTP credentials](https://docs.aws.amazon.com/ses/latest/dg/smtp-credentials.html).

  Don't use an AWS Identity and Access Management user name.
+ `@password` – An Amazon SES SMTP password. For more information, see [Obtaining Amazon SES SMTP credentials](https://docs.aws.amazon.com/ses/latest/dg/smtp-credentials.html).

**To create the account**
+ Use the following SQL statement.

  ```
  USE msdb
  GO
  
  EXECUTE msdb.dbo.sysmail_add_account_sp
      @account_name        = 'SES',
      @description         = 'Mail account for sending outgoing notifications.',
      @email_address       = 'nobody@example.com',
      @display_name        = 'Automated Mailer',
      @mailserver_name     = 'vpce-0a1b2c3d4e5f-01234567.email-smtp.us-west-2.vpce.amazonaws.com',
      @port                = 587,
      @enable_ssl          = 1,
      @username            = 'Smtp_Username',
      @password            = 'Smtp_Password';
  GO
  ```
**Note**  
As a security best practice, use credentials other than the example values shown here.

## Adding the Database Mail account to the Database Mail profile
<a name="SQLServer.DBMail.Configure.AddAccount"></a>

To add the Database Mail account to the Database Mail profile, you use the [sysmail_add_profileaccount_sp](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-add-profileaccount-sp-transact-sql) stored procedure. The following example adds the `SES` account to the `Notifications` profile.

**To add the account to the profile**
+ Use the following SQL statement.

  ```
  USE msdb
  GO
  
  EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
      @profile_name        = 'Notifications',
      @account_name        = 'SES',
      @sequence_number     = 1;
  GO
  ```

## Adding users to the Database Mail profile
<a name="SQLServer.DBMail.Configure.AddUser"></a>

To grant permission for an `msdb` database principal to use a Database Mail profile, you use the [sysmail_add_principalprofile_sp](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sysmail-add-principalprofile-sp-transact-sql) stored procedure. A *principal* is an entity that can request SQL Server resources. The database principal must map to a SQL Server authentication user, a Windows Authentication user, or a Windows Authentication group.

The following example grants public access to the `Notifications` profile.

**To add a user to the profile**
+ Use the following SQL statement.

  ```
  USE msdb
  GO
  
  EXECUTE msdb.dbo.sysmail_add_principalprofile_sp  
      @profile_name       = 'Notifications',  
      @principal_name     = 'public',  
      @is_default         = 1;
  GO
  ```

## Amazon RDS stored procedures and functions for Database Mail
<a name="SQLServer.DBMail.StoredProc"></a>

Microsoft provides [stored procedures](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/database-mail-stored-procedures-transact-sql) for using Database Mail, such as creating, listing, updating, and deleting accounts and profiles. In addition, RDS provides the stored procedures and functions for Database Mail shown in the following table.


| Procedure/Function | Description | 
| --- | --- | 
| `rds_fn_sysmail_allitems` | Shows sent messages, including those submitted by other users. | 
| `rds_fn_sysmail_event_log` | Shows events, including those for messages submitted by other users. | 
| `rds_fn_sysmail_mailattachments` | Shows attachments, including those to messages submitted by other users. | 
| `rds_sysmail_control` | Starts and stops the mail queue (DatabaseMail.exe process). | 
| `rds_sysmail_delete_mailitems_sp` | Deletes email messages sent by all users from the Database Mail internal tables. | 

# Sending email messages using Database Mail
<a name="SQLServer.DBMail.Send"></a>

You use the [sp_send_dbmail](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-send-dbmail-transact-sql) stored procedure to send email messages using Database Mail.

## Usage
<a name="SQLServer.DBMail.Send.Usage"></a>

```
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'profile_name',
@recipients = 'recipient1@example.com[; recipient2; ... recipientn]',
@subject = 'subject',
@body = 'message_body',
[@body_format = 'HTML'],
[@file_attachments = 'file_path1; file_path2; ... file_pathn'],
[@query = 'SQL_query'],
[@attach_query_result_as_file = 0|1];
```

The following parameters are required:
+ `@profile_name` – The name of the Database Mail profile from which to send the message.
+ `@recipients` – The semicolon-delimited list of email addresses to which to send the message.
+ `@subject` – The subject of the message.
+ `@body` – The body of the message. You can also use a declared variable as the body.

The following parameters are optional:
+ `@body_format` – This parameter is used with a declared variable to send email in HTML format.
+ `@file_attachments` – The semicolon-delimited list of message attachments. File paths must be absolute paths.
+ `@query` – A SQL query to run. The query results can be attached as a file or included in the body of the message.
+ `@attach_query_result_as_file` – Whether to attach the query result as a file. Set to 0 for no, 1 for yes. The default is 0.
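
Because each argument value is a T-SQL string literal, any single quote inside a value must be doubled. The following Python sketch shows a hypothetical helper (not part of SQL Server or RDS) that assembles an `sp_send_dbmail` call with that escaping:

```python
# Hypothetical helper: build an sp_send_dbmail call from Python values,
# doubling single quotes (the T-SQL escape for a quote inside a literal).
def build_send_dbmail(profile, recipients, subject, body):
    def q(s):
        # 'It''s done.' is how T-SQL represents the literal It's done.
        return "'" + s.replace("'", "''") + "'"

    return (
        "EXEC msdb.dbo.sp_send_dbmail "
        f"@profile_name = {q(profile)}, "
        f"@recipients = {q(';'.join(recipients))}, "  # semicolon-delimited list
        f"@subject = {q(subject)}, "
        f"@body = {q(body)};"
    )

stmt = build_send_dbmail(
    "Notifications",
    ["recipient1@example.com", "recipient2@example.com"],
    "Nightly job",
    "It's done.",
)
print(stmt)
```

The helper only covers the four required parameters; optional parameters such as `@body_format` would be appended the same way.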

## Examples
<a name="SQLServer.DBMail.Send.Examples"></a>

The following examples demonstrate how to send email messages.

**Example of sending a message to a single recipient**  

```
USE msdb
GO

EXEC msdb.dbo.sp_send_dbmail
     @profile_name       = 'Notifications',
     @recipients         = 'nobody@example.com',
     @subject            = 'Automated DBMail message - 1',
     @body               = 'Database Mail configuration was successful.';
GO
```

**Example of sending a message to multiple recipients**  

```
USE msdb
GO

EXEC msdb.dbo.sp_send_dbmail
     @profile_name       = 'Notifications',
     @recipients         = 'recipient1@example.com;recipient2@example.com',
     @subject            = 'Automated DBMail message - 2',
     @body               = 'This is a message.';
GO
```

**Example of sending a SQL query result as a file attachment**  

```
USE msdb
GO

EXEC msdb.dbo.sp_send_dbmail
     @profile_name       = 'Notifications',
     @recipients         = 'nobody@example.com',
     @subject            = 'Test SQL query',
     @body               = 'This is a SQL query test.',
     @query              = 'SELECT * FROM abc.dbo.test',
     @attach_query_result_as_file = 1;
GO
```

**Example of sending a message in HTML format**  

```
USE msdb
GO

DECLARE @HTML_Body as NVARCHAR(500) = 'Hi, <h4> Heading </h4> </br> See the report. <b> Regards </b>';

EXEC msdb.dbo.sp_send_dbmail
     @profile_name       = 'Notifications',
     @recipients         = 'nobody@example.com',
     @subject            = 'Test HTML message',
     @body               = @HTML_Body,
     @body_format        = 'HTML';
GO
```

**Example of sending a message using a trigger when a specific event occurs in the database**  

```
USE AdventureWorks2017
GO
IF OBJECT_ID ('Production.iProductNotification', 'TR') IS NOT NULL
DROP TRIGGER Purchasing.iProductNotification
GO

CREATE TRIGGER iProductNotification ON Production.Product
   FOR INSERT
   AS
   DECLARE @ProductInformation nvarchar(255);
   SELECT
   @ProductInformation = 'A new product, ' + Name + ', is now available for $' + CAST(StandardCost AS nvarchar(20)) + '!'
   FROM INSERTED i;

EXEC msdb.dbo.sp_send_dbmail
     @profile_name       = 'Notifications',
     @recipients         = 'nobody@example.com',
     @subject            = 'New product information',
     @body               = @ProductInformation;
GO
```

# Viewing messages, logs, and attachments
<a name="SQLServer.DBMail.View"></a>

You use RDS stored procedures to view messages, event logs, and attachments.

**To view all email messages**
+ Use the following SQL query.

  ```
  SELECT * FROM msdb.dbo.rds_fn_sysmail_allitems(); -- Optionally filter with WHERE sent_status = 'sent', 'failed', or 'unsent'
  ```

**To view all email event logs**
+ Use the following SQL query.

  ```
  SELECT * FROM msdb.dbo.rds_fn_sysmail_event_log();
  ```

**To view all email attachments**
+ Use the following SQL query.

  ```
  SELECT * FROM msdb.dbo.rds_fn_sysmail_mailattachments();
  ```

# Deleting messages
<a name="SQLServer.DBMail.Delete"></a>

You use the `rds_sysmail_delete_mailitems_sp` stored procedure to delete messages.

**Note**  
RDS automatically deletes mail table items when DBMail history data reaches 1 GB in size, with a retention period of at least 24 hours.  
If you want to keep mail items for a longer period, you can archive them. For more information, see [Create a SQL Server Agent job to archive Database Mail messages and event logs](https://docs.microsoft.com/en-us/sql/relational-databases/database-mail/create-a-sql-server-agent-job-to-archive-database-mail-messages-and-event-logs) in the Microsoft documentation.

**To delete all email messages**
+ Use the following SQL statement.

  ```
  DECLARE @GETDATE datetime
  SET @GETDATE = GETDATE();
  EXECUTE msdb.dbo.rds_sysmail_delete_mailitems_sp @sent_before = @GETDATE;
  GO
  ```

**To delete all email messages with a particular status**
+ Use the following SQL statement to delete all failed messages.

  ```
  EXECUTE msdb.dbo.rds_sysmail_delete_mailitems_sp @sent_status = 'failed';
  GO
  ```

# Starting and stopping mail queue
<a name="SQLServer.DBMail.StartStop"></a>

Use the following instructions to start and stop the Database Mail queue.

**Topics**
+ [Starting the mail queue](#SQLServer.DBMail.Start)
+ [Stopping the mail queue](#SQLServer.DBMail.Stop)

## Starting the mail queue
<a name="SQLServer.DBMail.Start"></a>

You use the `rds_sysmail_control` stored procedure to start the Database Mail process.

**Note**  
Enabling Database Mail automatically starts the mail queue.

**To start the mail queue**
+ Use the following SQL statement.

  ```
  EXECUTE msdb.dbo.rds_sysmail_control start;
  GO
  ```

## Stopping the mail queue
<a name="SQLServer.DBMail.Stop"></a>

You use the `rds_sysmail_control` stored procedure to stop the Database Mail process.

**To stop the mail queue**
+ Use the following SQL statement.

  ```
  EXECUTE msdb.dbo.rds_sysmail_control stop;
  GO
  ```

## Working with file attachments
<a name="SQLServer.DBMail.Files"></a>

The following file attachment extensions aren't supported in Database Mail messages from RDS on SQL Server: .ade, .adp, .apk, .appx, .appxbundle, .bat, .bak, .cab, .chm, .cmd, .com, .cpl, .dll, .dmg, .exe, .hta, .inf1, .ins, .isp, .iso, .jar, .job, .js, .jse, .ldf, .lib, .lnk, .mde, .mdf, .msc, .msi, .msix, .msixbundle, .msp, .mst, .nsh, .pif, .ps, .ps1, .psc1, .reg, .rgs, .scr, .sct, .shb, .shs, .svg, .sys, .u3p, .vb, .vbe, .vbs, .vbscript, .vxd, .ws, .wsc, .wsf, and .wsh.
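
If you generate mail programmatically, you might screen attachments before calling `sp_send_dbmail`. The following Python sketch is a hypothetical client-side check (not an RDS feature) built from the extension list above:

```python
import os

# Extensions that RDS for SQL Server won't send via Database Mail,
# copied from the documentation above.
BLOCKED_EXTENSIONS = {
    ".ade", ".adp", ".apk", ".appx", ".appxbundle", ".bat", ".bak", ".cab",
    ".chm", ".cmd", ".com", ".cpl", ".dll", ".dmg", ".exe", ".hta", ".inf1",
    ".ins", ".isp", ".iso", ".jar", ".job", ".js", ".jse", ".ldf", ".lib",
    ".lnk", ".mde", ".mdf", ".msc", ".msi", ".msix", ".msixbundle", ".msp",
    ".mst", ".nsh", ".pif", ".ps", ".ps1", ".psc1", ".reg", ".rgs", ".scr",
    ".sct", ".shb", ".shs", ".svg", ".sys", ".u3p", ".vb", ".vbe", ".vbs",
    ".vbscript", ".vxd", ".ws", ".wsc", ".wsf", ".wsh",
}

def attachment_allowed(filename: str) -> bool:
    # Compare the lowercased extension against the blocked set.
    ext = os.path.splitext(filename)[1].lower()
    return ext not in BLOCKED_EXTENSIONS
```

Checking up front avoids a round trip to the server only to have the message fail to send.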

Database Mail uses the Microsoft Windows security context of the current user to control access to files. Users who log in with SQL Server Authentication can't attach files using the `@file_attachments` parameter with the `sp_send_dbmail` stored procedure. Windows doesn't allow SQL Server to provide credentials from a remote computer to another remote computer. Therefore, Database Mail can't attach files from a network share when the command is run from a computer other than the computer running SQL Server.

However, you can use SQL Server Agent jobs to attach files. For more information on SQL Server Agent, see [Using SQL Server Agent for Amazon RDS](Appendix.SQLServer.CommonDBATasks.Agent.md) and [SQL Server Agent](https://docs.microsoft.com/en-us/sql/ssms/agent/sql-server-agent) in the Microsoft documentation.

## Considerations for Multi-AZ deployments
<a name="SQLServer.DBMail.MAZ"></a>

When you configure Database Mail on a Multi-AZ DB instance, the configuration isn't automatically propagated to the secondary. We recommend converting the Multi-AZ instance to a Single-AZ instance, configuring Database Mail, and then converting the DB instance back to Multi-AZ. Then both the primary and secondary nodes have the Database Mail configuration.

If you create a read replica from your Multi-AZ instance that has Database Mail configured, the replica inherits the configuration, but without the password to the SMTP server. Update the Database Mail account with the password.

## Removing the SMTP (port 25) restriction
<a name="SQLServer.DBMail.SMTP"></a>

By default, AWS blocks outbound traffic on SMTP (port 25) for RDS for SQL Server DB instances. This is done to prevent spam based on the elastic network interface owner's policies. You can remove this restriction if needed. For more information, see [ How do I remove the restriction on port 25 from my Amazon EC2 instance or Lambda function?](https://repost.aws/knowledge-center/ec2-port-25-throttle). 

# Instance store support for the tempdb database on Amazon RDS for SQL Server
<a name="SQLServer.InstanceStore"></a>

An *instance store* provides temporary block-level storage for your DB instance. This storage is located on disks that are physically attached to the host computer. These disks have Non-Volatile Memory Express (NVMe) instance storage that is based on solid-state drives (SSDs). This storage is optimized for low latency, very high random I/O performance, and high sequential read throughput.

By placing `tempdb` data files and `tempdb` log files on the instance store, you can achieve lower read and write latencies compared to standard storage based on Amazon EBS.

**Note**  
SQL Server database files and database log files aren't placed on the instance store.

## Enabling the instance store
<a name="SQLServer.InstanceStore.Enable"></a>

When RDS provisions DB instances with one of the following instance classes, the `tempdb` database is automatically placed onto the instance store:
+ db.m5d
+ db.r5d
+ db.x2iedn

To enable the instance store, do one of the following:
+ Create a SQL Server DB instance using one of these instance types. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).
+ Modify an existing SQL Server DB instance to use one of them. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

The instance store is available in all AWS Regions where one or more of these instance types are supported. For more information on the `db.m5d` and `db.r5d` instance classes, see [DB instance classes](Concepts.DBInstanceClass.md). For more information on the instance classes supported by Amazon RDS for SQL Server, see [DB instance class support for Microsoft SQL Server](SQLServer.Concepts.General.InstanceClasses.md).

## File location and size considerations
<a name="SQLServer.InstanceStore.Files"></a>

On instances without an instance store, RDS stores the `tempdb` data and log files in the `D:\rdsdbdata\DATA` directory. Both files start at 8 MB by default.

On instances with an instance store, RDS stores the `tempdb` data and log files in the `T:\rdsdbdata\DATA` directory.

When `tempdb` has only one data file (`tempdb.mdf`) and one log file (`templog.ldf`), `templog.ldf` starts at 8 MB by default and `tempdb.mdf` starts at 80% or more of the instance's storage capacity. Twenty percent of the storage capacity or 200 GB, whichever is less, is kept free to start. Multiple `tempdb` data files split the 80% disk space evenly, while log files always have an 8-MB initial size.

For example, if you modify your DB instance class from `db.m5.2xlarge` to `db.m5d.2xlarge`, the size of `tempdb` data files increases from 8 MB each to 234 GB in total.

**Note**  
Besides the `tempdb` data and log files on the instance store (`T:\rdsdbdata\DATA`), you can still create extra `tempdb` data and log files on the data volume (`D:\rdsdbdata\DATA`). Those files always have an 8 MB initial size.
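
The sizing rule can be sketched as arithmetic. In the following Python sketch, the 300 GB figure for `db.m5d.2xlarge` is an assumption based on its NVMe instance store size; observed usable sizes (such as the 234 GB mentioned above) are lower because of filesystem formatting overhead, which isn't modeled here:

```python
# Documented rule: keep min(20% of capacity, 200 GB) free;
# tempdb data files share the remainder evenly.
def tempdb_data_size_gb(instance_store_gb: float, data_files: int = 1) -> float:
    reserved = min(0.20 * instance_store_gb, 200.0)
    return (instance_store_gb - reserved) / data_files

print(tempdb_data_size_gb(300))      # assumed db.m5d.2xlarge store: 240.0 GB before overhead
print(tempdb_data_size_gb(3800, 4))  # larger store, 4 data files: the 200 GB cap applies
```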

## Backup considerations
<a name="SQLServer.InstanceStore.Backups"></a>

You might need to retain backups for long periods, incurring costs over time. The `tempdb` data and log blocks can change very often depending on the workload. This can greatly increase the DB snapshot size.

When `tempdb` is on the instance store, snapshots don't include temporary files. This means that snapshot sizes are smaller and consume less of the free backup allocation compared to EBS-only storage.

## Disk full errors
<a name="SQLServer.InstanceStore.DiskFull"></a>

If you use all of the available space in the instance store, you might receive errors such as the following:
+ The transaction log for database 'tempdb' is full due to 'ACTIVE_TRANSACTION'.
+ Could not allocate space for object 'dbo.SORT temporary run storage: 140738941419520' in database 'tempdb' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

You can do one or more of the following when the instance store is full:
+ Adjust your workload or the way you use `tempdb`.
+ Scale up to use a DB instance class with more NVMe storage.
+ Stop using the instance store, and use an instance class with only EBS storage.
+ Use a mixed mode by adding secondary data or log files for `tempdb` on the EBS volume.

## Removing the instance store
<a name="SQLServer.InstanceStore.Disable"></a>

To remove the instance store, modify your SQL Server DB instance to use an instance type that doesn't support instance store, such as db.m5, db.r5, or db.x1e.

**Note**  
When you remove the instance store, the temporary files are moved to the `D:\rdsdbdata\DATA` directory and reduced in size to 8 MB.

# Using extended events with Amazon RDS for Microsoft SQL Server
<a name="SQLServer.ExtendedEvents"></a>

You can use extended events in Microsoft SQL Server to capture debugging and troubleshooting information for Amazon RDS for SQL Server. Extended events replace SQL Trace and Server Profiler, which have been deprecated by Microsoft. Extended events are similar to profiler traces but with more granular control on the events being traced. Extended events are supported for SQL Server versions 2016 and later on Amazon RDS. For more information, see [Extended events overview](https://docs.microsoft.com/en-us/sql/relational-databases/extended-events/extended-events) in the Microsoft documentation.

Extended events are turned on automatically for users with master user privileges in Amazon RDS for SQL Server.

**Topics**
+ [Limitations and recommendations](#SQLServer.ExtendedEvents.Limits)
+ [Configuring extended events on RDS for SQL Server](#SQLServer.ExtendedEvents.Config)
+ [Considerations for Multi-AZ deployments](#SQLServer.ExtendedEvents.MAZ)
+ [Querying extended event files](#SQLServer.ExtendedEvents.Querying)

## Limitations and recommendations
<a name="SQLServer.ExtendedEvents.Limits"></a>

When using extended events on RDS for SQL Server, the following limitations apply:
+ Extended events are supported only for the Enterprise and Standard Editions.
+ You can't alter default extended event sessions.
+ Make sure to set the session memory partition mode to `NONE`.
+ Session event retention mode can be either `ALLOW_SINGLE_EVENT_LOSS` or `ALLOW_MULTIPLE_EVENT_LOSS`.
+ Event Tracing for Windows (ETW) targets aren't supported.
+ Make sure that file targets are in the `D:\rdsdbdata\log` directory.
+ For pair matching targets, set the `respond_to_memory_pressure` property to `1`.
+ Ring buffer target memory can't be greater than 4 MB.
+ The following actions aren't supported:
  + `debug_break`
  + `create_dump_all_threads`
  + `create_dump_single_threads`
+ The `rpc_completed` event is supported on the following versions and later: 15.0.4083.2, 14.0.3370.1, 13.0.5865.1, 12.0.6433.1, 11.0.7507.2.
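
Build strings like these compare component-wise, not as plain text. The following Python sketch (a hypothetical helper, not an RDS API) parses a build string and checks it against the minimum builds listed above:

```python
# Minimum builds that support the rpc_completed event, one per major version,
# copied from the documentation above.
RPC_COMPLETED_MIN_BUILDS = [
    (15, 0, 4083, 2),
    (14, 0, 3370, 1),
    (13, 0, 5865, 1),
    (12, 0, 6433, 1),
    (11, 0, 7507, 2),
]

def supports_rpc_completed(version: str) -> bool:
    # Split "15.0.4083.2" into the tuple (15, 0, 4083, 2) for numeric comparison.
    build = tuple(int(part) for part in version.split("."))
    for minimum in RPC_COMPLETED_MIN_BUILDS:
        if build[0] == minimum[0]:
            # Compare against the minimum for this build's own major version.
            return build >= minimum
    return False
```

You can read the running build from `SELECT SERVERPROPERTY('ProductVersion')` and feed it to a check like this.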

## Configuring extended events on RDS for SQL Server
<a name="SQLServer.ExtendedEvents.Config"></a>

On RDS for SQL Server, you can configure the values of certain parameters of extended event sessions. The following table describes the configurable parameters.


| Parameter name | Description | RDS default value | Minimum value | Maximum value | 
| --- | --- | --- | --- | --- | 
| `xe_session_max_memory` | Specifies the maximum amount of memory to allocate to the session for event buffering. This value corresponds to the `max_memory` setting of the event session. | 4 MB | 4 MB | 8 MB | 
| `xe_session_max_event_size` | Specifies the maximum memory size allowed for large events. This value corresponds to the `max_event_size` setting of the event session. | 4 MB | 4 MB | 8 MB | 
| `xe_session_max_dispatch_latency` | Specifies the amount of time that events are buffered in memory before being dispatched to extended event session targets. This value corresponds to the `max_dispatch_latency` setting of the event session. | 30 seconds | 1 second | 30 seconds | 
| `xe_file_target_size` | Specifies the maximum size of the file target. This value corresponds to the `max_file_size` setting of the file target. | 100 MB | 10 MB | 1 GB | 
| `xe_file_retention` | Specifies the retention time in days for files generated by the file targets of event sessions. | 7 days | 0 days | 7 days | 

**Note**  
Setting `xe_file_retention` to zero causes .xel files to be removed automatically after the lock on these files is released by SQL Server. The lock is released whenever an .xel file reaches the size limit set in `xe_file_target_size`.

You can use the `rdsadmin.dbo.rds_show_configuration` stored procedure to show the current values of these parameters. For example, use the following SQL statement to view the current setting of `xe_session_max_memory`.

```
exec rdsadmin.dbo.rds_show_configuration 'xe_session_max_memory'
```

You can use the `rdsadmin.dbo.rds_set_configuration` stored procedure to modify them. For example, use the following SQL statement to set `xe_session_max_memory` to 4 MB.

```
exec rdsadmin.dbo.rds_set_configuration 'xe_session_max_memory', 4
```

## Considerations for Multi-AZ deployments
<a name="SQLServer.ExtendedEvents.MAZ"></a>

When you create an extended event session on a primary DB instance, it doesn't propagate to the standby replica. You can fail over and create the extended event session on the new primary DB instance. Or you can remove and then re-add the Multi-AZ configuration to propagate the extended event session to the standby replica. RDS stops all nondefault extended event sessions on the standby replica, so that these sessions don't consume resources on the standby. Because of this, after a standby replica becomes the primary DB instance, make sure to manually start the extended event sessions on the new primary.

**Note**  
This approach applies to both Always On Availability Groups and Database Mirroring.

You can also use a SQL Server Agent job to track the standby replica and start the sessions if the standby becomes the primary. For example, use the following query in your SQL Server Agent job step to restart event sessions on a primary DB instance.

```
BEGIN
    IF (DATABASEPROPERTYEX('rdsadmin','Updateability')='READ_WRITE'
    AND DATABASEPROPERTYEX('rdsadmin','status')='ONLINE'
    AND (DATABASEPROPERTYEX('rdsadmin','Collation') IS NOT NULL OR DATABASEPROPERTYEX('rdsadmin','IsAutoClose')=1)
    )
    BEGIN
        IF NOT EXISTS (SELECT 1 FROM sys.dm_xe_sessions WHERE name='xe1')
            ALTER EVENT SESSION xe1 ON SERVER STATE=START
        IF NOT EXISTS (SELECT 1 FROM sys.dm_xe_sessions WHERE name='xe2')
            ALTER EVENT SESSION xe2 ON SERVER STATE=START
    END
END
```

This query restarts the event sessions `xe1` and `xe2` on a primary DB instance if these sessions are in a stopped state. You can also add a schedule with a convenient interval to this query.

## Querying extended event files
<a name="SQLServer.ExtendedEvents.Querying"></a>

You can use either SQL Server Management Studio or the `sys.fn_xe_file_target_read_file` function to view data from extended events that use file targets. For more information on this function, see [sys.fn_xe_file_target_read_file (Transact-SQL)](https://docs.microsoft.com/en-us/sql/relational-databases/system-functions/sys-fn-xe-file-target-read-file-transact-sql) in the Microsoft documentation.

Extended event file targets can only write files to the `D:\rdsdbdata\log` directory on RDS for SQL Server.

As an example, use the following SQL query to list the contents of all files of extended event sessions whose names start with `xe`.

```
SELECT * FROM sys.fn_xe_file_target_read_file('d:\rdsdbdata\log\xe*', null,null,null);
```

# Access to transaction log backups with RDS for SQL Server
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess"></a>

With access to transaction log backups for RDS for SQL Server, you can list the transaction log backup files for a database and copy them to a target Amazon S3 bucket. By copying transaction log backups to an Amazon S3 bucket, you can use them in combination with full and differential database backups to perform point-in-time database restores. You use RDS stored procedures to set up access to transaction log backups, list available transaction log backups, and copy them to your Amazon S3 bucket.

Access to transaction log backups provides the following capabilities and benefits:
+ List and view the metadata of available transaction log backups for a database on an RDS for SQL Server DB instance.
+ Copy available transaction log backups from RDS for SQL Server to a target Amazon S3 bucket.
+ Perform point-in-time restores of databases without the need to restore an entire DB instance. For more information on restoring a DB instance to a point in time, see [Restoring a DB instance to a specified time for Amazon RDS](USER_PIT.md).

## Availability and support
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.Availability"></a>

Access to transaction log backups is supported in all AWS Regions. Access to transaction log backups is available for all editions and versions of Microsoft SQL Server supported on Amazon RDS. 

## Requirements
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.Requirements"></a>

The following requirements must be met before enabling access to transaction log backups: 
+  Automated backups must be enabled on the DB instance and the backup retention must be set to a value of one or more days. For more information on enabling automated backups and configuring a retention policy, see [Enabling automated backups](USER_WorkingWithAutomatedBackups.Enabling.md). 
+ An Amazon S3 bucket must exist in the same account and Region as the source DB instance. Before enabling access to transaction log backups, choose an existing Amazon S3 bucket or [create a new bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingaBucket.html) to use for your transaction log backup files.
+ An Amazon S3 bucket permissions policy must be configured as follows to allow Amazon RDS to copy transaction log files into it:

  1. Set the object ownership setting on the bucket to **Bucket owner preferred**.

  1. Add the following bucket policy. There is no policy by default, so edit the bucket's permissions to add the policy.

  

  The following example uses an ARN to specify a resource. We recommend using the `SourceArn` and `SourceAccount` global condition context keys in resource-based trust relationships to limit the service's permissions to a specific resource. For more information on working with ARNs, see [ Amazon resource names (ARNs)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) and [Amazon Resource Names (ARNs) in Amazon RDS](USER_Tagging.ARN.md).

    
**Example of an Amazon S3 permissions policy for access to transaction log backups**  


  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "Only allow writes to my bucket with bucket owner full control",
              "Effect": "Allow",
              "Principal": {
                  "Service": "backups.rds.amazonaws.com"
              },
              "Action": "s3:PutObject",
              "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/{customer_path}/*",
              "Condition": {
                  "StringEquals": {
                      "s3:x-amz-acl": "bucket-owner-full-control",
                      "aws:sourceAccount": "{customer_account}",
                      "aws:sourceArn": "{db_instance_arn}"
                  }
              }
          }
      ]
  }
  ```

+ An AWS Identity and Access Management (IAM) role to access the Amazon S3 bucket. If you already have an IAM role, you can use that. You can choose to have a new IAM role created for you when you add the `SQLSERVER_BACKUP_RESTORE` option by using the AWS Management Console. Alternatively, you can create a new one manually. For more information on creating and configuring an IAM role with `SQLSERVER_BACKUP_RESTORE`, see [Manually creating an IAM role for native backup and restore](SQLServer.Procedural.Importing.Native.Enabling.md#SQLServer.Procedural.Importing.Native.Enabling.IAM).
+ The `SQLSERVER_BACKUP_RESTORE` option must be added to an option group on your DB instance. For more information on adding the `SQLSERVER_BACKUP_RESTORE` option, see [Support for native backup and restore in SQL Server](Appendix.SQLServer.Options.BackupRestore.md).
**Note**  
If your DB instance has storage encryption enabled, the AWS KMS key and the required KMS actions must be included in the IAM role used by the native backup and restore option group.

  Optionally, if you intend to use the `rds_restore_log` stored procedure to perform point-in-time database restores, we recommend using the same Amazon S3 path for the native backup and restore option group and access to transaction log backups. This method ensures that when Amazon RDS assumes the role from the option group to perform the restore log functions, it has access to retrieve transaction log backups from the same Amazon S3 path.
+ If the DB instance is encrypted, regardless of encryption type (AWS managed key or customer managed key), you must provide a customer managed KMS key in the IAM role and in the `rds_tlog_backup_copy_to_S3` stored procedure. 

## Limitations and recommendations
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.Limitations"></a>

Access to transaction log backups has the following limitations and recommendations:
+  You can list and copy up to the last seven days of transaction log backups for any DB instance that has backup retention configured between one and 35 days. 
+  The Amazon S3 bucket used for access to transaction log backups must exist in the same account and Region as the source DB instance. Cross-account and cross-Region copying isn't supported. 
+  Only one Amazon S3 bucket can be configured as a target to copy transaction log backups into. You can choose a new target Amazon S3 bucket with the `rds_tlog_copy_setup` stored procedure. For more information on choosing a new target Amazon S3 bucket, see [Setting up access to transaction log backups](USER.SQLServer.AddlFeat.TransactionLogAccess.Enabling.md).
+  You cannot specify the KMS key when using the `rds_tlog_backup_copy_to_S3` stored procedure if your RDS instance is not enabled for storage encryption. 
+  Multi-account copying is not supported. The IAM role used for copying will only permit write access to Amazon S3 buckets within the owner account of the DB instance. 
+  Only two concurrent tasks of any type may be run on an RDS for SQL Server DB instance. 
+  Only one copy task can run for a single database at a given time. If you want to copy transaction log backups for multiple databases on the DB instance, use a separate copy task for each database. 
+  If you copy a transaction log backup that already exists with the same name in the Amazon S3 bucket, the existing transaction log backup will be overwritten. 
+  You can only run the stored procedures that are provided with access to transaction log backups on the primary DB instance. You can’t run these stored procedures on an RDS for SQL Server read replica or on a secondary instance of a Multi-AZ DB cluster. 
+  If the RDS for SQL Server DB instance is rebooted while the `rds_tlog_backup_copy_to_S3` stored procedure is running, the task will automatically restart from the beginning when the DB instance is back online. Any transaction log backups that had been copied to the Amazon S3 bucket while the task was running before the reboot will be overwritten. 
+ The Microsoft SQL Server system databases and the `RDSAdmin` database cannot be configured for access to transaction log backups.
+  Copying to buckets encrypted by SSE-KMS isn't supported. 

# Setting up access to transaction log backups
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.Enabling"></a>

To set up access to transaction log backups, complete the list of requirements in the [Requirements](USER.SQLServer.AddlFeat.TransactionLogAccess.md#USER.SQLServer.AddlFeat.TransactionLogAccess.Requirements) section, and then run the `rds_tlog_copy_setup` stored procedure. The procedure will enable the access to transaction log backups feature at the DB instance level. You don't need to run it for each individual database on the DB instance. 

**Important**  
The database user must be a member of the `db_owner` role on each database to configure and use the access to transaction log backups feature.
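Membership in the `db_owner` role is granted with standard T-SQL. The following sketch assumes a hypothetical database user named `mytestuser` and a database named `mydatabasename`:

```
-- Run in the context of the database to be configured.
-- "mytestuser" is a hypothetical database user.
USE mydatabasename;
ALTER ROLE db_owner ADD MEMBER mytestuser;
```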

**Example usage:**  

```
exec msdb.dbo.rds_tlog_copy_setup
@target_s3_arn='arn:aws:s3:::amzn-s3-demo-bucket/myfolder';
```

The following parameter is required:
+ `@target_s3_arn` – The ARN of the target Amazon S3 bucket to copy transaction log backups files to.

**Example of setting an Amazon S3 target bucket:**  

```
exec msdb.dbo.rds_tlog_copy_setup @target_s3_arn='arn:aws:s3:::amzn-s3-demo-logging-bucket/mytestdb1';
```

To validate the configuration, call the `rds_show_configuration` stored procedure.

**Example of validating the configuration:**  

```
exec rdsadmin.dbo.rds_show_configuration @name='target_s3_arn_for_tlog_copy';
```

To modify access to transaction log backups to point to a different Amazon S3 bucket, you can view the current Amazon S3 bucket value and re-run the `rds_tlog_copy_setup` stored procedure using a new value for the `@target_s3_arn`.

**Example of viewing the existing Amazon S3 bucket configured for access to transaction log backups**  

```
exec rdsadmin.dbo.rds_show_configuration @name='target_s3_arn_for_tlog_copy';
```

**Example of updating to a new target Amazon S3 bucket**  

```
exec msdb.dbo.rds_tlog_copy_setup @target_s3_arn='arn:aws:s3:::amzn-s3-demo-logging-bucket1/mynewfolder';
```

# Listing available transaction log backups
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.Listing"></a>

In RDS for SQL Server, transaction log backups are automatically taken for databases that use the full recovery model on DB instances with backup retention set to one or more days. By enabling access to transaction log backups, up to seven days of those transaction log backups are made available for you to copy into your Amazon S3 bucket.
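As a quick check, you can confirm which databases on the instance use the full recovery model by querying the standard `sys.databases` catalog view. The following query is a sketch, not part of the RDS stored procedures:

```
-- List databases currently using the FULL recovery model.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE recovery_model_desc = 'FULL';
```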

After you have enabled access to transaction log backups, you can start using it to list and copy available transaction log backup files.

**Listing transaction log backups**

To list all transaction log backups available for an individual database, call the `rds_fn_list_tlog_backup_metadata` function. You can use an `ORDER BY` or a `WHERE` clause when calling the function.

**Example of listing and filtering available transaction log backup files**  

```
SELECT * FROM msdb.dbo.rds_fn_list_tlog_backup_metadata('mydatabasename');
SELECT * FROM msdb.dbo.rds_fn_list_tlog_backup_metadata('mydatabasename') WHERE rds_backup_seq_id = 3507;
SELECT * FROM msdb.dbo.rds_fn_list_tlog_backup_metadata('mydatabasename') WHERE backup_file_time_utc > '2022-09-15 20:44:01' ORDER BY backup_file_time_utc DESC;
```

![\[Output from rds_fn_list_tlog_backup_metadata\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/sql_accesstransactionlogs_func.png)


The `rds_fn_list_tlog_backup_metadata` function returns the following output:



| Column name | Data type | Description | 
| --- | --- | --- | 
| `db_name` | sysname | The database name provided to list the transaction log backups for. | 
| `db_id` | int | The internal database identifier for the input parameter `db_name`. | 
| `family_guid` | uniqueidentifier | The unique ID of the original database at creation. This value remains the same when the database is restored, even to a different database name. | 
| `rds_backup_seq_id` | int | The ID that RDS uses internally to maintain a sequence number for each transaction log backup file. | 
| `backup_file_epoch` | bigint | The epoch time that a transaction log backup file was generated. | 
| `backup_file_time_utc` | datetime | The UTC time-converted value for the `backup_file_epoch` value. | 
| `starting_lsn` | numeric(25,0) | The log sequence number of the first or oldest log record of a transaction log backup file. | 
| `ending_lsn` | numeric(25,0) | The log sequence number of the last or next log record of a transaction log backup file. | 
| `is_log_chain_broken` | bit | A boolean value indicating if the log chain is broken between the current transaction log backup file and the previous transaction log backup file. | 
| `file_size_bytes` | bigint | The size of the transactional backup set in bytes. | 
| `Error` | varchar(4000) | Error message if the `rds_fn_list_tlog_backup_metadata` function throws an exception. NULL if no exceptions. | 
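For example, you can combine these columns to summarize the available backup window for a database. The following query is a sketch that assumes a database named `mydatabasename`:

```
-- Summarize the available transaction log backup window.
SELECT db_id,
       family_guid,
       MIN(backup_file_time_utc) AS earliest_backup_utc,
       MAX(backup_file_time_utc) AS latest_backup_utc,
       COUNT(*) AS backup_file_count
FROM msdb.dbo.rds_fn_list_tlog_backup_metadata('mydatabasename')
GROUP BY db_id, family_guid;
```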

# Copying transaction log backups
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.Copying"></a>

To copy a set of available transaction log backups for an individual database to your Amazon S3 bucket, call the `rds_tlog_backup_copy_to_S3` stored procedure. The `rds_tlog_backup_copy_to_S3` stored procedure will initiate a new task to copy transaction log backups. 

**Note**  
The `rds_tlog_backup_copy_to_S3` stored procedure copies the transaction log backups without validating against the `is_log_chain_broken` attribute. For this reason, you should manually confirm an unbroken log chain before running the `rds_tlog_backup_copy_to_S3` stored procedure. For further explanation, see [Validating the transaction log backup log chain](#USER.SQLServer.AddlFeat.TransactionLogAccess.Copying.LogChain).

**Example usage of the `rds_tlog_backup_copy_to_S3` stored procedure**  

```
exec msdb.dbo.rds_tlog_backup_copy_to_S3
	@db_name='mydatabasename',
	[@kms_key_arn='arn:aws:kms:region:account-id:key/key-id'],	
	[@backup_file_start_time='2022-09-01 01:00:15'],
	[@backup_file_end_time='2022-09-01 21:30:45'],
	[@starting_lsn=149000000112100001],
	[@ending_lsn=149000000120400001],
	[@rds_backup_starting_seq_id=5],
	[@rds_backup_ending_seq_id=10];
```

The following input parameters are available:



| Parameter | Description | 
| --- | --- | 
| `@db_name` | The name of the database to copy transaction log backups for. | 
| `@kms_key_arn` |  A customer managed KMS key. If you encrypt your DB instance with an AWS managed KMS key, you must create a customer managed key. If you encrypt your DB instance with a customer managed key, you can use the same KMS key ARN. | 
| `@backup_file_start_time` | The UTC timestamp as provided from the `[backup_file_time_utc]` column of the `rds_fn_list_tlog_backup_metadata` function. | 
| `@backup_file_end_time` | The UTC timestamp as provided from the `[backup_file_time_utc]` column of the `rds_fn_list_tlog_backup_metadata` function. | 
| `@starting_lsn` | The log sequence number (LSN) as provided from the `[starting_lsn]` column of the `rds_fn_list_tlog_backup_metadata` function. | 
| `@ending_lsn` | The log sequence number (LSN) as provided from the `[ending_lsn]` column of the `rds_fn_list_tlog_backup_metadata` function. | 
| `@rds_backup_starting_seq_id` | The sequence ID as provided from the `[rds_backup_seq_id]` column of the `rds_fn_list_tlog_backup_metadata` function. | 
| `@rds_backup_ending_seq_id` | The sequence ID as provided from the `[rds_backup_seq_id]` column of the `rds_fn_list_tlog_backup_metadata` function. | 

You can specify a set of either the time, LSN, or sequence ID parameters. Only one set of parameters is required.

You can also specify just a single parameter in any of the sets. For example, by providing a value for only the `@backup_file_end_time` parameter, all available transaction log backup files prior to that time within the seven-day limit are copied to your Amazon S3 bucket. 

Following are the valid input parameter combinations for the `rds_tlog_backup_copy_to_S3` stored procedure.



| Parameters provided | Expected result | 
| --- | --- | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3  <br />	@db_name = 'testdb1',<br />            @backup_file_start_time='2022-08-23 00:00:00',<br />            @backup_file_end_time='2022-08-30 00:00:00';</pre>  | Copies transaction log backups from the last seven days that fall within the provided range of `backup_file_start_time` and `backup_file_end_time`. In this example, the stored procedure will copy transaction log backups that were generated between '2022-08-23 00:00:00' and '2022-08-30 00:00:00'.  | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />           @db_name = 'testdb1',<br />           @backup_file_start_time='2022-08-23 00:00:00';</pre>  | Copies transaction log backups from the last seven days and starting from the provided `backup_file_start_time`. In this example, the stored procedure will copy transaction log backups from '2022-08-23 00:00:00' up to the latest transaction log backup.  | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />          @db_name = 'testdb1',<br />          @backup_file_end_time='2022-08-30 00:00:00';</pre>  | Copies transaction log backups from the last seven days up to the provided `backup_file_end_time`. In this example, the stored procedure will copy transaction log backups from the start of the seven-day retention window up to '2022-08-30 00:00:00'.  | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />         @db_name='testdb1',<br />         @starting_lsn =1490000000040007,<br />         @ending_lsn =  1490000000050009;</pre>  | Copies transaction log backups that are available from the last seven days and are between the provided range of the `starting_lsn` and `ending_lsn`. In this example, the stored procedure will copy transaction log backups from the last seven days with an LSN range between 1490000000040007 and 1490000000050009.   | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />        @db_name='testdb1',<br />        @starting_lsn =1490000000040007;</pre>  |  Copies transaction log backups that are available from the last seven days, beginning from the provided `starting_lsn`. In this example, the stored procedure will copy transaction log backups from LSN 1490000000040007 up to the latest transaction log backup.   | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />        @db_name='testdb1',<br />        @ending_lsn  =1490000000050009;</pre>  |  Copies transaction log backups that are available from the last seven days, up to the provided `ending_lsn`. In this example, the stored procedure will copy transaction log backups from the last seven days up to LSN 1490000000050009.   | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />       @db_name='testdb1',<br />       @rds_backup_starting_seq_id= 2000,<br />       @rds_backup_ending_seq_id= 5000;</pre>  |  Copies transaction log backups that are available from the last seven days, and exist between the provided range of `rds_backup_starting_seq_id` and `rds_backup_ending_seq_id`. In this example, the stored procedure will copy transaction log backups from the last seven days within the provided RDS backup sequence ID range, from sequence ID 2000 up to sequence ID 5000.   | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />       @db_name='testdb1',<br />       @rds_backup_starting_seq_id= 2000;</pre>  |  Copies transaction log backups that are available from the last seven days, beginning from the provided `rds_backup_starting_seq_id`. In this example, the stored procedure will copy transaction log backups beginning from sequence ID 2000, up to the latest transaction log backup.   | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />      @db_name='testdb1',<br />      @rds_backup_ending_seq_id= 5000;</pre>  |  Copies transaction log backups that are available from the last seven days, up to the provided `rds_backup_ending_seq_id`. In this example, the stored procedure will copy transaction log backups from the last seven days, up to sequence ID 5000.   | 
|  <pre>exec msdb.dbo.rds_tlog_backup_copy_to_S3<br />      @db_name='testdb1',<br />      @rds_backup_starting_seq_id= 2000,<br />      @rds_backup_ending_seq_id= 2000;</pre>  |  Copies a single transaction log backup with the provided `rds_backup_starting_seq_id`, if available within the last seven days. In this example, the stored procedure will copy a single transaction log backup that has a sequence ID of 2000, if it exists within the last seven days.   | 

## Validating the transaction log backup log chain
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.Copying.LogChain"></a>

Databases configured for access to transaction log backups must have automated backup retention enabled. Automated backup retention sets the databases on the DB instance to the `FULL` recovery model. To support point-in-time restore for a database, avoid changing the database recovery model, which can result in a broken log chain. We recommend keeping the database set to the `FULL` recovery model.

To manually validate the log chain before copying transaction log backups, call the `rds_fn_list_tlog_backup_metadata` function and review the values in the `is_log_chain_broken` column. A value of "1" indicates the log chain was broken between the current log backup and the previous log backup.
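One way to surface any breaks before copying is to filter the function output on that column. The following query is a sketch assuming a database named `mydatabasename`:

```
-- Return only the backups where the log chain is broken
-- between a backup file and its predecessor.
SELECT rds_backup_seq_id, backup_file_time_utc, is_log_chain_broken
FROM msdb.dbo.rds_fn_list_tlog_backup_metadata('mydatabasename')
WHERE is_log_chain_broken = 1
ORDER BY backup_file_time_utc;
```

If the query returns no rows, the log chain is unbroken for the available backups.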

The following example shows a broken log chain in the output from the `rds_fn_list_tlog_backup_metadata` function. 

![\[Output from rds_fn_list_tlog_backup_metadata showing a broken log chain.\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/sql_accesstransactionlogs_logchain_error.png)


In a normal log chain, the `first_lsn` value for a given `rds_sequence_id` should match the `last_lsn` value of the preceding `rds_sequence_id`. In the image, the `rds_sequence_id` of 45 has a `first_lsn` value of 90987, which does not match the `last_lsn` value of 90985 for the preceding `rds_sequence_id` of 44.

For more information about SQL Server transaction log architecture and log sequence numbers, see [Transaction Log Logical Architecture](https://learn.microsoft.com/en-us/sql/relational-databases/sql-server-transaction-log-architecture-and-management-guide?view=sql-server-ver15#Logical_Arch) in the Microsoft SQL Server documentation.

# Amazon S3 bucket folder and file structure
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.S3namingConvention"></a>

Transaction log backups have the following standard structure and naming convention within an Amazon S3 bucket:
+ A new folder is created under the `target_s3_arn` path for each database with the naming structure as `{db_id}.{family_guid}`.
+ Within the folder, transaction log backups have a filename structure as `{db_id}.{family_guid}.{rds_backup_seq_id}.{backup_file_epoch}`.
+ You can view the `family_guid`, `db_id`, `rds_backup_seq_id`, and `backup_file_epoch` values with the `rds_fn_list_tlog_backup_metadata` function.
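Based on this naming convention, you can derive the expected folder name for a database from the function output. The following query is a sketch assuming a database named `mydatabasename`; note that the exact casing of the GUID in the folder name may differ from the `CAST` output:

```
-- Build the expected {db_id}.{family_guid} folder name.
SELECT DISTINCT
       CAST(db_id AS varchar(20)) + '.' +
       CAST(family_guid AS varchar(50)) AS s3_folder_name
FROM msdb.dbo.rds_fn_list_tlog_backup_metadata('mydatabasename');
```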

The following example shows the folder and file structure of a set of transaction log backups within an Amazon S3 bucket.

![\[Amazon S3 bucket structure with access to transaction logs\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/sql_accesstransactionlogs_s3.png)


# Tracking the status of tasks
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.TrackTaskStatus"></a>

 To track the status of your copy tasks, call the `rds_task_status` stored procedure. If you don't provide any parameters, the stored procedure returns the status of all tasks. 

**Example usage:**  

```
exec msdb.dbo.rds_task_status
  @db_name='database_name',
  @task_id=ID_number;
```

The following parameters are optional:
+ `@db_name` – The name of the database to show the task status for.
+ `@task_id` – The ID of the task to show the task status for.

**Example of listing the status for a specific task ID:**  

```
exec msdb.dbo.rds_task_status @task_id=5;
```

**Example of listing the status for a specific database and task:**  

```
exec msdb.dbo.rds_task_status @db_name='my_database', @task_id=5;
```

**Example of listing all tasks and their status for a specific database:**  

```
exec msdb.dbo.rds_task_status @db_name='my_database';
```

**Example of listing all tasks and their status on the current DB instance:**  

```
exec msdb.dbo.rds_task_status;
```

# Canceling a task
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.CancelTask"></a>

To cancel a running task, call the `rds_cancel_task` stored procedure.

**Example usage:**  

```
exec msdb.dbo.rds_cancel_task @task_id=ID_number;
```

The following parameter is required:
+ `@task_id` – The ID of the task to cancel. You can view the task ID by calling the `rds_task_status` stored procedure.

For more information on viewing and canceling running tasks, see [Importing and exporting SQL Server databases using native backup and restore](SQLServer.Procedural.Importing.md).

# Troubleshooting access to transaction log backups
<a name="USER.SQLServer.AddlFeat.TransactionLogAccess.Troubleshooting"></a>

The following are issues you might encounter when you use the stored procedures for access to transaction log backups.



| Stored Procedure | Error Message | Issue | Troubleshooting suggestions | 
| --- | --- | --- | --- | 
| `rds_tlog_copy_setup` | Backups are disabled on this DB instance. Enable DB instance backups with a retention of at least "1" and try again. | Automated backups are not enabled for the DB instance. |  DB instance backup retention must be enabled with a retention of at least one day. For more information on enabling automated backups and configuring backup retention, see [Backup retention period](USER_WorkingWithAutomatedBackups.BackupRetention.md).  | 
| `rds_tlog_copy_setup` | Error running the rds\_tlog\_copy\_setup stored procedure. Reconnect to the RDS endpoint and try again. | An internal error occurred. | Reconnect to the RDS endpoint and run the `rds_tlog_copy_setup` stored procedure again. | 
| `rds_tlog_copy_setup` | Running the rds\_tlog\_backup\_copy\_setup stored procedure inside a transaction is not supported. Verify that the session has no open transactions and try again.  | The stored procedure was attempted within a transaction using `BEGIN` and `END`. | Avoid using `BEGIN` and `END` when running the `rds_tlog_copy_setup` stored procedure. | 
| `rds_tlog_copy_setup` | The S3 bucket name for the input parameter `@target_s3_arn` should contain at least one character other than a space.  | An incorrect value was provided for the input parameter `@target_s3_arn`. | Ensure the input parameter `@target_s3_arn` specifies the complete Amazon S3 bucket ARN. | 
| `rds_tlog_copy_setup` | The `SQLSERVER_BACKUP_RESTORE` option isn't enabled or is in the process of being enabled. Enable the option or try again later.  | The `SQLSERVER_BACKUP_RESTORE` option is not enabled on the DB instance, or was just enabled and is pending internal activation. | Enable the `SQLSERVER_BACKUP_RESTORE` option as specified in the Requirements section. Wait a few minutes and run the `rds_tlog_copy_setup` stored procedure again. | 
| `rds_tlog_copy_setup` | The target S3 arn for the input parameter `@target_s3_arn` can't be empty or null.  | A `NULL` value was provided for the input parameter `@target_s3_arn`, or the value wasn't provided. | Ensure the input parameter `@target_s3_arn` specifies the complete Amazon S3 bucket ARN. | 
| `rds_tlog_copy_setup` | The target S3 arn for the input parameter `@target_s3_arn` must begin with arn:aws.  | The input parameter `@target_s3_arn` was provided without `arn:aws` at the beginning. | Ensure the input parameter `@target_s3_arn` specifies the complete Amazon S3 bucket ARN. | 
| `rds_tlog_copy_setup` | The target S3 ARN is already set to the provided value.  | The `rds_tlog_copy_setup` stored procedure previously ran and was configured with an Amazon S3 bucket ARN. | To modify the Amazon S3 bucket value for access to transaction log backups, provide a different target S3 ARN. | 
| `rds_tlog_copy_setup` | Unable to generate credentials for enabling Access to Transaction Log Backups. Confirm the S3 path ARN provided with `rds_tlog_copy_setup`, and try again later.  | There was an unspecified error while generating credentials to enable access to transaction log backups. | Review your setup configuration and try again.  | 
| `rds_tlog_copy_setup` | You cannot run the rds\_tlog\_copy\_setup stored procedure while there are pending tasks. Wait for the pending tasks to complete and try again.  | Only two tasks may run at any time. There are pending tasks awaiting completion. | View pending tasks and wait for them to complete. For more information on monitoring task status, see [Tracking the status of tasks](USER.SQLServer.AddlFeat.TransactionLogAccess.TrackTaskStatus.md).  | 
| `rds_tlog_backup_copy_to_S3` | A T-log backup file copy task has already been issued for database: %s with task Id: %d, please try again later.  | Only one copy task may run at any time for a given database. There is a pending copy task awaiting completion. | View pending tasks and wait for them to complete. For more information on monitoring task status, see [Tracking the status of tasks](USER.SQLServer.AddlFeat.TransactionLogAccess.TrackTaskStatus.md).  | 
| `rds_tlog_backup_copy_to_S3` | At least one of these three parameter sets must be provided. SET-1:(`@backup_file_start_time`, `@backup_file_end_time`) \| SET-2:(`@starting_lsn`, `@ending_lsn`) \| SET-3:(`@rds_backup_starting_seq_id`, `@rds_backup_ending_seq_id`)  | None of the three parameter sets were provided, or a provided parameter set is missing a required parameter. | You can specify either the time, LSN, or sequence ID parameters. One of these three sets of parameters is required. For more information on required parameters, see [Copying transaction log backups](USER.SQLServer.AddlFeat.TransactionLogAccess.Copying.md). | 
| `rds_tlog_backup_copy_to_S3` | Backups are disabled on your instance. Please enable backups and try again in some time. | Automated backups are not enabled for the DB instance. |  For more information on enabling automated backups and configuring backup retention, see [Backup retention period](USER_WorkingWithAutomatedBackups.BackupRetention.md).  | 
| `rds_tlog_backup_copy_to_S3` | Cannot find the given database %s.  | The value provided for input parameter `@db_name` does not match a database name on the DB instance. | Use the correct database name. To list all databases by name, run `SELECT * FROM sys.databases`. | 
| `rds_tlog_backup_copy_to_S3` | Cannot run the rds\_tlog\_backup\_copy\_to\_S3 stored procedure for SQL Server system databases or the rdsadmin database.  | The value provided for input parameter `@db_name` matches a SQL Server system database name or the RDSAdmin database. | The following databases can't be used with access to transaction log backups: `master`, `model`, `msdb`, `tempdb`, `RDSAdmin`.  | 
| `rds_tlog_backup_copy_to_S3` | Database name for the input parameter `@db_name` can't be empty or null.  | The value provided for input parameter `@db_name` was empty or `NULL`. | Use the correct database name. To list all databases by name, run `SELECT * FROM sys.databases`. | 
| `rds_tlog_backup_copy_to_S3` | DB instance backup retention period must be set to at least 1 to run the rds\_tlog\_backup\_copy\_setup stored procedure.  | Automated backups are not enabled for the DB instance. | For more information on enabling automated backups and configuring backup retention, see [Backup retention period](USER_WorkingWithAutomatedBackups.BackupRetention.md). | 
| `rds_tlog_backup_copy_to_S3` | Error running the stored procedure rds\_tlog\_backup\_copy\_to\_S3. Reconnect to the RDS endpoint and try again.  | An internal error occurred. | Reconnect to the RDS endpoint and run the `rds_tlog_backup_copy_to_S3` stored procedure again. | 
| `rds_tlog_backup_copy_to_S3` | Only one of these three parameter sets can be provided. SET-1:(`@backup_file_start_time`, `@backup_file_end_time`) \| SET-2:(`@starting_lsn`, `@ending_lsn`) \| SET-3:(`@rds_backup_starting_seq_id`, `@rds_backup_ending_seq_id`)  | Multiple parameter sets were provided. | You can specify either the time, LSN, or sequence ID parameters. Only one of these three sets of parameters can be provided. For more information on required parameters, see [Copying transaction log backups](USER.SQLServer.AddlFeat.TransactionLogAccess.Copying.md).  | 
| `rds_tlog_backup_copy_to_S3` | Running the rds\_tlog\_backup\_copy\_to\_S3 stored procedure inside a transaction is not supported. Verify that the session has no open transactions and try again.  | The stored procedure was attempted within a transaction using `BEGIN` and `END`. | Avoid using `BEGIN` and `END` when running the `rds_tlog_backup_copy_to_S3` stored procedure. | 
| `rds_tlog_backup_copy_to_S3` | The provided parameters fall outside of the transaction backup log retention period. To list available transaction log backup files, run the rds\_fn\_list\_tlog\_backup\_metadata function.  | There are no available transaction log backups for the provided input parameters that fit in the copy retention window. | Try again with a valid set of parameters. For more information on required parameters, see [Copying transaction log backups](USER.SQLServer.AddlFeat.TransactionLogAccess.Copying.md). | 
| `rds_tlog_backup_copy_to_S3` | There was a permissions error in processing the request. Ensure the bucket is in the same Account and Region as the DB Instance, and confirm the S3 bucket policy permissions against the template in the public documentation.  | There was an issue detected with the provided S3 bucket or its policy permissions. | Confirm your setup for access to transaction log backups is correct. For more information on setup requirements for your S3 bucket, see [Requirements](USER.SQLServer.AddlFeat.TransactionLogAccess.md#USER.SQLServer.AddlFeat.TransactionLogAccess.Requirements). | 
| `rds_tlog_backup_copy_to_S3` | Running the `rds_tlog_backup_copy_to_S3` stored procedure on an RDS read replica instance isn't permitted.  | The stored procedure was attempted on an RDS read replica instance. | Connect to the RDS primary DB instance to run the `rds_tlog_backup_copy_to_S3` stored procedure. | 
| `rds_tlog_backup_copy_to_S3` | The LSN for the input parameter `@starting_lsn` must be less than `@ending_lsn`.  | The value provided for input parameter `@starting_lsn` was greater than the value provided for input parameter `@ending_lsn`. | Ensure the value provided for input parameter `@starting_lsn` is less than the value provided for input parameter `@ending_lsn`. | 
| `rds_tlog_backup_copy_to_S3` | The `rds_tlog_backup_copy_to_S3` stored procedure can only be performed by the members of `db_owner` role in the source database.  | The `db_owner` role has not been granted to the account attempting to run the `rds_tlog_backup_copy_to_S3` stored procedure on the provided `db_name`. | Ensure the account running the stored procedure has the `db_owner` role on the provided `db_name`. | 
| `rds_tlog_backup_copy_to_S3` | The sequence ID for the input parameter `@rds_backup_starting_seq_id` must be less than or equal to `@rds_backup_ending_seq_id`.  | The value provided for input parameter `@rds_backup_starting_seq_id` was greater than the value provided for input parameter `@rds_backup_ending_seq_id`. | Ensure the value provided for input parameter `@rds_backup_starting_seq_id` is less than or equal to the value provided for input parameter `@rds_backup_ending_seq_id`. | 
| `rds_tlog_backup_copy_to_S3` | The SQLSERVER\_BACKUP\_RESTORE option isn't enabled or is in the process of being enabled. Enable the option or try again later.  | The `SQLSERVER_BACKUP_RESTORE` option is not enabled on the DB instance, or was just enabled and is pending internal activation. | Enable the `SQLSERVER_BACKUP_RESTORE` option as specified in the Requirements section. Wait a few minutes and run the `rds_tlog_backup_copy_to_S3` stored procedure again. | 
| rds\$1tlog\$1backup\$1copy\$1to\$1S3 | The start time for the input parameter `@backup_file_start_time` must be less than `@backup_file_end_time`.  | The value provided for input parameter `@backup_file_start_time` was greater than the value provided for input parameter `@backup_file_end_time`. | Ensure the value provided for input parameter `@backup_file_start_time` is less than the value provided for input parameter `@backup_file_end_time`. | 
| rds\$1tlog\$1backup\$1copy\$1to\$1S3 | We were unable to process the request due to a lack of access. Please check your setup and permissions for the feature.  | There may be an issue with the Amazon S3 bucket permissions, or the Amazon S3 bucket provided is in another account or Region. | Ensure the Amazon S3 bucket policy permissions are permissioned to allow RDS access. Ensure the Amazon S3 bucket is in the same account and Region as the DB instance. | 
| rds\$1tlog\$1backup\$1copy\$1to\$1S3 | You cannot provide a KMS Key ARN as input parameter to the stored procedure for instances that are not storage-encrypted.  | When storage encryption is not enabled on the DB instance, the input parameter `@kms_key_arn` should not be provided. | Do not provide an input parameter for `@kms_key_arn`. | 
| rds\$1tlog\$1backup\$1copy\$1to\$1S3 | You must provide a KMS Key ARN as input parameter to the stored procedure for storage encrypted instances.  | When storage encryption is enabled on the DB instance, the input parameter `@kms_key_arn` must be provided. | Provide an input parameter for `@kms_key_arn` with a value that matches the ARN of the Amazon S3 bucket to use for transaction log backups. | 
| rds\$1tlog\$1backup\$1copy\$1to\$1S3 | You must run the `rds_tlog_copy_setup` stored procedure and set the `@target_s3_arn`, before running the `rds_tlog_backup_copy_to_S3` stored procedure.  | The access to transaction log backups setup procedure was not completed before attempting to run the `rds_tlog_backup_copy_to_S3` stored procedure. | Run the `rds_tlog_copy_setup` stored procedure before running the `rds_tlog_backup_copy_to_S3` stored procedure. For more information on running the setup procedure for access to transaction log backups, see [Setting up access to transaction log backups](USER.SQLServer.AddlFeat.TransactionLogAccess.Enabling.md).  | 
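
Putting the rules above together, a successful call supplies exactly one parameter set and runs outside of any transaction. The following is a minimal sketch; the database name and time window are placeholders, and the procedure is assumed to live in `msdb` like other RDS for SQL Server stored procedures:

```
-- Sketch: copy transaction log backups for a time window (SET-1 parameters only).
-- @db_name and the window values are placeholders for your own.
EXEC msdb.dbo.rds_tlog_backup_copy_to_S3
    @db_name = N'mydatabase',
    @backup_file_start_time = '2024-05-01 00:00',
    @backup_file_end_time = '2024-05-01 06:00';
```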

# Options for the Microsoft SQL Server database engine
<a name="Appendix.SQLServer.Options"></a>

In this section, you can find descriptions for options that are available for Amazon RDS instances running the Microsoft SQL Server DB engine. To enable these options, you add them to an option group, and then associate the option group with your DB instance. For more information, see [Working with option groups](USER_WorkingWithOptionGroups.md). 

If you're looking for optional features that aren't added through RDS option groups (such as SSL, Microsoft Windows Authentication, and Amazon S3 integration), see [Additional features for Microsoft SQL Server on Amazon RDS](User.SQLServer.AdditionalFeatures.md).

Amazon RDS supports the following options for Microsoft SQL Server DB instances. 



| Option | Option ID | Engine editions | 
| --- | --- | --- | 
|  [Linked Servers with Oracle OLEDB](Appendix.SQLServer.Options.LinkedServers_Oracle_OLEDB.md)  |  `OLEDB_ORACLE`  |  SQL Server Enterprise Edition SQL Server Standard Edition  | 
|  [Native backup and restore](Appendix.SQLServer.Options.BackupRestore.md)  |  `SQLSERVER_BACKUP_RESTORE`  |  SQL Server Enterprise Edition SQL Server Standard Edition SQL Server Web Edition SQL Server Express Edition  | 
|  [Transparent Data Encryption](Appendix.SQLServer.Options.TDE.md)  |  `TRANSPARENT_DATA_ENCRYPTION` (RDS console) `TDE` (AWS CLI and RDS API)  |  SQL Server 2016–2022 Enterprise Edition SQL Server 2022 Standard Edition | 
|  [SQL Server Audit](Appendix.SQLServer.Options.Audit.md)  |  `SQLSERVER_AUDIT`  |  In RDS, starting with SQL Server 2016, all editions of SQL Server support server-level audits, and Enterprise Edition also supports database-level audits. Starting with SQL Server 2016 (13.x) SP1, all editions support both server-level and database-level audits. For more information, see [SQL Server Audit (database engine)](https://docs.microsoft.com/sql/relational-databases/security/auditing/sql-server-audit-database-engine?view=sql-server-2017) in the SQL Server documentation. | 
|  [SQL Server Analysis Services](Appendix.SQLServer.Options.SSAS.md)  |  `SSAS`  |  SQL Server Enterprise Edition SQL Server Standard Edition  | 
|  [SQL Server Integration Services](Appendix.SQLServer.Options.SSIS.md)  |  `SSIS`  |  SQL Server Enterprise Edition SQL Server Standard Edition  | 
|  [SQL Server Reporting Services](Appendix.SQLServer.Options.SSRS.md)  |  `SSRS`  |  SQL Server Enterprise Edition SQL Server Standard Edition  | 
|  [Microsoft Distributed Transaction Coordinator](Appendix.SQLServer.Options.MSDTC.md)  |  `MSDTC`  |  In RDS, starting with SQL Server 2016, all editions of SQL Server support distributed transactions.  | 
|  [SQL Server resource governor](Appendix.SQLServer.Options.ResourceGovernor.md)  |  `RESOURCE_GOVERNOR`  |  SQL Server Enterprise Edition SQL Server 2022 Developer Edition  | 

## Listing the available options for SQL Server versions and editions
<a name="Appendix.SQLServer.Options.Describe"></a>

You can use the `describe-option-group-options` AWS CLI command to list the available options for SQL Server versions and editions, and the settings for those options.

The following example shows the options and option settings for SQL Server 2019 Enterprise Edition. The `--engine-name` option is required.

```
aws rds describe-option-group-options --engine-name sqlserver-ee --major-engine-version 15.00
```

The output resembles the following:

```
{
    "OptionGroupOptions": [
        {
            "Name": "MSDTC",
            "Description": "Microsoft Distributed Transaction Coordinator",
            "EngineName": "sqlserver-ee",
            "MajorEngineVersion": "15.00",
            "MinimumRequiredMinorEngineVersion": "4043.16.v1",
            "PortRequired": true,
            "DefaultPort": 5000,
            "OptionsDependedOn": [],
            "OptionsConflictsWith": [],
            "Persistent": false,
            "Permanent": false,
            "RequiresAutoMinorEngineVersionUpgrade": false,
            "VpcOnly": false,
            "OptionGroupOptionSettings": [
                {
                    "SettingName": "ENABLE_SNA_LU",
                    "SettingDescription": "Enable support for SNA LU protocol",
                    "DefaultValue": "true",
                    "ApplyType": "DYNAMIC",
                    "AllowedValues": "true,false",
                    "IsModifiable": true,
                    "IsRequired": false,
                    "MinimumEngineVersionPerAllowedValue": []
                },
        ...

        {
            "Name": "TDE",
            "Description": "SQL Server - Transparent Data Encryption",
            "EngineName": "sqlserver-ee",
            "MajorEngineVersion": "15.00",
            "MinimumRequiredMinorEngineVersion": "4043.16.v1",
            "PortRequired": false,
            "OptionsDependedOn": [],
            "OptionsConflictsWith": [],
            "Persistent": true,
            "Permanent": false,
            "RequiresAutoMinorEngineVersionUpgrade": false,
            "VpcOnly": false,
            "OptionGroupOptionSettings": []
        }
    ]
}
```

# Support for Linked Servers with Oracle OLEDB in Amazon RDS for SQL Server
<a name="Appendix.SQLServer.Options.LinkedServers_Oracle_OLEDB"></a>

Support for linked servers with the Oracle Provider for OLEDB on RDS for SQL Server lets you access external data sources on an Oracle database. You can read data from remote Oracle data sources and run commands against remote Oracle database servers outside of your RDS for SQL Server DB instance. Using linked servers with Oracle OLEDB, you can:
+ Directly access data sources other than SQL Server
+ Query against diverse Oracle data sources with the same query without moving the data
+ Issue distributed queries, updates, commands, and transactions on data sources across an enterprise ecosystem
+ Integrate connections to an Oracle database from within the Microsoft Business Intelligence suite (SSIS, SSRS, SSAS)
+ Migrate from an Oracle database to RDS for SQL Server

You can activate one or more linked servers for Oracle on either an existing or new RDS for SQL Server DB instance. Then you can integrate external Oracle data sources with your DB instance.

**Contents**
+ [Supported versions and Regions](#LinkedServers_Oracle_OLEDB.VersionRegionSupport)
+ [Limitations and recommendations](#LinkedServers_Oracle_OLEDB.Limitations)
+ [Activating linked servers with Oracle](#LinkedServers_Oracle_OLEDB.Enabling)
  + [Creating the option group for OLEDB_ORACLE](#LinkedServers_Oracle_OLEDB.OptionGroup)
  + [Adding the `OLEDB_ORACLE` option to the option group](#LinkedServers_Oracle_OLEDB.Add)
  + [Modifying the `OLEDB_ORACLE` version option to another version](#LinkedServers_Oracle_OLEDB.Modify)
  + [Associating the option group with your DB instance](#LinkedServers_Oracle_OLEDB.Apply)
+ [Modifying OLEDB provider properties](#LinkedServers_Oracle_OLEDB.ModifyProviderProperties)
+ [Modifying OLEDB driver properties](#LinkedServers_Oracle_OLEDB.ModifyDriverProperties)
+ [Deactivating linked servers with Oracle](#LinkedServers_Oracle_OLEDB.Disable)

## Supported versions and Regions
<a name="LinkedServers_Oracle_OLEDB.VersionRegionSupport"></a>

RDS for SQL Server supports linked servers with Oracle OLEDB in all Regions for SQL Server Standard and Enterprise Editions on the following versions:
+ SQL Server 2022, all versions
+ SQL Server 2019, all versions
+ SQL Server 2017, all versions

Linked servers with Oracle OLEDB are supported for the following Oracle Database versions:
+ Oracle Database 21c, all versions
+ Oracle Database 19c, all versions
+ Oracle Database 18c, all versions

Linked servers with Oracle OLEDB are supported for the following OLEDB Oracle driver versions:
+ 21.7
+ 21.16

## Limitations and recommendations
<a name="LinkedServers_Oracle_OLEDB.Limitations"></a>

Keep in mind the following limitations and recommendations that apply to linked servers with Oracle OLEDB:
+ Allow network traffic by adding the applicable TCP port in the security group for each RDS for SQL Server DB instance. For example, if you’re configuring a linked server between an EC2 Oracle DB instance and an RDS for SQL Server DB instance, then you must allow traffic from the IP address of the EC2 Oracle DB instance. You also must allow traffic on the port that SQL Server is using to listen for database communication. For more information on security groups, see [Controlling access with security groups](Overview.RDSSecurityGroups.md).
+ Reboot the RDS for SQL Server DB instance after turning on, turning off, or modifying the `OLEDB_ORACLE` option in your option group. The option group status displays `pending_reboot` for these events, and the reboot is required for the change to take effect. For RDS for SQL Server Multi-AZ instances with the Always On or Mirroring option enabled, expect a failover when the instance is rebooted after a new instance creation or restore.
+ Only simple authentication is supported with a user name and password for the Oracle data source.
+ Open Database Connectivity (ODBC) drivers are not supported. Only the OLEDB driver versions listed above are supported.
+ Distributed transactions (XA) are supported. To activate distributed transactions, turn on the `MSDTC` option in the Option Group for your DB instance and make sure XA transactions are turned on. For more information, see [Support for Microsoft Distributed Transaction Coordinator in RDS for SQL Server](Appendix.SQLServer.Options.MSDTC.md).
+ Creating data source names (DSNs) to use as a shortcut for a connection string is not supported.
+ OLEDB driver tracing is not supported. You can use SQL Server Extended Events to trace OLEDB events. For more information, see [Set up Extended Events in RDS for SQL Server](https://aws.amazon.com/blogs/database/set-up-extended-events-in-amazon-rds-for-sql-server/).
+ Access to the catalogs folder for an Oracle linked server is not supported using SQL Server Management Studio (SSMS).

## Activating linked servers with Oracle
<a name="LinkedServers_Oracle_OLEDB.Enabling"></a>

Activate linked servers with Oracle by adding the `OLEDB_ORACLE` option to your RDS for SQL Server DB instance. Use the following process:

1. Create a new option group, or choose an existing option group.

1. Add the `OLEDB_ORACLE` option to the option group.

1. Choose a version of the OLEDB driver to use.

1. Associate the option group with the DB instance.

1. Reboot the DB instance.

### Creating the option group for OLEDB_ORACLE
<a name="LinkedServers_Oracle_OLEDB.OptionGroup"></a>

To work with linked servers with Oracle, create an option group or modify an option group that corresponds to the SQL Server edition and version of the DB instance that you plan to use. To complete this procedure, use the AWS Management Console or the AWS CLI.

#### Console
<a name="LinkedServers_Oracle_OLEDB.OptionGroup.Console"></a>

The following procedure creates an option group for SQL Server Standard Edition 2019.

**To create the option group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose **Create group**.

1. In the **Create option group** window, do the following:

   1. For **Name**, enter a name for the option group that is unique within your AWS account, such as **oracle-oledb-se-2019**. The name can contain only letters, digits, and hyphens.

   1. For **Description**, enter a brief description of the option group, such as **OLEDB_ORACLE option group for SQL Server SE 2019**. The description is used for display purposes.

   1. For **Engine**, choose **sqlserver-se**.

   1. For **Major engine version**, choose **15.00**.

1. Choose **Create**.

#### CLI
<a name="LinkedServers_Oracle_OLEDB.OptionGroup.CLI"></a>

The following procedure creates an option group for SQL Server Standard Edition 2019.

**To create the option group**
+ Run one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-option-group \
      --option-group-name oracle-oledb-se-2019 \
      --engine-name sqlserver-se \
      --major-engine-version 15.00 \
      --option-group-description "OLEDB_ORACLE option group for SQL Server SE 2019"
  ```

  For Windows:

  ```
  aws rds create-option-group ^
      --option-group-name oracle-oledb-se-2019 ^
      --engine-name sqlserver-se ^
      --major-engine-version 15.00 ^
      --option-group-description "OLEDB_ORACLE option group for SQL Server SE 2019"
  ```

### Adding the `OLEDB_ORACLE` option to the option group
<a name="LinkedServers_Oracle_OLEDB.Add"></a>

Next, use the AWS Management Console or the AWS CLI to add the `OLEDB_ORACLE` option to your option group.

#### Console
<a name="LinkedServers_Oracle_OLEDB.Add.Console"></a>

**To add the OLEDB_ORACLE option**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose the option group that you just created, which is **oracle-oledb-se-2019** in this example.

1. Choose **Add option**.

1. Under **Option details**, choose **OLEDB_ORACLE** for **Option name**.

1. Under **Version**, choose the version of the OLEDB Oracle driver you want to install.

1. Under **Scheduling**, choose whether to add the option immediately or at the next maintenance window.

1. Choose **Add option**.

#### CLI
<a name="LinkedServers_Oracle_OLEDB.Add.CLI"></a>

**To add the OLEDB_ORACLE option**
+ Add the `OLEDB_ORACLE` option to the option group.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds add-option-to-option-group \
      --option-group-name oracle-oledb-se-2019 \
      --options OptionName=OLEDB_ORACLE,OptionVersion=21.16 \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds add-option-to-option-group ^
      --option-group-name oracle-oledb-se-2019 ^
      --options OptionName=OLEDB_ORACLE,OptionVersion=21.16 ^
      --apply-immediately
  ```

### Modifying the `OLEDB_ORACLE` version option to another version
<a name="LinkedServers_Oracle_OLEDB.Modify"></a>

To modify the `OLEDB_ORACLE` option version to another version, use the AWS Management Console or the AWS CLI.

#### Console
<a name="LinkedServers_Oracle_OLEDB.Modify.Console"></a>

**To modify the OLEDB_ORACLE option**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose the option group with the `OLEDB_ORACLE` option (**oracle-oledb-se-2019** in the previous example).

1. Choose **Modify option**.

1. Under **Option details**, choose **OLEDB_ORACLE** for **Option name**.

1. Under **Version**, choose the version of the OLEDB Oracle driver you want to use.

1. Under **Scheduling**, choose whether to modify the option immediately or at the next maintenance window.

1. Choose **Modify option**.

#### CLI
<a name="LinkedServers_Oracle_OLEDB.Add.CLI"></a>

To modify the `OLEDB_ORACLE` option version, use the [add-option-to-option-group](https://docs.aws.amazon.com/cli/latest/reference/rds/add-option-to-option-group.html) AWS CLI command with the option group and option version that you want to use.

**To modify the OLEDB_ORACLE option**
+ Run one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds add-option-to-option-group \
      --option-group-name oracle-oledb-se-2019 \
      --options OptionName=OLEDB_ORACLE,OptionVersion=21.7 \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds add-option-to-option-group ^
      --option-group-name oracle-oledb-se-2019 ^
      --options OptionName=OLEDB_ORACLE,OptionVersion=21.7 ^
      --apply-immediately
  ```

### Associating the option group with your DB instance
<a name="LinkedServers_Oracle_OLEDB.Apply"></a>

To associate the `OLEDB_ORACLE` option group and parameter group with your DB instance, use the AWS Management Console or the AWS CLI.

#### Console
<a name="LinkedServers_Oracle_OLEDB.Apply.Console"></a>

To finish activating linked servers for Oracle, associate your `OLEDB_ORACLE` option group with a new or existing DB instance:
+ For a new DB instance, associate them when you launch the instance. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).
+ For an existing DB instance, associate them by modifying the instance. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

#### CLI
<a name="LinkedServers_Oracle_OLEDB.Apply.CLI"></a>

You can associate the `OLEDB_ORACLE` option group and parameter group with a new or existing DB instance.

**To create an instance with the `OLEDB_ORACLE` option group and parameter group**
+ Specify the same DB engine type and major version that you used when creating the option group.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-db-instance \
      --db-instance-identifier mytestsqlserveroracleoledbinstance \
      --db-instance-class db.m5.2xlarge \
      --engine sqlserver-se \
      --engine-version 15.0.4236.7.v1 \
      --allocated-storage 100 \
      --manage-master-user-password \
      --master-username admin \
      --storage-type gp2 \
      --license-model li \
      --domain-iam-role-name my-directory-iam-role \
      --domain my-domain-id \
      --option-group-name oracle-oledb-se-2019 \
      --db-parameter-group-name my-parameter-group-name
  ```

  For Windows:

  ```
  aws rds create-db-instance ^
      --db-instance-identifier mytestsqlserveroracleoledbinstance ^
      --db-instance-class db.m5.2xlarge ^
      --engine sqlserver-se ^
      --engine-version 15.0.4236.7.v1 ^
      --allocated-storage 100 ^
      --manage-master-user-password ^
      --master-username admin ^
      --storage-type gp2 ^
      --license-model li ^
      --domain-iam-role-name my-directory-iam-role ^
      --domain my-domain-id ^
      --option-group-name oracle-oledb-se-2019 ^
      --db-parameter-group-name my-parameter-group-name
  ```

**To modify an instance and associate the `OLEDB_ORACLE` option group**
+ Run one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-instance \
      --db-instance-identifier mytestsqlserveroracleoledbinstance \
      --option-group-name oracle-oledb-se-2019 \
      --db-parameter-group-name my-parameter-group-name \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds modify-db-instance ^
      --db-instance-identifier mytestsqlserveroracleoledbinstance ^
      --option-group-name oracle-oledb-se-2019 ^
      --db-parameter-group-name my-parameter-group-name ^
      --apply-immediately
  ```

## Modifying OLEDB provider properties
<a name="LinkedServers_Oracle_OLEDB.ModifyProviderProperties"></a>

You can view and change the properties of the OLEDB provider. Only the `master` user can perform this task. All linked servers for Oracle that are created on the DB instance use the same properties of that OLEDB provider. Call the `sp_MSset_oledb_prop` stored procedure to change the properties of the OLEDB provider.

The following example changes the OLEDB provider properties:

```
USE [master]
GO
EXEC sp_MSset_oledb_prop N'OraOLEDB.Oracle', N'AllowInProcess', 1
EXEC sp_MSset_oledb_prop N'OraOLEDB.Oracle', N'DynamicParameters', 0
GO
```

The following properties can be modified:



| Property name | Recommended value (1 = On, 0 = Off) | Description | 
| --- | --- | --- | 
| `Dynamic parameter` | 1 | Allows SQL placeholders (represented by '?') in parameterized queries. | 
| `Nested queries` | 1 | Allows nested `SELECT` statements in the `FROM` clause, such as subqueries. | 
| `Level zero only` | 0 | Only base-level OLEDB interfaces are called against the provider. | 
| `Allow inprocess` | 1 | If turned on, Microsoft SQL Server allows the provider to be instantiated as an in-process server. Set this property to 1 to use Oracle linked servers. | 
| `Non transacted updates` | 0 | If non-zero, SQL Server allows updates. | 
| `Index as access path` | 0 | If non-zero, SQL Server attempts to use indexes of the provider to fetch data. | 
| `Disallow adhoc access` | 0 | If set, SQL Server does not allow running pass-through queries against the OLEDB provider. While this option can be checked, it is sometimes appropriate to run pass-through queries. | 
| `Supports LIKE operator` | 1 | Indicates that the provider supports queries using the `LIKE` keyword. | 
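
Before changing a property, you can check the current settings. As a sketch of the procedure's usual behavior (verify the output on your own instance), calling `sp_MSset_oledb_prop` with only the provider name returns the provider's current property values rather than changing them:

```
-- List the current OLEDB provider properties for OraOLEDB.Oracle.
-- With no property name or value, the procedure reports the current settings.
USE [master]
GO
EXEC sp_MSset_oledb_prop N'OraOLEDB.Oracle'
GO
```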

## Modifying OLEDB driver properties
<a name="LinkedServers_Oracle_OLEDB.ModifyDriverProperties"></a>

You can view and change the properties of the OLEDB driver when creating a linked server for Oracle. Only the `master` user can perform this task. Driver properties define how the OLEDB driver handles data when working with a remote Oracle data source. Driver properties are specific to each Oracle linked server created on the DB instance. Call the `master.dbo.sp_addlinkedserver` stored procedure to change the properties of the OLEDB driver.

The following example creates a linked server and sets the OLEDB driver `FetchSize` property in the provider string:

```
EXEC master.dbo.sp_addlinkedserver
@server = N'Oracle_link2',
@srvproduct=N'Oracle',
@provider=N'OraOLEDB.Oracle',
@datasrc=N'my-oracle-test.cnetsipka.us-west-2.rds.amazonaws.com:1521/ORCL',
@provstr='FetchSize=200'
GO
```

Then map a local login to the remote Oracle user for the new linked server:

```
EXEC master.dbo.sp_addlinkedsrvlogin
@rmtsrvname=N'Oracle_link2',
@useself=N'False',
@locallogin=NULL,
@rmtuser=N'master',
@rmtpassword='Test#1234'
GO
```

**Note**  
Specify a password other than the one shown here as a security best practice.
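
With the linked server and login mapping in place, you can query the remote Oracle database in two ways. This is a sketch; `MYSCHEMA.MYTABLE` is a hypothetical remote table that the mapped user can read:

```
-- Pass-through query: the inner statement is sent to Oracle as-is,
-- and only the results come back to SQL Server.
SELECT * FROM OPENQUERY(Oracle_link2, 'SELECT SYSDATE FROM DUAL');

-- Four-part name: SQL Server parses the query and maps it to the provider.
-- The format is linked_server..schema.table for Oracle sources.
SELECT * FROM Oracle_link2..MYSCHEMA.MYTABLE;
```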

## Deactivating linked servers with Oracle
<a name="LinkedServers_Oracle_OLEDB.Disable"></a>

To deactivate linked servers with Oracle, remove the `OLEDB_ORACLE` option from its option group.

**Important**  
Removing the option doesn't delete the existing linked server configurations on the DB instance. You must manually drop them to remove them from the DB instance.  
You can reactivate the `OLEDB_ORACLE` option after removal to reuse the linked server configurations that were previously configured on the DB instance.
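
To manually drop a leftover linked server configuration (and its mapped logins), you can use the `sp_dropserver` system stored procedure. For example, assuming the `Oracle_link2` linked server shown earlier:

```
-- Remove the linked server definition and any remote login mappings for it.
EXEC master.dbo.sp_dropserver
    @server = N'Oracle_link2',
    @droplogins = 'droplogins';
```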

### Console
<a name="LinkedServers_Oracle_OLEDB.Disable.Console"></a>

The following procedure removes the `OLEDB_ORACLE` option.

**To remove the OLEDB_ORACLE option from its option group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose the option group with the `OLEDB_ORACLE` option (`oracle-oledb-se-2019` in the previous examples).

1. Choose **Delete option**.

1. Under **Deletion options**, choose **OLEDB_ORACLE** for **Options to delete**.

1. Under **Apply immediately**, choose **Yes** to delete the option immediately, or **No** to delete it during the next maintenance window.

1. Choose **Delete**.

### CLI
<a name="LinkedServers_Oracle_OLEDB.Disable.CLI"></a>

The following procedure removes the `OLEDB_ORACLE` option.

**To remove the OLEDB_ORACLE option from its option group**
+ Run one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds remove-option-from-option-group \
      --option-group-name oracle-oledb-se-2019 \
      --options OLEDB_ORACLE \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds remove-option-from-option-group ^
      --option-group-name oracle-oledb-se-2019 ^
      --options OLEDB_ORACLE ^
      --apply-immediately
  ```

# Linked Servers with Teradata ODBC in RDS for SQL Server
<a name="USER_SQLServerTeradata"></a>

Support for linked servers with the Teradata ODBC driver on RDS for SQL Server lets you access external data sources on a Teradata database. You can read data and run commands from remote Teradata database servers outside of your RDS for SQL Server instance. Use linked servers with Teradata ODBC to enable the following capabilities:
+ Directly access data sources other than SQL Server.
+ Query against diverse Teradata data sources with the same query without moving the data.
+ Issue distributed queries, updates, commands, and transactions on data sources across an enterprise ecosystem.
+ Integrate connections to a Teradata database from within the Microsoft Business Intelligence Suite (SSIS, SSRS, SSAS).
+ Migrate from a Teradata database to RDS for SQL Server.

You can choose to activate one or more linked servers for Teradata on either an existing or new RDS for SQL Server DB instance. You can then integrate external Teradata data sources with your DB instance.

**Topics**
+ [Supported versions and Regions](#USER_SQLServerTeradata.VersionRegionSupport)
+ [Limitations and recommendations](#USER_SQLServerTeradata.LimitsandRecommendations)
+ [Considerations for Multi-AZ deployment](#USER_SQLServerTeradata.MultiAZ)
+ [Activating linked servers with Teradata](USER_SQLServerTeradata.Activate.md)
+ [Creating linked servers with Teradata](USER_SQLServerTeradata.CreateLinkedServers.md)
+ [Deactivating linked servers with Teradata](USER_SQLServerTeradata.Deactivate.md)

## Supported versions and Regions
<a name="USER_SQLServerTeradata.VersionRegionSupport"></a>

RDS for SQL Server supports linked servers with Teradata ODBC in all AWS Regions for SQL Server Standard and Enterprise Editions on the following versions:
+ SQL Server 2022, all versions
+ SQL Server 2019, all versions
+ SQL Server 2017, all versions

The following Teradata database versions support linking with RDS for SQL Server:
+ Teradata 17.20, all versions

## Limitations and recommendations
<a name="USER_SQLServerTeradata.LimitsandRecommendations"></a>

The following limitations apply to linked servers with Teradata ODBC:
+ RDS for SQL Server supports only simple authentication with a user name and password for the Teradata data source.
+ RDS for SQL Server supports only Teradata ODBC driver version 17.20.0.33.
+ RDS for SQL Server does not support creating data source names (DSNs) to use as shortcuts for a connection string.
+ RDS for SQL Server does not support ODBC driver tracing. Use SQL Server Extended Events to trace ODBC events. For more information, see [Set up Extended Events in RDS for SQL Server](https://aws.amazon.com/blogs/database/set-up-extended-events-in-amazon-rds-for-sql-server/).
+ RDS for SQL Server does not support access to the catalogs folder for a Teradata linked server when using SQL Server Management Studio (SSMS).

Consider the following recommendations when using linked servers with Teradata ODBC:
+ Allow network traffic by adding the applicable TCP port in the security group for each RDS for SQL Server DB instance. If you're configuring a linked server between an EC2 Teradata DB instance and an RDS for SQL Server DB instance, then you must allow traffic from the IP address of the EC2 Teradata DB instance. You also must allow traffic on the port that the RDS for SQL Server DB instance is using to listen for database communication. For more information on security groups, see [Controlling access with security groups](Overview.RDSSecurityGroups.md).
+ Distributed transactions (XA) are supported. To activate distributed transactions, turn on the `MSDTC` option in the option group for your DB instance and make sure XA transactions are turned on. For more information, see [Support for Microsoft Distributed Transaction Coordinator in RDS for SQL Server](Appendix.SQLServer.Options.MSDTC.md).
+ Linked servers with Teradata ODBC support SSL/TLS, as long as SSL/TLS is configured on the Teradata server. For more information, see [Enable TLS Connectivity on Teradata Vantage](https://docs.teradata.com/r/Enterprise_IntelliFlex_Lake_VMware/Teradata-Call-Level-Interface-Version-2-Reference-for-Workstation-Attached-Systems-20.00/Mainframe-TLS-Connectivity-Supplement/Enable-TLS-Connectivity-on-Teradata-Vantage).
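After opening the security group, you can verify basic TCP reachability to the Teradata listener before troubleshooting at the linked server level. The following Python sketch is a hypothetical helper, not part of the official setup; the host name is a placeholder, and the default Teradata listener port (1025) is an assumption you should confirm for your deployment:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder host; 1025 is the assumed default Teradata listener port):
# if not can_reach("my-teradata-host.example.com", 1025):
#     print("Check the security group and network ACLs for the Teradata port.")
```

Run this from an EC2 instance in the same VPC as the RDS for SQL Server DB instance so that the probe follows the same network path as the linked server traffic.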

## Considerations for Multi-AZ deployment
<a name="USER_SQLServerTeradata.MultiAZ"></a>

RDS for SQL Server currently doesn't replicate linked servers to the mirrored database server (or Always On Availability Group secondary server) in a Multi-AZ deployment. If the linked servers are added before the configuration is changed to add mirroring or Always On, then the existing linked servers are copied as part of that change.

Alternatively, you can create the linked servers on the primary instance, fail over to the secondary instance, and then create the linked servers again so that they exist on both RDS for SQL Server instances.

# Activating linked servers with Teradata
<a name="USER_SQLServerTeradata.Activate"></a>

Activate linked servers with Teradata by adding the `ODBC_TERADATA` option to your RDS for SQL Server DB instance. Use the following process:

**Topics**
+ [Creating the option group for `ODBC_TERADATA`](#USER_SQLServerTeradata.Activate.CreateOG)
+ [Adding the `ODBC_TERADATA` option to the option group](#USER_SQLServerTeradata.Activate.AddOG)
+ [Associating the `ODBC_TERADATA` option with your DB instance](#USER_SQLServerTeradata.Activate.AssociateOG)

## Creating the option group for `ODBC_TERADATA`
<a name="USER_SQLServerTeradata.Activate.CreateOG"></a>

To work with linked servers with Teradata, create an option group, or modify an existing option group, that corresponds to the SQL Server edition and version of the DB instance that you plan to use. To complete this procedure, use the AWS Management Console or the AWS CLI.

### Console
<a name="USER_SQLServerTeradata.Activate.CreateOG.Console"></a>

Use the following procedure to create an option group for SQL Server Standard Edition 2019.

**To create the option group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose **Create group**.

1. In the **Create option group** window, do the following:

   1. For **Name**, enter a name for the option group that is unique within your AWS account, such as `teradata-odbc-se-2019`. The name can contain only letters, digits, and hyphens. 

   1. For **Description**, enter a brief description of the option group.

   1. For **Engine**, choose **sqlserver-se**.

   1. For **Major engine version**, choose **15.00**.

1. Choose **Create**.

### AWS CLI
<a name="USER_SQLServerTeradata.Activate.CreateOG.CLI"></a>

The following example creates an option group for SQL Server Standard Edition 2019.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds create-option-group \
    --option-group-name teradata-odbc-se-2019 \
    --engine-name sqlserver-se \
    --major-engine-version 15.00 \
    --option-group-description "ODBC_TERADATA option group for SQL Server SE 2019"
```

**Example**  
For Windows:  

```
aws rds create-option-group ^
    --option-group-name teradata-odbc-se-2019 ^
    --engine-name sqlserver-se ^
    --major-engine-version 15.00 ^
    --option-group-description "ODBC_TERADATA option group for SQL Server SE 2019"
```
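If you script this step with the AWS SDK for Python instead of the AWS CLI, the same call is available as `create_option_group` on the boto3 RDS client. The sketch below only assembles the request parameters (the group name and description are the same sample values used above); the client call itself is commented out so you can review the parameters before running it against your account:

```python
# Requires the AWS SDK for Python: pip install boto3

# Parameters mirroring the create-option-group CLI example above.
params = {
    "OptionGroupName": "teradata-odbc-se-2019",
    "EngineName": "sqlserver-se",            # SQL Server Standard Edition
    "MajorEngineVersion": "15.00",           # SQL Server 2019
    "OptionGroupDescription": "ODBC_TERADATA option group for SQL Server SE 2019",
}

# import boto3
# rds = boto3.client("rds")
# rds.create_option_group(**params)
```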

## Adding the `ODBC_TERADATA` option to the option group
<a name="USER_SQLServerTeradata.Activate.AddOG"></a>

Next, use the AWS Management Console or the AWS CLI to add the `ODBC_TERADATA` option to your option group.

### Console
<a name="USER_SQLServerTeradata.Activate.AddOG.Console"></a>

Use the following procedure to add the `ODBC_TERADATA` option to your option group.

**To add the `ODBC_TERADATA` option**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose your new option group.

1. Choose **Add option**.

1. Under **Option details**:

   1. Choose **ODBC\_TERADATA** for **Option name**.

   1. Choose `17.20.33.00` for **Option version**.

1. Under **Scheduling**, choose whether to add the option immediately or during the next maintenance window.

1. Choose **Add option**.

### AWS CLI
<a name="USER_SQLServerTeradata.Activate.AddOG.CLI"></a>

The following example adds the `ODBC_TERADATA` option to your option group.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds add-option-to-option-group \
    --option-group-name teradata-odbc-se-2019 \
    --options "OptionName=ODBC_TERADATA,OptionVersion=17.20.33.00" \
    --apply-immediately
```

**Example**  
For Windows:  

```
aws rds add-option-to-option-group ^
    --option-group-name teradata-odbc-se-2019 ^
    --options "OptionName=ODBC_TERADATA,OptionVersion=17.20.33.00" ^
    --apply-immediately
```

## Associating the `ODBC_TERADATA` option with your DB instance
<a name="USER_SQLServerTeradata.Activate.AssociateOG"></a>

To associate the `ODBC_TERADATA` option group with your DB instance, use the AWS Management Console or AWS CLI.

### Console
<a name="USER_SQLServerTeradata.Activate.AssociateOG.Console"></a>

To finish activating linked servers for Teradata, associate your option group with a new or existing DB instance:
+ For a new DB instance, associate it when you launch the instance. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).
+ For an existing DB instance, associate it by modifying the instance. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

### AWS CLI
<a name="USER_SQLServerTeradata.Activate.AssociateOG.CLI"></a>

Specify the same DB engine type and major version that you used when creating the option group.

For Linux, macOS, or Unix:

```
aws rds create-db-instance \
    --db-instance-identifier mytestsqlserverteradataodbcinstance \
    --db-instance-class db.m5.2xlarge \
    --engine sqlserver-se \
    --engine-version 15.00 \
    --license-model license-included \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password password \
    --storage-type gp2 \
    --option-group-name teradata-odbc-se-2019
```

For Windows:

```
aws rds create-db-instance ^
    --db-instance-identifier mytestsqlserverteradataodbcinstance ^
    --db-instance-class db.m5.2xlarge ^
    --engine sqlserver-se ^
    --engine-version 15.00 ^
    --license-model license-included ^
    --allocated-storage 100 ^
    --master-username admin ^
    --master-user-password password ^
    --storage-type gp2 ^
    --option-group-name teradata-odbc-se-2019
```

To modify an instance and associate the new option group:

For Linux, macOS, or Unix:

```
aws rds modify-db-instance \
    --db-instance-identifier mytestsqlserverteradataodbcinstance \
    --option-group-name teradata-odbc-se-2019 \
    --apply-immediately
```

For Windows:

```
aws rds modify-db-instance ^
    --db-instance-identifier mytestsqlserverteradataodbcinstance ^
    --option-group-name teradata-odbc-se-2019 ^
    --apply-immediately
```

# Creating linked servers with Teradata
<a name="USER_SQLServerTeradata.CreateLinkedServers"></a>

To create a linked server with Teradata, run the following commands:

```
EXECUTE master.dbo.sp_addlinkedserver 
    @server = N'LinkedServer_NAME', 
    @srvproduct=N'', 
    @provider=N'MSDASQL', 
    @provstr=N'"PROVIDER=MSDASQL;DRIVER={Teradata Database ODBC Driver 17.20};
                DBCName=Server;UID=user_name;PWD=user_password;
                UseDataEncryption=YES/NO;SSLMODE=PREFER/ALLOW/DISABLE;"', 
    @catalog='database'
```

```
EXECUTE master.dbo.sp_addlinkedsrvlogin 
    @rmtsrvname = N'LinkedServer_NAME', 
    @locallogin = NULL , 
    @useself = N'False', 
    @rmtuser = N'user_name', 
    @rmtpassword = N'user_password'
```

The following example shows the preceding commands with sample values:

```
EXECUTE master.dbo.sp_addlinkedserver 
    @server = N'LinkedServerToTeradata', 
    @srvproduct=N'', 
    @provider=N'MSDASQL', 
    @provstr=N'"PROVIDER=MSDASQL;DRIVER={Teradata Database ODBC Driver 17.20};
                DBCName=my-teradata-test.cnetsipka.us-west-2.rds.amazonaws.com;
                UID=master;
                PWD=Test#1234;
                UseDataEncryption=YES;
                SSLMODE=PREFER;"', 
    @catalog='MyTestTeradataDB'

EXECUTE master.dbo.sp_addlinkedsrvlogin 
    @rmtsrvname = N'LinkedServerToTeradata', 
    @locallogin = NULL , 
    @useself = N'False', 
    @rmtuser = N'master', 
    @rmtpassword = N'Test#1234'
```

**Note**  
As a security best practice, specify a password other than the one shown here.
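If you generate the `sp_addlinkedserver` call from automation, it can help to assemble the ODBC provider string from named parts rather than editing it by hand. The following Python sketch mirrors the connection string format shown above; the host and credential values are placeholders, and the function itself is illustrative rather than an official tool:

```python
def teradata_provider_string(host: str, user: str, password: str,
                             use_encryption: bool = True,
                             ssl_mode: str = "PREFER") -> str:
    """Build the @provstr value for MSDASQL with the Teradata ODBC 17.20 driver."""
    parts = [
        "PROVIDER=MSDASQL",
        "DRIVER={Teradata Database ODBC Driver 17.20}",
        f"DBCName={host}",
        f"UID={user}",
        f"PWD={password}",
        f"UseDataEncryption={'YES' if use_encryption else 'NO'}",
        f"SSLMODE={ssl_mode}",  # PREFER, ALLOW, or DISABLE
    ]
    return ";".join(parts) + ";"

# Placeholder values; substitute your Teradata host and credentials.
provstr = teradata_provider_string("teradata.example.com", "master", "example-password")
```

The resulting string is what you pass as `@provstr` (wrapped in `N'"…"'`) in the `sp_addlinkedserver` call.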

# Deactivating servers linked to Teradata
<a name="USER_SQLServerTeradata.Deactivate"></a>

To deactivate linked servers to Teradata, remove the `ODBC_TERADATA` option from its option group.

**Important**  
Deleting the option doesn't delete the linked server configurations on the DB instance. You must manually drop them to remove them from the DB instance.  
You can reactivate the `ODBC_TERADATA` option after removal to reuse the linked server configurations that were previously configured on the DB instance.

## Console
<a name="USER_SQLServerTeradata.Deactivate.Console"></a>

**To remove the `ODBC_TERADATA` option from the option group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose the option group with the `ODBC_TERADATA` option. 

1. Choose **Delete**.

1. Under **Deletion options**, choose `ODBC_TERADATA` under **Options to delete**.

1. Under **Apply immediately**, choose **Yes** to delete the option immediately, or **No** to delete it during the next maintenance window.

1. Choose **Delete**.

## AWS CLI
<a name="USER_SQLServerTeradata.Deactivate.CLI"></a>

The following command removes the `ODBC_TERADATA` option.

For Linux, macOS, or Unix:

```
aws rds remove-option-from-option-group \
    --option-group-name teradata-odbc-se-2019 \
    --options ODBC_TERADATA \
    --apply-immediately
```

For Windows:

```
aws rds remove-option-from-option-group ^
    --option-group-name teradata-odbc-se-2019 ^
    --options ODBC_TERADATA ^
    --apply-immediately
```

# Support for native backup and restore in SQL Server
<a name="Appendix.SQLServer.Options.BackupRestore"></a>

By using native backup and restore for SQL Server databases, you can create a differential or full backup of your on-premises database and store the backup files on Amazon S3. You can then restore to an existing Amazon RDS DB instance running SQL Server. You can also back up an RDS for SQL Server database, store it on Amazon S3, and restore it in other locations. In addition, you can restore the backup to an on-premises server, or a different Amazon RDS DB instance running SQL Server. For more information, see [Importing and exporting SQL Server databases using native backup and restore](SQLServer.Procedural.Importing.md).

Amazon RDS supports native backup and restore for Microsoft SQL Server databases by using differential and full backup files (.bak files).

## Adding the native backup and restore option
<a name="Appendix.SQLServer.Options.BackupRestore.Add"></a>

The general process for adding the native backup and restore option to a DB instance is the following:

1. Create a new option group, or copy or modify an existing option group.

1. Add the `SQLSERVER_BACKUP_RESTORE` option to the option group.

1. Associate an AWS Identity and Access Management (IAM) role with the option. The IAM role must have access to an S3 bucket to store the database backups.

   That is, the option must have as its option setting a valid Amazon Resource Name (ARN) in the format `arn:aws:iam::account-id:role/role-name`. For more information, see [Amazon Resource Names (ARNs)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-iam) in the *AWS General Reference.*

   The IAM role must also have a trust relationship and a permissions policy attached. The trust relationship allows RDS to assume the role, and the permissions policy defines the actions that the role can perform. For more information, see [Manually creating an IAM role for native backup and restore](SQLServer.Procedural.Importing.Native.Enabling.md#SQLServer.Procedural.Importing.Native.Enabling.IAM).

1. Associate the option group with the DB instance.

After you add the native backup and restore option, you don't need to restart your DB instance. As soon as the option group is active, you can begin backing up and restoring immediately.
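Because the option setting must be a well-formed IAM role ARN, a malformed value is a common reason the option fails to apply. The following Python sketch is a hypothetical client-side format check based on the `arn:aws:iam::account-id:role/role-name` pattern described above; it checks syntax only, not whether the role exists or has the required trust relationship:

```python
import re

# Format described above: arn:aws:iam::account-id:role/role-name
# (12-digit account ID; role names and paths may include + = , . @ _ / -)
ROLE_ARN_RE = re.compile(r"^arn:aws:iam::\d{12}:role/[\w+=,.@/-]+$")

def looks_like_role_arn(arn: str) -> bool:
    """Return True if the string matches the IAM role ARN format."""
    return ROLE_ARN_RE.fullmatch(arn) is not None
```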

### Console
<a name="Add.Native.Backup.Restore.Console"></a>

**To add the native backup and restore option**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Create a new option group or use an existing option group. For information on how to create a custom DB option group, see [Creating an option group](USER_WorkingWithOptionGroups.md#USER_WorkingWithOptionGroups.Create).

   To use an existing option group, skip to the next step.

1. Add the **SQLSERVER\_BACKUP\_RESTORE** option to the option group. For more information about adding options, see [Adding an option to an option group](USER_WorkingWithOptionGroups.md#USER_WorkingWithOptionGroups.AddOption).

1. Do one of the following:
   + To use an existing IAM role and Amazon S3 settings, choose an existing IAM role for **IAM Role**. If you use an existing IAM role, RDS uses the Amazon S3 settings configured for this role.
   + To create a new role and configure Amazon S3 settings, do the following: 

     1. For **IAM role**, choose **Create a new role**.

     1. For **S3 bucket**, choose an S3 bucket from the list.

     1. For **S3 prefix (optional)**, specify a prefix to use for the files stored in your Amazon S3 bucket. 

        This prefix can include a file path but doesn't have to. If you provide a prefix, RDS attaches that prefix to all backup files. RDS then uses the prefix during a restore to identify related files and ignore irrelevant files. For example, you might use the S3 bucket for purposes besides holding backup files. In this case, you can use the prefix to have RDS perform native backup and restore only on a particular folder and its subfolders.

        If you leave the prefix blank, then RDS doesn't use a prefix to identify backup files or files to restore. As a result, during a multiple-file restore, RDS attempts to restore every file in every folder of the S3 bucket.

     1. Choose the **Enable encryption** check box to encrypt the backup file. Leave the check box cleared (the default) to have the backup file unencrypted.

        If you chose **Enable encryption**, choose an encryption key for **AWS KMS key**. For more information about encryption keys, see [Getting started](https://docs.aws.amazon.com/kms/latest/developerguide/getting-started.html) in the *AWS Key Management Service Developer Guide.*

1. Choose **Add option**.

1. Apply the option group to a new or existing DB instance:
   + For a new DB instance, apply the option group when you launch the instance. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md). 
   + For an existing DB instance, apply the option group by modifying the instance and attaching the new option group. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md). 
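The prefix-matching behavior described in the procedure above can be sketched as a simple filter over S3 object keys. This is an illustration of the matching rule, not RDS's actual implementation, and the bucket layout below is made up:

```python
def select_backup_keys(keys, prefix):
    """Keep only keys under the given prefix, the way a backup prefix
    narrows a restore to one folder and its subfolders.

    An empty prefix matches every key, which is why a prefix-less
    multi-file restore considers every file in the bucket.
    """
    return [k for k in keys if k.startswith(prefix)]

# Illustrative bucket contents mixing backup files with unrelated objects.
keys = [
    "backups/sales/full.bak",
    "backups/sales/diff.bak",
    "backups/hr/full.bak",
    "reports/q1.csv",
]
```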

### CLI
<a name="Add.Native.Backup.Restore.CLI"></a>

This procedure makes the following assumptions:
+ You're adding the `SQLSERVER_BACKUP_RESTORE` option to an option group that already exists. For more information about adding options, see [Adding an option to an option group](USER_WorkingWithOptionGroups.md#USER_WorkingWithOptionGroups.AddOption).
+ You're associating the option with an IAM role that already exists and has access to an S3 bucket to store the backups.
+ You're applying the option group to a DB instance that already exists. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md). 

**To add the native backup and restore option**

1. Add the `SQLSERVER_BACKUP_RESTORE` option to the option group.  
**Example**  

   For Linux, macOS, or Unix:

   ```
   aws rds add-option-to-option-group \
   	--apply-immediately \
   	--option-group-name mybackupgroup \
   	--options "OptionName=SQLSERVER_BACKUP_RESTORE, \
   	  OptionSettings=[{Name=IAM_ROLE_ARN,Value=arn:aws:iam::account-id:role/role-name}]"
   ```

   For Windows:

   ```
   aws rds add-option-to-option-group ^
   	--option-group-name mybackupgroup ^
   	--options "[{\"OptionName\": \"SQLSERVER_BACKUP_RESTORE\", ^
   	\"OptionSettings\": [{\"Name\": \"IAM_ROLE_ARN\", ^
   	\"Value\": \"arn:aws:iam::account-id:role/role-name\"}]}]" ^
   	--apply-immediately
   ```
**Note**  
When using the Windows command prompt, you must escape double quotes (") in JSON code by prefixing them with a backslash (\).

1. Apply the option group to the DB instance.  
**Example**  

   For Linux, macOS, or Unix:

   ```
   aws rds modify-db-instance \
   	--db-instance-identifier mydbinstance \
   	--option-group-name mybackupgroup \
   	--apply-immediately
   ```

   For Windows:

   ```
   aws rds modify-db-instance ^
   	--db-instance-identifier mydbinstance ^
   	--option-group-name mybackupgroup ^
   	--apply-immediately
   ```

## Modifying native backup and restore option settings
<a name="Appendix.SQLServer.Options.BackupRestore.ModifySettings"></a>

After you enable the native backup and restore option, you can modify the settings for the option. For more information about how to modify option settings, see [Modifying an option setting](USER_WorkingWithOptionGroups.md#USER_WorkingWithOptionGroups.ModifyOption).

## Removing the native backup and restore option
<a name="Appendix.SQLServer.Options.BackupRestore.Remove"></a>

You can turn off native backup and restore by removing the option from your DB instance. After you remove the native backup and restore option, you don't need to restart your DB instance. 

To remove the native backup and restore option from a DB instance, do one of the following: 
+ Remove the option from the option group it belongs to. This change affects all DB instances that use the option group. For more information, see [Removing an option from an option group](USER_WorkingWithOptionGroups.md#USER_WorkingWithOptionGroups.RemoveOption). 
+ Modify the DB instance and specify a different option group that doesn't include the native backup and restore option. This change affects a single DB instance. You can specify the default (empty) option group, or a different custom option group. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md). 

# Support for Transparent Data Encryption in SQL Server
<a name="Appendix.SQLServer.Options.TDE"></a>

Amazon RDS supports using Transparent Data Encryption (TDE) to encrypt stored data on your DB instances running Microsoft SQL Server. TDE automatically encrypts data before it is written to storage, and automatically decrypts data when the data is read from storage. 

Amazon RDS supports TDE for the following SQL Server versions and editions:
+ SQL Server 2022 Standard and Enterprise Editions
+ SQL Server 2019 Standard and Enterprise Editions
+ SQL Server 2017 Enterprise Edition
+ SQL Server 2016 Enterprise Edition

**Note**  
RDS for SQL Server does not support TDE for read-only databases.

Transparent Data Encryption for SQL Server provides encryption key management by using a two-tier key architecture. A certificate, which is generated from the database master key, is used to protect the data encryption keys. The database encryption key performs the actual encryption and decryption of data on the user database. Amazon RDS backs up and manages the database master key and the TDE certificate.

Transparent Data Encryption is used in scenarios where you need to encrypt sensitive data. For example, you might want to provide data files and backups to a third party, or address security-related regulatory compliance issues. You can't encrypt the system databases for SQL Server, such as the `model` or `master` databases.

A detailed discussion of Transparent Data Encryption is beyond the scope of this guide, but make sure that you understand the security strengths and weaknesses of each encryption algorithm and key. For information about Transparent Data Encryption for SQL Server, see [Transparent Data Encryption (TDE)](http://msdn.microsoft.com/en-us/library/bb934049.aspx) in the Microsoft documentation.

**Topics**
+ [Turning on TDE for RDS for SQL Server](#TDE.Enabling)
+ [Encrypting data on RDS for SQL Server](TDE.Encrypting.md)
+ [Backing up and restoring TDE certificates on RDS for SQL Server](TDE.BackupRestoreRDS.md)
+ [Backing up and restoring TDE certificates for on-premises databases](TDE.BackupRestoreOnPrem.md)
+ [Turning off TDE for RDS for SQL Server](TDE.Disabling.md)

## Turning on TDE for RDS for SQL Server
<a name="TDE.Enabling"></a>

To turn on Transparent Data Encryption for an RDS for SQL Server DB instance, specify the TDE option in an RDS option group that's associated with that DB instance:

1. Determine whether your DB instance is already associated with an option group that has the TDE option. To view the option group that a DB instance is associated with, use the RDS console, the [describe-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html) AWS CLI command, or the API operation [DescribeDBInstances](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBInstances.html).

1.  If the DB instance isn't associated with an option group that has TDE turned on, you have two choices. You can create an option group and add the TDE option, or you can modify the associated option group to add it.
**Note**  
In the RDS console, the option is named `TRANSPARENT_DATA_ENCRYPTION`. In the AWS CLI and RDS API, it's named `TDE`.

   For information about creating or modifying an option group, see [Working with option groups](USER_WorkingWithOptionGroups.md). For information about adding an option to an option group, see [Adding an option to an option group](USER_WorkingWithOptionGroups.md#USER_WorkingWithOptionGroups.AddOption).

1.  Associate the DB instance with the option group that has the TDE option. For information about associating a DB instance with an option group, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

### Option group considerations
<a name="TDE.Options"></a>

The TDE option is a persistent option. You can't remove it from an option group unless all DB instances and backups are no longer associated with the option group. After you add the TDE option to an option group, the option group can be associated only with DB instances that use TDE. For more information about persistent options in an option group, see [Option groups overview](USER_WorkingWithOptionGroups.md#Overview.OptionGroups). 

Because the TDE option is a persistent option, you can have a conflict between the option group and an associated DB instance. You can have a conflict in the following situations:
+ The current option group has the TDE option, and you replace it with an option group that doesn't have the TDE option.
+ You restore from a DB snapshot to a new DB instance that doesn't have an option group that contains the TDE option. For more information about this scenario, see [Considerations for option groups](USER_CopySnapshot.md#USER_CopySnapshot.Options). 

### SQL Server performance considerations
<a name="TDE.Perf"></a>

Using Transparent Data Encryption can affect the performance of a SQL Server DB instance.

Performance for unencrypted databases can also be degraded if the databases are on a DB instance that has at least one encrypted database. As a result, we recommend that you keep encrypted and unencrypted databases on separate DB instances.

# Encrypting data on RDS for SQL Server
<a name="TDE.Encrypting"></a>

When the TDE option is added to an option group, Amazon RDS generates a certificate that's used in the encryption process. You can then use the certificate to run SQL statements that encrypt data in a database on the DB instance.

The following example uses the RDS-created certificate called `RDSTDECertificateName` to encrypt a database called `myDatabase`.

```
---------- Turning on TDE -------------

-- Find an RDS TDE certificate to use
USE [master]
GO
SELECT name FROM sys.certificates WHERE name LIKE 'RDSTDECertificate%'
GO

USE [myDatabase]
GO
-- Create a database encryption key (DEK) using one of the certificates from the previous step
CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE [RDSTDECertificateName]
GO

-- Turn on encryption for the database
ALTER DATABASE [myDatabase] SET ENCRYPTION ON
GO

-- Verify that the database is encrypted
USE [master]
GO
SELECT name FROM sys.databases WHERE is_encrypted = 1
GO
SELECT db_name(database_id) as DatabaseName, * FROM sys.dm_database_encryption_keys
GO
```

The time that it takes to encrypt a SQL Server database using TDE depends on several factors. These include the size of the DB instance, whether the instance uses Provisioned IOPS storage, the amount of data, and other factors.

# Backing up and restoring TDE certificates on RDS for SQL Server
<a name="TDE.BackupRestoreRDS"></a>

RDS for SQL Server provides stored procedures for backing up, restoring, and dropping TDE certificates. RDS for SQL Server also provides a function for viewing restored user TDE certificates.

User TDE certificates are used to restore on-premises databases that have TDE turned on to RDS for SQL Server. These certificates have the prefix `UserTDECertificate_`. After restoring databases, and before making them available to use, RDS modifies the databases that have TDE turned on to use RDS-generated TDE certificates. These certificates have the prefix `RDSTDECertificate`.

User TDE certificates remain on the RDS for SQL Server DB instance, unless you drop them using the `rds_drop_tde_certificate` stored procedure. For more information, see [Dropping restored TDE certificates](#TDE.BackupRestoreRDS.Drop).

You can use a user TDE certificate to restore other databases from the source DB instance. The databases to restore must use the same TDE certificate and have TDE turned on. You don't have to import (restore) the same certificate again. 

**Topics**
+ [Prerequisites](#TDE.BackupRestoreRDS.Prereqs)
+ [Limitations](#TDE.Limitations)
+ [Backing up a TDE certificate](#TDE.BackupRestoreRDS.Backup)
+ [Restoring a TDE certificate](#TDE.BackupRestoreRDS.Restore)
+ [Viewing restored TDE certificates](#TDE.BackupRestoreRDS.Show)
+ [Dropping restored TDE certificates](#TDE.BackupRestoreRDS.Drop)

## Prerequisites
<a name="TDE.BackupRestoreRDS.Prereqs"></a>

Before you can back up or restore TDE certificates on RDS for SQL Server, make sure to perform the following tasks. The first three are described in [Setting up for native backup and restore](SQLServer.Procedural.Importing.Native.Enabling.md).

1. Create Amazon S3 general purpose buckets or directory buckets for storing files to back up and restore.

   We recommend that you use separate buckets for database backups and for TDE certificate backups.

1. Create an IAM role for backing up and restoring files.

   The IAM role must be both a user and an administrator for the AWS KMS key.

   When using directory buckets, no additional permissions are required other than the permissions required for [Manually creating an IAM role for native backup and restore](SQLServer.Procedural.Importing.Native.Enabling.md#SQLServer.Procedural.Importing.Native.Enabling.IAM) with directory buckets.

   When using S3 resources, the IAM role also requires the following permissions in addition to the permissions required for [Manually creating an IAM role for native backup and restore](SQLServer.Procedural.Importing.Native.Enabling.md#SQLServer.Procedural.Importing.Native.Enabling.IAM):
   + `s3:GetBucketAcl`, `s3:GetBucketLocation`, and `s3:ListBucket` on the S3 bucket resource

1. Add the `SQLSERVER_BACKUP_RESTORE` option to an option group on your DB instance.

   This is in addition to the `TRANSPARENT_DATA_ENCRYPTION` (`TDE`) option.

1. Make sure that you have a symmetric encryption KMS key. You have the following options:
   + If you have an existing KMS key in your account, you can use it. No further action is necessary.
   + If you don't have an existing symmetric encryption KMS key in your account, create a KMS key by following the instructions in [Creating keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html#create-symmetric-cmk) in the *AWS Key Management Service Developer Guide*.

1. Enable Amazon S3 integration to transfer files between the DB instance and Amazon S3.

   For more information on enabling Amazon S3 integration, see [Integrating an Amazon RDS for SQL Server DB instance with Amazon S3](User.SQLServer.Options.S3-integration.md).

   Note that directory buckets are not supported for S3 integration. This step is only required for [Backing up and restoring TDE certificates for on-premises databases](TDE.BackupRestoreOnPrem.md).

## Limitations
<a name="TDE.Limitations"></a>

Using stored procedures to back up and restore TDE certificates has the following limitations:
+ Both the `SQLSERVER_BACKUP_RESTORE` and `TRANSPARENT_DATA_ENCRYPTION` (`TDE`) options must be added to the option group that you associated with your DB instance.
+ TDE certificate backup and restore aren't supported on Multi-AZ DB instances.
+ Canceling TDE certificate backup and restore tasks isn't supported.
+ You can't use a user TDE certificate for TDE encryption of any other database on your RDS for SQL Server DB instance. You can use it to restore only other databases from the source DB instance that have TDE turned on and that use the same TDE certificate.
+ You can drop only user TDE certificates.
+ The maximum number of user TDE certificates supported on RDS is 10. If the number exceeds 10, drop unused TDE certificates and try again.
+ The certificate name can't be empty or null.
+ When restoring a certificate, the certificate name can't include the keyword `RDSTDECERTIFICATE`, and must start with the `UserTDECertificate_` prefix.
+ The `@certificate_name` parameter can include only the following characters: a-z, 0-9, @, $, #, and underscore (\_).
+ The file extension for `@certificate_file_s3_arn` must be .cer (case-insensitive).
+ The file extension for `@private_key_file_s3_arn` must be .pvk (case-insensitive).
+ The S3 metadata for the private key file must include the `x-amz-meta-rds-tde-pwd` tag. For more information, see [Backing up and restoring TDE certificates for on-premises databases](TDE.BackupRestoreOnPrem.md).
+ RDS for SQL Server does not support using cross-account keys for TDE.
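Before calling the restore procedure, you can pre-check a certificate name against these limits on the client side. The following Python sketch is illustrative only (it mirrors the limitations above; RDS performs the authoritative validation, and the function name is an assumption):

```python
import re

# Illustrative client-side pre-check that mirrors the documented naming
# limits for restored (imported) TDE certificates. RDS performs its own
# validation; this only catches obvious mistakes before you call
# rds_restore_tde_certificate.
ALLOWED_CHARS = re.compile(r"^[A-Za-z0-9@$#_]+$")

def is_valid_restore_name(certificate_name):
    if not certificate_name:  # the name can't be empty or null
        return False
    if "RDSTDECERTIFICATE" in certificate_name.upper():  # reserved keyword
        return False
    if not certificate_name.startswith("UserTDECertificate_"):  # required prefix
        return False
    return bool(ALLOWED_CHARS.match(certificate_name))  # allowed character set
```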

## Backing up a TDE certificate
<a name="TDE.BackupRestoreRDS.Backup"></a>

To back up TDE certificates, use the `rds_backup_tde_certificate` stored procedure. It has the following syntax.

```
EXECUTE msdb.dbo.rds_backup_tde_certificate
    @certificate_name='UserTDECertificate_certificate_name | RDSTDECertificatetimestamp',
    @certificate_file_s3_arn='arn:aws:s3:::bucket_name/certificate_file_name.cer',
    @private_key_file_s3_arn='arn:aws:s3:::bucket_name/key_file_name.pvk',
    @kms_password_key_arn='arn:aws:kms:region:account-id:key/key-id',
    [@overwrite_s3_files=0|1];
```

The following parameters are required:
+ `@certificate_name` – The name of the TDE certificate to back up.
+ `@certificate_file_s3_arn` – The destination Amazon Resource Name (ARN) for the certificate backup file in Amazon S3.
+ `@private_key_file_s3_arn` – The destination S3 ARN of the private key file that secures the TDE certificate.
+ `@kms_password_key_arn` – The ARN of the symmetric KMS key used to encrypt the private key password.

The following parameter is optional:
+ `@overwrite_s3_files` – Indicates whether to overwrite the existing certificate and private key files in S3:
  + `0` – Doesn't overwrite the existing files. This value is the default.

    Setting `@overwrite_s3_files` to 0 returns an error if a file already exists.
  + `1` – Overwrites an existing file that has the specified name, even if it isn't a backup file.

**Example of backing up a TDE certificate**  

```
EXECUTE msdb.dbo.rds_backup_tde_certificate
    @certificate_name='RDSTDECertificate20211115T185333',
    @certificate_file_s3_arn='arn:aws:s3:::TDE_certs/mycertfile.cer',
    @private_key_file_s3_arn='arn:aws:s3:::TDE_certs/mykeyfile.pvk',
    @kms_password_key_arn='arn:aws:kms:us-west-2:123456789012:key/AKIAIOSFODNN7EXAMPLE',
    @overwrite_s3_files=1;
```

## Restoring a TDE certificate
<a name="TDE.BackupRestoreRDS.Restore"></a>

You use the `rds_restore_tde_certificate` stored procedure to restore (import) user TDE certificates. It has the following syntax.

```
EXECUTE msdb.dbo.rds_restore_tde_certificate
    @certificate_name='UserTDECertificate_certificate_name',
    @certificate_file_s3_arn='arn:aws:s3:::bucket_name/certificate_file_name.cer',
    @private_key_file_s3_arn='arn:aws:s3:::bucket_name/key_file_name.pvk',
    @kms_password_key_arn='arn:aws:kms:region:account-id:key/key-id';
```

The following parameters are required:
+ `@certificate_name` – The name of the TDE certificate to restore. The name must start with the `UserTDECertificate_` prefix.
+ `@certificate_file_s3_arn` – The S3 ARN of the backup file used to restore the TDE certificate.
+ `@private_key_file_s3_arn` – The S3 ARN of the private key backup file of the TDE certificate to be restored.
+ `@kms_password_key_arn` – The ARN of the symmetric KMS key used to encrypt the private key password.

**Example of restoring a TDE certificate**  

```
EXECUTE msdb.dbo.rds_restore_tde_certificate
    @certificate_name='UserTDECertificate_myTDEcertificate',
    @certificate_file_s3_arn='arn:aws:s3:::TDE_certs/mycertfile.cer',
    @private_key_file_s3_arn='arn:aws:s3:::TDE_certs/mykeyfile.pvk',
    @kms_password_key_arn='arn:aws:kms:us-west-2:123456789012:key/AKIAIOSFODNN7EXAMPLE';
```

## Viewing restored TDE certificates
<a name="TDE.BackupRestoreRDS.Show"></a>

You use the `rds_fn_list_user_tde_certificates` function to view restored (imported) user TDE certificates. It has the following syntax.

```
SELECT * FROM msdb.dbo.rds_fn_list_user_tde_certificates();
```

The output resembles the following. Not all columns are shown here.


| name | certificate\_id | principal\_id | pvt\_key\_encryption\_type\_desc | issuer\_name | cert\_serial\_number | thumbprint | subject | start\_date | expiry\_date | pvt\_key\_last\_backup\_date | 
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| UserTDECertificate\_tde\_cert | 343 | 1 | ENCRYPTED\_BY\_MASTER\_KEY | AnyCompany Shipping | 79 3e 57 a3 69 fd 1d 9e 47 2c 32 67 1d 9c ca af | 0x6BB218B34110388680B FE1BA2D86C695096485B5 | AnyCompany Shipping | 2022-04-05 19:49:45.0000000 | 2023-04-05 19:49:45.0000000 | NULL | 

## Dropping restored TDE certificates
<a name="TDE.BackupRestoreRDS.Drop"></a>

To drop restored (imported) user TDE certificates that you aren't using, use the `rds_drop_tde_certificate` stored procedure. It has the following syntax.

```
EXECUTE msdb.dbo.rds_drop_tde_certificate @certificate_name='UserTDECertificate_certificate_name';
```

The following parameter is required:
+ `@certificate_name` – The name of the TDE certificate to drop.

You can drop only restored (imported) TDE certificates. You can't drop RDS-created certificates.

**Example of dropping a TDE certificate**  

```
EXECUTE msdb.dbo.rds_drop_tde_certificate @certificate_name='UserTDECertificate_myTDEcertificate';
```

# Backing up and restoring TDE certificates for on-premises databases
<a name="TDE.BackupRestoreOnPrem"></a>

You can back up TDE certificates for on-premises databases, then later restore them to RDS for SQL Server. You can also restore an RDS for SQL Server TDE certificate to an on-premises DB instance.

**Note**  
RDS for SQL Server does not support using cross-account keys for TDE.

The following procedure backs up a TDE certificate and private key. The private key is encrypted using a data key generated from your symmetric encryption KMS key.

**To back up an on-premises TDE certificate**

1. Generate the data key using the AWS CLI [generate-data-key](https://docs.aws.amazon.com/cli/latest/reference/kms/generate-data-key.html) command.

   ```
   aws kms generate-data-key \
       --key-id my_KMS_key_ID \
       --key-spec AES_256
   ```

   The output resembles the following.

   ```
   {
   "CiphertextBlob": "AQIDAHimL2NEoAlOY6Bn7LJfnxi/OZe9kTQo/XQXduug1rmerwGiL7g5ux4av9GfZLxYTDATAAAAfjB8BgkqhkiG9w0B
   BwagbzBtAgEAMGgGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMyCxLMi7GRZgKqD65AgEQgDtjvZLJo2cQ31Vetngzm2ybHDc3d2vI74SRUzZ
   2RezQy3sAS6ZHrCjfnfn0c65bFdhsXxjSMnudIY7AKw==",
   "Plaintext": "U/fpGtmzGCYBi8A2+0/9qcRQRK2zmG/aOn939ZnKi/0=",
   "KeyId": "arn:aws:kms:us-west-2:123456789012:key/1234abcd-00ee-99ff-88dd-aa11bb22cc33"
   }
   ```

   You use the plain text output in the next step as the private key password.

1. Back up your TDE certificate as shown in the following example.

   ```
   BACKUP CERTIFICATE myOnPremTDEcertificate TO FILE = 'D:\tde-cert-backup.cer'
   WITH PRIVATE KEY (
   FILE = 'C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\DATA\cert-backup-key.pvk',
   ENCRYPTION BY PASSWORD = 'U/fpGtmzGCYBi8A2+0/9qcRQRK2zmG/aOn939ZnKi/0=');
   ```

1. Save the certificate backup file to your Amazon S3 certificate bucket.

1. Save the private key backup file to your S3 certificate bucket, with the following tag in the file's metadata:
   + Key – `x-amz-meta-rds-tde-pwd`
   + Value – The `CiphertextBlob` value from generating the data key, as in the following example.

     ```
     AQIDAHimL2NEoAlOY6Bn7LJfnxi/OZe9kTQo/XQXduug1rmerwGiL7g5ux4av9GfZLxYTDATAAAAfjB8BgkqhkiG9w0B
     BwagbzBtAgEAMGgGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMyCxLMi7GRZgKqD65AgEQgDtjvZLJo2cQ31Vetngzm2ybHDc3d2vI74SRUzZ
     2RezQy3sAS6ZHrCjfnfn0c65bFdhsXxjSMnudIY7AKw==
     ```

The following procedure restores an RDS for SQL Server TDE certificate to an on-premises DB instance. You copy and restore the TDE certificate on your destination DB instance using the certificate backup, corresponding private key file, and data key. The restored certificate is encrypted by the database master key of the new server. 

**To restore a TDE certificate**

1. Copy the TDE certificate backup file and private key file from Amazon S3 to the destination instance. For more information on copying files from Amazon S3, see [Transferring files between RDS for SQL Server and Amazon S3](Appendix.SQLServer.Options.S3-integration.using.md).

1. Use your KMS key to decrypt the output cipher text to retrieve the plain text of the data key. The cipher text is located in the S3 metadata of the private key backup file.

   ```
   aws kms decrypt \
       --key-id my_KMS_key_ID \
       --ciphertext-blob fileb://exampleCiphertextFile \
       --output text \
       --query Plaintext
   ```

   You use the plain text output in the next step as the private key password.

1. Use the following SQL command to restore your TDE certificate.

   ```
   CREATE CERTIFICATE myOnPremTDEcertificate FROM FILE='D:\tde-cert-backup.cer'
   WITH PRIVATE KEY (FILE = N'D:\tde-cert-key.pvk',
   DECRYPTION BY PASSWORD = 'plain_text_output');
   ```
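The `CiphertextBlob` value stored in the private key file's S3 metadata is base64 text, while the CLI's `fileb://` prefix expects raw bytes. A minimal standard-library sketch that decodes the metadata value into a binary file (the file name and helper function are illustrative):

```python
import base64

# The x-amz-meta-rds-tde-pwd metadata value is the base64-encoded
# CiphertextBlob from generate-data-key. Decode it to raw bytes so the
# resulting file can be passed to: aws kms decrypt --ciphertext-blob fileb://...
def write_ciphertext_file(ciphertext_b64, path="exampleCiphertextFile"):
    raw = base64.b64decode(ciphertext_b64)
    with open(path, "wb") as f:
        f.write(raw)
    return len(raw)  # number of ciphertext bytes written
```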

For more information on KMS decryption, see [decrypt](https://docs.aws.amazon.com/cli/latest/reference/kms/decrypt.html) in the KMS section of the *AWS CLI Command Reference*.

After the TDE certificate is restored on the destination DB instance, you can restore encrypted databases with that certificate.

**Note**  
You can use the same TDE certificate to encrypt multiple SQL Server databases on the source DB instance. To migrate multiple databases to a destination instance, copy the TDE certificate associated with them to the destination instance only once.

# Turning off TDE for RDS for SQL Server
<a name="TDE.Disabling"></a>

To turn off TDE for an RDS for SQL Server DB instance, first make sure that there are no encrypted objects left on the DB instance. To do so, either decrypt the objects or drop them. If any encrypted objects exist on the DB instance, you can't turn off TDE for the DB instance. If a user TDE certificate for encryption was restored (imported), then it should be dropped. When you use the console to remove the TDE option from an option group, the console indicates that it's processing. In addition, an error event is created if the option group is associated with an encrypted DB instance or DB snapshot.

The following example removes the TDE encryption from a database called `customerDatabase`. 

```
------------- Removing TDE ----------------

USE [customerDatabase]
GO

-- Turn off encryption of the database
ALTER DATABASE [customerDatabase]
SET ENCRYPTION OFF
GO

-- Wait until the encryption state of the database becomes 1. The state is 5 (decryption in progress) for a while
SELECT db_name(database_id) as DatabaseName, * FROM sys.dm_database_encryption_keys
GO

-- Drop the DEK used for encryption
DROP DATABASE ENCRYPTION KEY
GO

-- Drop a user TDE certificate if it was restored (imported)
EXECUTE msdb.dbo.rds_drop_tde_certificate @certificate_name='UserTDECertificate_certificate_name';

-- Alter to SIMPLE Recovery mode so that your encrypted log gets truncated
USE [master]
GO
ALTER DATABASE [customerDatabase] SET RECOVERY SIMPLE
GO
```
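When you poll `sys.dm_database_encryption_keys`, the `encryption_state` column reports decryption progress. The state values below come from the SQL Server documentation; the small helper is an illustrative sketch for a monitoring script, not an RDS API:

```python
# encryption_state values reported by sys.dm_database_encryption_keys,
# per the SQL Server documentation.
ENCRYPTION_STATES = {
    0: "No database encryption key present, no encryption",
    1: "Unencrypted",
    2: "Encryption in progress",
    3: "Encrypted",
    4: "Key change in progress",
    5: "Decryption in progress",
    6: "Protection change in progress",
}

def decryption_done(state):
    # TDE removal is complete once the state returns to 1 (unencrypted).
    return state == 1
```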

When all objects are decrypted, you have two options:

1. You can modify the DB instance to be associated with an option group without the TDE option.

1. You can remove the TDE option from the option group.

# SQL Server Audit
<a name="Appendix.SQLServer.Options.Audit"></a>

In Amazon RDS, you can audit Microsoft SQL Server databases by using the built-in SQL Server auditing mechanism. You can create audits and audit specifications in the same way that you create them for on-premises database servers. 

RDS uploads the completed audit logs to your S3 bucket, using the IAM role that you provide. If you enable retention, RDS keeps your audit logs on your DB instance for the configured period of time.

For more information, see [SQL Server Audit (database engine)](https://docs.microsoft.com/sql/relational-databases/security/auditing/sql-server-audit-database-engine) in the Microsoft SQL Server documentation.

## SQL Server Audit with Database Activity Streams
<a name="Appendix.SQLServer.DAS.Audit"></a>

You can use Database Activity Streams for RDS to integrate SQL Server Audit events with database activity monitoring tools from Imperva, McAfee, and IBM. For more information about auditing with Database Activity Streams for RDS for SQL Server, see [Auditing in Microsoft SQL Server](DBActivityStreams.md#DBActivityStreams.Overview.SQLServer-auditing).

**Topics**
+ [SQL Server Audit with Database Activity Streams](#Appendix.SQLServer.DAS.Audit)
+ [Support for SQL Server Audit](#Appendix.SQLServer.Options.Audit.Support)
+ [Adding SQL Server Audit to the DB instance options](Appendix.SQLServer.Options.Audit.Adding.md)
+ [Using SQL Server Audit](Appendix.SQLServer.Options.Audit.CreateAuditsAndSpecifications.md)
+ [Viewing audit logs](Appendix.SQLServer.Options.Audit.AuditRecords.md)
+ [Using SQL Server Audit with Multi-AZ instances](#Appendix.SQLServer.Options.Audit.Multi-AZ)
+ [Configuring an S3 bucket](Appendix.SQLServer.Options.Audit.S3bucket.md)
+ [Manually creating an IAM role for SQL Server Audit](Appendix.SQLServer.Options.Audit.IAM.md)

## Support for SQL Server Audit
<a name="Appendix.SQLServer.Options.Audit.Support"></a>

In Amazon RDS, for SQL Server 2016 releases earlier than SP1, all editions of SQL Server support server-level audits, and the Enterprise edition also supports database-level audits. Starting with SQL Server 2016 (13.x) SP1, all editions support both server-level and database-level audits. For more information, see [SQL Server Audit (database engine)](https://docs.microsoft.com/sql/relational-databases/security/auditing/sql-server-audit-database-engine) in the SQL Server documentation.

RDS supports configuring the following option settings for SQL Server Audit. 


| Option setting | Valid values | Description | 
| --- | --- | --- | 
| IAM\_ROLE\_ARN | A valid Amazon Resource Name (ARN) in the format arn:aws:iam::account-id:role/role-name. | The ARN of the IAM role that grants access to the S3 bucket where you want to store your audit logs. For more information, see [Amazon Resource Names (ARNs)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-iam) in the AWS General Reference. | 
| S3\_BUCKET\_ARN | A valid ARN in the format arn:aws:s3:::amzn-s3-demo-bucket or arn:aws:s3:::amzn-s3-demo-bucket/key-prefix | The ARN for the S3 bucket where you want to store your audit logs. | 
| ENABLE\_COMPRESSION | true or false | Controls audit log compression. By default, compression is enabled (set to true). | 
| RETENTION\_TIME | 0 to 840 | The retention time (in hours) that SQL Server audit records are kept on your RDS instance. By default, retention is disabled. | 
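You can sanity-check these settings before modifying the option group. The following sketch is illustrative only (the ARN patterns are simplified, the function is not an RDS API, and RDS performs its own validation):

```python
import re

# Simplified patterns for the documented option-setting formats.
S3_BUCKET_ARN = re.compile(r"^arn:aws:s3:::[a-z0-9.\-]+(/.*)?$")
IAM_ROLE_ARN = re.compile(r"^arn:aws:iam::\d{12}:role/.+$")

def validate_audit_settings(iam_role_arn, s3_bucket_arn, retention_time=0):
    # RETENTION_TIME is expressed in hours; 840 hours = 35 days.
    if not IAM_ROLE_ARN.match(iam_role_arn):
        raise ValueError("IAM_ROLE_ARN must look like arn:aws:iam::account-id:role/role-name")
    if not S3_BUCKET_ARN.match(s3_bucket_arn):
        raise ValueError("S3_BUCKET_ARN must look like arn:aws:s3:::bucket or arn:aws:s3:::bucket/key-prefix")
    if not 0 <= retention_time <= 840:
        raise ValueError("RETENTION_TIME must be between 0 and 840 hours")
```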

# Adding SQL Server Audit to the DB instance options
<a name="Appendix.SQLServer.Options.Audit.Adding"></a>

Enabling SQL Server Audit requires two steps: enabling the option on the DB instance, and enabling the feature inside SQL Server. The process for adding the SQL Server Audit option to a DB instance is as follows: 

1. Create a new option group, or copy or modify an existing option group. 

1. Add and configure all required options.

1. Associate the option group with the DB instance.

After you add the SQL Server Audit option, you don't need to restart your DB instance. As soon as the option group is active, you can create audits and store audit logs in your S3 bucket. 

**To add and configure SQL Server Audit on a DB instance's option group**

1. Choose one of the following:
   + Use an existing option group.
   + Create a custom DB option group and use that option group. For more information, see [Creating an option group](USER_WorkingWithOptionGroups.md#USER_WorkingWithOptionGroups.Create). 

1. Add the **SQLSERVER\_AUDIT** option to the option group, and configure the option settings. For more information about adding options, see [Adding an option to an option group](USER_WorkingWithOptionGroups.md#USER_WorkingWithOptionGroups.AddOption). 
   + For **IAM role**, if you already have an IAM role with the required policies, you can choose that role. To create a new IAM role, choose **Create a New Role**. For information about the required policies, see [Manually creating an IAM role for SQL Server Audit](Appendix.SQLServer.Options.Audit.IAM.md).
   + For **Select S3 destination**, if you already have an S3 bucket that you want to use, choose it. To create an S3 bucket, choose **Create a New S3 Bucket**. 
   + For **Enable Compression**, leave this option chosen to compress audit files. Compression is enabled by default. To disable compression, clear **Enable Compression**. 
   + For **Audit log retention**, to keep audit records on the DB instance, choose this option. Specify a retention time in hours. The maximum retention time is 35 days.

1. Apply the option group to a new or existing DB instance. Choose one of the following:
   + If you are creating a new DB instance, apply the option group when you launch the instance. 
   + On an existing DB instance, apply the option group by modifying the instance and then attaching the new option group. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md). 

## Modifying the SQL Server Audit option
<a name="Appendix.SQLServer.Options.Audit.Modifying"></a>

After you enable the SQL Server Audit option, you can modify the settings. For information about how to modify option settings, see [Modifying an option setting](USER_WorkingWithOptionGroups.md#USER_WorkingWithOptionGroups.ModifyOption).

## Removing SQL Server Audit from the DB instance options
<a name="Appendix.SQLServer.Options.Audit.Removing"></a>

You can turn off the SQL Server Audit feature by disabling audits and then deleting the option. 

**To remove auditing**

1. Disable all of the audit settings inside SQL Server. To learn where audits are running, query the SQL Server security catalog views. For more information, see [Security catalog views](https://docs.microsoft.com/sql/relational-databases/system-catalog-views/security-catalog-views-transact-sql) in the Microsoft SQL Server documentation. 

1. Delete the SQL Server Audit option from the DB instance. Choose one of the following: 
   + Delete the SQL Server Audit option from the option group that the DB instance uses. This change affects all DB instances that use the same option group. For more information, see [Removing an option from an option group](USER_WorkingWithOptionGroups.md#USER_WorkingWithOptionGroups.RemoveOption).
   + Modify the DB instance, and then choose an option group without the SQL Server Audit option. This change affects only the DB instance that you modify. You can specify the default (empty) option group, or a different custom option group. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

1. After you delete the SQL Server Audit option from the DB instance, you don't need to restart the instance. Remove unneeded audit files from your S3 bucket.

# Using SQL Server Audit
<a name="Appendix.SQLServer.Options.Audit.CreateAuditsAndSpecifications"></a>

You can control server audits, server audit specifications, and database audit specifications the same way that you control them for on-premises database servers.

## Creating audits
<a name="Appendix.SQLServer.Options.Audit.CreateAudits"></a>

You create server audits in the same way that you create them for on-premises database servers. For information about how to create server audits, see [CREATE SERVER AUDIT](https://docs.microsoft.com/sql/t-sql/statements/create-server-audit-transact-sql) in the Microsoft SQL Server documentation.

To avoid errors, adhere to the following limitations:
+ Don't exceed the maximum of 50 server audits per DB instance.
+ Instruct SQL Server to write data to a binary file.
+ Don't use `RDS_` as a prefix in the server audit name.
+ For `FILEPATH`, specify `D:\rdsdbdata\SQLAudit`.
+ For `MAXSIZE`, specify a size between 2 MB and 50 MB.
+ Don't configure `MAX_ROLLOVER_FILES` or `MAX_FILES`.
+ Don't configure SQL Server to shut down the DB instance if it fails to write the audit record.
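As a sketch of the limitations above, the following illustrative helper renders a `CREATE SERVER AUDIT` statement that stays within them (the helper and its defaults are assumptions for illustration, not an RDS API):

```python
def render_create_server_audit(name, maxsize_mb=50):
    # Enforce the RDS limitations: no RDS_ prefix, a binary file target
    # under D:\rdsdbdata\SQLAudit, and MAXSIZE between 2 MB and 50 MB.
    # Rollover settings and ON_FAILURE = SHUTDOWN are intentionally not emitted.
    if name.upper().startswith("RDS_"):
        raise ValueError("Server audit names must not use the RDS_ prefix")
    if not 2 <= maxsize_mb <= 50:
        raise ValueError("MAXSIZE must be between 2 MB and 50 MB")
    return (
        f"CREATE SERVER AUDIT [{name}]\n"
        "TO FILE (FILEPATH = 'D:\\rdsdbdata\\SQLAudit', "
        f"MAXSIZE = {maxsize_mb} MB)\n"
        "WITH (ON_FAILURE = CONTINUE);"
    )
```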

## Creating audit specifications
<a name="Appendix.SQLServer.Options.Audit.CreateSpecifications"></a>

You create server audit specifications and database audit specifications the same way that you create them for on-premises database servers. For information about creating audit specifications, see [CREATE SERVER AUDIT SPECIFICATION](https://docs.microsoft.com/sql/t-sql/statements/create-server-audit-specification-transact-sql) and [CREATE DATABASE AUDIT SPECIFICATION](https://docs.microsoft.com/sql/t-sql/statements/create-database-audit-specification-transact-sql) in the Microsoft SQL Server documentation.

To avoid errors, don't use `RDS_` as a prefix in the name of the database audit specification or server audit specification. 

# Viewing audit logs
<a name="Appendix.SQLServer.Options.Audit.AuditRecords"></a>

Your audit logs are stored in `D:\rdsdbdata\SQLAudit`.

After SQL Server finishes writing to an audit log file—when the file reaches its size limit—Amazon RDS uploads the file to your S3 bucket. If retention is enabled, Amazon RDS moves the file into the retention folder: `D:\rdsdbdata\SQLAudit\transmitted`. 

For information about configuring retention, see [Adding SQL Server Audit to the DB instance options](Appendix.SQLServer.Options.Audit.Adding.md).

Audit records are kept on the DB instance until the audit log file is uploaded. You can view the audit records by running the following command.

```
SELECT *
FROM msdb.dbo.rds_fn_get_audit_file
    ('D:\rdsdbdata\SQLAudit\*.sqlaudit', default, default);
```

You can use the same command to view audit records in your retention folder by changing the filter to `D:\rdsdbdata\SQLAudit\transmitted\*.sqlaudit`.

```
SELECT *
FROM msdb.dbo.rds_fn_get_audit_file
    ('D:\rdsdbdata\SQLAudit\transmitted\*.sqlaudit', default, default);
```

## Using SQL Server Audit with Multi-AZ instances
<a name="Appendix.SQLServer.Options.Audit.Multi-AZ"></a>

For Multi-AZ instances, the process for sending audit log files to Amazon S3 is similar to the process for Single-AZ instances. However, there are some important differences: 
+ Database audit specification objects are replicated to all nodes.
+ Server audits and server audit specifications aren't replicated to secondary nodes. Instead, you have to create or modify them manually.

To capture server audits or a server audit specification from both nodes:

1. Create a server audit or a server audit specification on the primary node.

1. Fail over to the secondary node and create a server audit or a server audit specification with the same name and GUID on the secondary node. Use the `AUDIT_GUID` parameter to specify the GUID.

# Configuring an S3 bucket
<a name="Appendix.SQLServer.Options.Audit.S3bucket"></a>

The audit log files are automatically uploaded from the DB instance to your S3 bucket. The following restrictions apply to the S3 bucket that you use as a target for audit files: 
+ It must be in the same AWS Region and AWS account as the DB instance.
+ It must not be open to the public.
+ The bucket owner must also be the IAM role owner.
+ Your IAM role must have permissions for the customer-managed KMS key associated with the S3 bucket server-side encryption.

The target key that is used to store the data follows this naming schema: `amzn-s3-demo-bucket/key-prefix/instance-name/audit-name/node_file-name.ext` 

**Note**  
You set both the bucket name and the key prefix values with the `S3_BUCKET_ARN` option setting.

The schema is composed of the following elements:
+ ***amzn-s3-demo-bucket*** – The name of your S3 bucket.
+ **`key-prefix`** – The custom key prefix you want to use for audit logs.
+ **`instance-name`** – The name of your Amazon RDS instance.
+ **`audit-name`** – The name of the audit.
+ **`node`** – The identifier of the node that is the source of the audit logs (`node1` or `node2`). There is one node for a Single-AZ instance and two replication nodes for a Multi-AZ instance. These are not primary and secondary nodes, because the roles of primary and secondary change over time. Instead, the node identifier is a simple label. 
  + **`node1`** – The first replication node (Single-AZ has one node only).
  + **`node2`** – The second replication node (Multi-AZ has two nodes).
+ **`file-name`** – The target file name. The file name is taken as-is from SQL Server.
+ **`ext`** – The extension of the file (`zip` or `sqlaudit`):
  + **`zip`** – If compression is enabled (default).
  + **`sqlaudit`** – If compression is disabled.
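The schema can be sketched as a small function (illustrative only; RDS constructs the real key, and the function name is an assumption):

```python
def audit_log_key(bucket, key_prefix, instance_name,
                  audit_name, node, file_name, compressed=True):
    # Mirrors the documented schema:
    # amzn-s3-demo-bucket/key-prefix/instance-name/audit-name/node_file-name.ext
    # The extension is zip when compression is enabled, sqlaudit otherwise.
    ext = "zip" if compressed else "sqlaudit"
    return f"{bucket}/{key_prefix}/{instance_name}/{audit_name}/{node}_{file_name}.{ext}"
```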

# Manually creating an IAM role for SQL Server Audit
<a name="Appendix.SQLServer.Options.Audit.IAM"></a>

Typically, when you create a new option, the AWS Management Console creates the IAM role and the IAM trust policy for you. However, you can manually create a new IAM role to use with SQL Server Audits, so that you can customize it with any additional requirements you might have. To do this, you create an IAM role and delegate permissions so that the Amazon RDS service can use your Amazon S3 bucket. When you create this IAM role, you attach trust and permissions policies. The trust policy allows Amazon RDS to assume this role. The permission policy defines the actions that this role can do. For more information, see [Creating a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the *AWS Identity and Access Management User Guide*. 

You can use the examples in this section to create the trust relationships and permissions policies you need.

The following example shows a trust relationship for SQL Server Audit. It uses the *service principal* `rds.amazonaws.com` to allow RDS to write to the S3 bucket. A *service principal* is an identifier that is used to grant permissions to a service. Anytime you allow access to `rds.amazonaws.com` in this way, you are allowing RDS to perform an action on your behalf. For more information about service principals, see [AWS JSON policy elements: Principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html).

**Example trust relationship for SQL Server Audit**  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "rds.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

We recommend using the [`aws:SourceArn`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn) and [`aws:SourceAccount`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceaccount) global condition context keys in resource-based trust relationships to limit the service's permissions to a specific resource. This is the most effective way to protect against the [confused deputy problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html).

You might use both global condition context keys and have the `aws:SourceArn` value contain the account ID. In this case, the `aws:SourceAccount` value and the account in the `aws:SourceArn` value must use the same account ID when used in the same statement.
+ Use `aws:SourceArn` if you want cross-service access for a single resource.
+ Use `aws:SourceAccount` if you want to allow any resource in that account to be associated with the cross-service use.

In the trust relationship, make sure to use the `aws:SourceArn` global condition context key with the full Amazon Resource Name (ARN) of the resources accessing the role. For SQL Server Audit, make sure to include both the DB option group and the DB instances, as shown in the following example.

**Example trust relationship with global condition context key for SQL Server Audit**  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "rds.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:SourceArn": [
                        "arn:aws:rds:Region:my_account_ID:db:db_instance_identifier",
                        "arn:aws:rds:Region:my_account_ID:og:option_group_name"
                    ]
                }
            }
        }
    ]
}
```

In the following example of a permissions policy for SQL Server Audit, we specify an ARN for the Amazon S3 bucket. You can use ARNs to identify a specific account, user, or role that you want to grant access to. For more information about using ARNs, see [Amazon resource names (ARNs)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).

**Example permissions policy for SQL Server Audit**  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketACL",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/key_prefix/*"
        }
    ]
}
```

**Note**  
The `s3:ListAllMyBuckets` action is required for verifying that the same AWS account owns both the S3 bucket and the SQL Server DB instance. The action lists the names of the buckets in the account.  
S3 bucket namespaces are global. If you accidentally delete your bucket, another user can create a bucket with the same name in a different account. Then the SQL Server Audit data is written to the new bucket.

# Support for SQL Server Analysis Services in Amazon RDS for SQL Server
<a name="Appendix.SQLServer.Options.SSAS"></a>

Microsoft SQL Server Analysis Services (SSAS) is part of the Microsoft Business Intelligence (MSBI) suite. SSAS is an online analytical processing (OLAP) and data mining tool that is installed within SQL Server. You use SSAS to analyze data to help make business decisions. SSAS differs from the SQL Server relational database because SSAS is optimized for queries and calculations common in a business intelligence environment.

 You can turn on SSAS for existing or new DB instances. It's installed on the same DB instance as your database engine. For more information on SSAS, see the Microsoft [Analysis services documentation](https://docs.microsoft.com/en-us/analysis-services).

Amazon RDS supports SSAS for SQL Server Standard and Enterprise Editions on the following versions:
+ Tabular mode:
  + SQL Server 2019, version 15.00.4043.16.v1 and higher
  + SQL Server 2017, version 14.00.3223.3.v1 and higher
  + SQL Server 2016, version 13.00.5426.0.v1 and higher
+ Multidimensional mode:
  + SQL Server 2019, version 15.00.4153.1.v1 and higher
  + SQL Server 2017, version 14.00.3381.3.v1 and higher
  + SQL Server 2016, version 13.00.5882.1.v1 and higher

**Contents**
+ [Limitations](#SSAS.Limitations)
+ [Turning on SSAS](SSAS.Enabling.md)
  + [Creating an option group for SSAS](SSAS.Enabling.md#SSAS.OptionGroup)
  + [Adding the SSAS option to the option group](SSAS.Enabling.md#SSAS.Add)
  + [Associating the option group with your DB instance](SSAS.Enabling.md#SSAS.Apply)
  + [Allowing inbound access to your VPC security group](SSAS.Enabling.md#SSAS.InboundRule)
  + [Enabling Amazon S3 integration](SSAS.Enabling.md#SSAS.EnableS3)
+ [Deploying SSAS projects on Amazon RDS](SSAS.Deploy.md)
+ [Monitoring the status of a deployment task](SSAS.Monitor.md)
+ [Using SSAS on Amazon RDS](SSAS.Use.md)
  + [Setting up a Windows-authenticated user for SSAS](SSAS.Use.md#SSAS.Use.Auth)
  + [Adding a domain user as a database administrator](SSAS.Use.md#SSAS.Admin)
  + [Creating an SSAS proxy](SSAS.Use.md#SSAS.Use.Proxy)
  + [Scheduling SSAS database processing using SQL Server Agent](SSAS.Use.md#SSAS.Use.Schedule)
  + [Revoking SSAS access from the proxy](SSAS.Use.md#SSAS.Use.Revoke)
+ [Backing up an SSAS database](SSAS.Backup.md)
+ [Restoring an SSAS database](SSAS.Restore.md)
  + [Restoring a DB instance to a specified time](SSAS.Restore.md#SSAS.PITR)
+ [Changing the SSAS mode](SSAS.ChangeMode.md)
+ [Turning off SSAS](SSAS.Disable.md)
+ [Troubleshooting SSAS issues](SSAS.Trouble.md)

## Limitations
<a name="SSAS.Limitations"></a>

The following limitations apply to using SSAS on RDS for SQL Server:
+ RDS for SQL Server supports running SSAS in Tabular or Multidimensional mode. For more information, see [Comparing tabular and multidimensional solutions](https://docs.microsoft.com/en-us/analysis-services/comparing-tabular-and-multidimensional-solutions-ssas) in the Microsoft documentation.
+ You can only use one SSAS mode at a time. Before changing modes, make sure to delete all of the SSAS databases.

  For more information, see [Changing the SSAS mode](SSAS.ChangeMode.md).
+ Multi-AZ instances aren't supported.
+ Instances must use self-managed Active Directory or AWS Directory Service for Microsoft Active Directory for SSAS authentication. For more information, see [Working with Active Directory with RDS for SQL Server](User.SQLServer.ActiveDirectoryWindowsAuth.md).
+ Users aren't given SSAS server administrator access, but they can be granted database-level administrator access.
+ The only supported port for accessing SSAS is 2383.
+ You can't deploy projects directly. We provide an RDS stored procedure to do this. For more information, see [Deploying SSAS projects on Amazon RDS](SSAS.Deploy.md).
+ Processing during deployment isn't supported.
+ Using .xmla files for deployment isn't supported.
+ SSAS project input files and database backup output files can only be in the `D:\S3` folder on the DB instance.

# Turning on SSAS
<a name="SSAS.Enabling"></a>

Use the following process to turn on SSAS for your DB instance:

1. Create a new option group, or choose an existing option group.

1. Add the `SSAS` option to the option group.

1. Associate the option group with the DB instance.

1. Allow inbound access to the virtual private cloud (VPC) security group for the SSAS listener port.

1. Turn on Amazon S3 integration.

## Creating an option group for SSAS
<a name="SSAS.OptionGroup"></a>

Use the AWS Management Console or the AWS CLI to create an option group that corresponds to the SQL Server engine and version of the DB instance that you plan to use.

**Note**  
You can also use an existing option group if it's for the correct SQL Server engine and version.

### Console
<a name="SSAS.OptionGroup.Console"></a>

The following console procedure creates an option group for SQL Server Standard Edition 2017.

**To create the option group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose **Create group**.

1. In the **Create option group** pane, do the following:

   1. For **Name**, enter a name for the option group that is unique within your AWS account, such as **ssas-se-2017**. The name can contain only letters, digits, and hyphens.

   1. For **Description**, enter a brief description of the option group, such as **SSAS option group for SQL Server SE 2017**. The description is used for display purposes.

   1. For **Engine**, choose **sqlserver-se**.

   1. For **Major engine version**, choose **14.00**.

1. Choose **Create**.

### CLI
<a name="SSAS.OptionGroup.CLI"></a>

The following CLI example creates an option group for SQL Server Standard Edition 2017.

**To create the option group**
+ Use one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-option-group \
      --option-group-name ssas-se-2017 \
      --engine-name sqlserver-se \
      --major-engine-version 14.00 \
      --option-group-description "SSAS option group for SQL Server SE 2017"
  ```

  For Windows:

  ```
  aws rds create-option-group ^
      --option-group-name ssas-se-2017 ^
      --engine-name sqlserver-se ^
      --major-engine-version 14.00 ^
      --option-group-description "SSAS option group for SQL Server SE 2017"
  ```

## Adding the SSAS option to the option group
<a name="SSAS.Add"></a>

Next, use the AWS Management Console or the AWS CLI to add the `SSAS` option to the option group.

### Console
<a name="SSAS.Add.Console"></a>

**To add the SSAS option**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose the option group that you just created.

1. Choose **Add option**.

1. Under **Option details**, choose **SSAS** for **Option name**.

1. Under **Option settings**, do the following:

   1. For **Max memory**, enter a value in the range 10–80.

      **Max memory** specifies the upper threshold above which SSAS begins releasing memory more aggressively to make room for running requests and for new high-priority requests. The number is a percentage of the total memory of the DB instance. The allowed values are 10–80, and the default is 45.

   1. For **Mode**, choose the SSAS server mode, **Tabular** or **Multidimensional**.

      If you don't see the **Mode** option setting, it means that Multidimensional mode isn't supported in your AWS Region. For more information, see [Limitations](Appendix.SQLServer.Options.SSAS.md#SSAS.Limitations).

      **Tabular** is the default.

   1. For **Security groups**, choose the VPC security group to associate with the option.
**Note**  
The port for accessing SSAS, 2383, is prepopulated.

1. Under **Scheduling**, choose whether to add the option immediately or at the next maintenance window.

1. Choose **Add option**.

### CLI
<a name="SSAS.Add.CLI"></a>

**To add the SSAS option**

1. Create a JSON file, for example `ssas-option.json`, with the following parameters:
   + `OptionGroupName` – The name of the option group that you created or chose previously (`ssas-se-2017` in the following example).
   + `Port` – The port that you use to access SSAS. The only supported port is 2383.
   + `VpcSecurityGroupMemberships` – Memberships for VPC security groups for your RDS DB instance.
   + `MAX_MEMORY` – The upper threshold above which SSAS should begin releasing memory more aggressively to make room for running requests and for new high-priority requests. The number is a percentage of the total memory of the DB instance. The allowed values are 10–80, and the default is 45.
   + `MODE` – The SSAS server mode, either `Tabular` or `Multidimensional`. `Tabular` is the default.

     If you receive an error that the `MODE` option setting isn't valid, it means that Multidimensional mode isn't supported in your AWS Region. For more information, see [Limitations](Appendix.SQLServer.Options.SSAS.md#SSAS.Limitations).

   The following is an example of a JSON file with SSAS option settings.

   ```
   {
       "OptionGroupName": "ssas-se-2017",
       "OptionsToInclude": [
           {
               "OptionName": "SSAS",
               "Port": 2383,
               "VpcSecurityGroupMemberships": ["sg-0abcdef123"],
               "OptionSettings": [
                   {"Name": "MAX_MEMORY", "Value": "60"},
                   {"Name": "MODE", "Value": "Multidimensional"}
               ]
           }
       ],
       "ApplyImmediately": true
   }
   ```

1. Add the `SSAS` option to the option group.  
**Example**  

   For Linux, macOS, or Unix:

   ```
   aws rds add-option-to-option-group \
       --cli-input-json file://ssas-option.json \
       --apply-immediately
   ```

   For Windows:

   ```
   aws rds add-option-to-option-group ^
       --cli-input-json file://ssas-option.json ^
       --apply-immediately
   ```

## Associating the option group with your DB instance
<a name="SSAS.Apply"></a>

You can use the console or the CLI to associate the option group with your DB instance.

### Console
<a name="SSAS.Apply.Console"></a>

Associate your option group with a new or existing DB instance:
+ For a new DB instance, associate the option group with the DB instance when you launch the instance. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).
+ For an existing DB instance, modify the instance and associate the new option group with it. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).
**Note**  
If you use an existing instance, it must already have an Active Directory domain and AWS Identity and Access Management (IAM) role associated with it. If you create a new instance, specify an existing Active Directory domain and IAM role. For more information, see [Working with Active Directory with RDS for SQL Server](User.SQLServer.ActiveDirectoryWindowsAuth.md).

### CLI
<a name="SSAS.Apply.CLI"></a>

You can associate your option group with a new or existing DB instance.

**Note**  
If you use an existing instance, it must already have an Active Directory domain and IAM role associated with it. If you create a new instance, specify an existing Active Directory domain and IAM role. For more information, see [Working with Active Directory with RDS for SQL Server](User.SQLServer.ActiveDirectoryWindowsAuth.md).

**To create a DB instance that uses the option group**
+ Specify the same DB engine type and major version that you used when creating the option group.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-db-instance \
      --db-instance-identifier myssasinstance \
      --db-instance-class db.m5.2xlarge \
      --engine sqlserver-se \
      --engine-version 14.00.3223.3.v1 \
      --allocated-storage 100 \
      --manage-master-user-password \
      --master-username admin \
      --storage-type gp2 \
      --license-model li \
      --domain-iam-role-name my-directory-iam-role \
      --domain my-domain-id \
      --option-group-name ssas-se-2017
  ```

  For Windows:

  ```
  aws rds create-db-instance ^
      --db-instance-identifier myssasinstance ^
      --db-instance-class db.m5.2xlarge ^
      --engine sqlserver-se ^
      --engine-version 14.00.3223.3.v1 ^
      --allocated-storage 100 ^
      --manage-master-user-password ^
      --master-username admin ^
      --storage-type gp2 ^
      --license-model li ^
      --domain-iam-role-name my-directory-iam-role ^
      --domain my-domain-id ^
      --option-group-name ssas-se-2017
  ```

**To modify a DB instance to associate the option group**
+ Use one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-instance \
      --db-instance-identifier myssasinstance \
      --option-group-name ssas-se-2017 \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds modify-db-instance ^
      --db-instance-identifier myssasinstance ^
      --option-group-name ssas-se-2017 ^
      --apply-immediately
  ```

## Allowing inbound access to your VPC security group
<a name="SSAS.InboundRule"></a>

Create an inbound rule for the specified SSAS listener port in the VPC security group associated with your DB instance. For more information about setting up security groups, see [Provide access to your DB instance in your VPC by creating a security group](CHAP_SettingUp.md#CHAP_SettingUp.SecurityGroup).
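As a sketch, you can also add the inbound rule with the AWS CLI. The security group ID and the CIDR range here are placeholders for your own values:

```
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 2383 \
    --cidr 10.0.0.0/16
```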

## Enabling Amazon S3 integration
<a name="SSAS.EnableS3"></a>

To download model configuration files to your host for deployment, use Amazon S3 integration. For more information, see [Integrating an Amazon RDS for SQL Server DB instance with Amazon S3](User.SQLServer.Options.S3-integration.md). 
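As a sketch, S3 integration is turned on by associating an IAM role that grants access to your bucket with the DB instance. The instance identifier and role ARN here are placeholders:

```
aws rds add-role-to-db-instance \
    --db-instance-identifier myssasinstance \
    --feature-name S3_INTEGRATION \
    --role-arn arn:aws:iam::111122223333:role/my-s3-integration-role
```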

# Deploying SSAS projects on Amazon RDS
<a name="SSAS.Deploy"></a>

On RDS, you can't deploy SSAS projects directly by using SQL Server Management Studio (SSMS). To deploy projects, use an RDS stored procedure.

**Note**  
Using .xmla files for deployment isn't supported.

Before you deploy projects, make sure of the following:
+ Amazon S3 integration is turned on. For more information, see [Integrating an Amazon RDS for SQL Server DB instance with Amazon S3](User.SQLServer.Options.S3-integration.md).
+ The `Processing Option` configuration setting is set to `Do Not Process`. This setting means that no processing happens after deployment.
+ You have both the `myssasproject.asdatabase` and `myssasproject.deploymentoptions` files. They're automatically generated when you build the SSAS project.

**To deploy an SSAS project on RDS**

1. Download the `.asdatabase` (SSAS model) file from your S3 bucket to your DB instance, as shown in the following example. For more information on the download parameters, see [Downloading files from an Amazon S3 bucket to a SQL Server DB instance](Appendix.SQLServer.Options.S3-integration.using.md#Appendix.SQLServer.Options.S3-integration.using.download).

   ```
   exec msdb.dbo.rds_download_from_s3 
   @s3_arn_of_file='arn:aws:s3:::bucket_name/myssasproject.asdatabase', 
   [@rds_file_path='D:\S3\myssasproject.asdatabase'],
   [@overwrite_file=1];
   ```

1. Download the `.deploymentoptions` file from your S3 bucket to your DB instance.

   ```
   exec msdb.dbo.rds_download_from_s3
   @s3_arn_of_file='arn:aws:s3:::bucket_name/myssasproject.deploymentoptions', 
   [@rds_file_path='D:\S3\myssasproject.deploymentoptions'],
   [@overwrite_file=1];
   ```

1. Deploy the project.

   ```
   exec msdb.dbo.rds_msbi_task
   @task_type='SSAS_DEPLOY_PROJECT',
   @file_path='D:\S3\myssasproject.asdatabase';
   ```

# Monitoring the status of a deployment task
<a name="SSAS.Monitor"></a>

To track the status of your deployment (or download) task, call the `rds_fn_task_status` function. It takes two parameters. The first parameter should always be `NULL` because it doesn't apply to SSAS. The second parameter accepts a task ID. 

To see a list of all tasks, set the first parameter to `NULL` and the second parameter to `0`, as shown in the following example.

```
SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,0);
```

To get a specific task, set the first parameter to `NULL` and the second parameter to the task ID, as shown in the following example.

```
SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,42);
```

The `rds_fn_task_status` function returns the following information.


| Output parameter | Description | 
| --- | --- | 
| `task_id` | The ID of the task. | 
| `task_type` | For SSAS, tasks can have the following task types: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SSAS.Monitor.html)  | 
| `database_name` | Not applicable to SSAS tasks. | 
| `% complete` | The progress of the task as a percentage. | 
| `duration (mins)` | The amount of time spent on the task, in minutes. | 
| `lifecycle` |  The status of the task. Possible statuses are the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SSAS.Monitor.html)  | 
| `task_info` | Additional information about the task. If an error occurs during processing, this column contains information about the error. For more information, see [Troubleshooting SSAS issues](SSAS.Trouble.md). | 
| `last_updated` | The date and time that the task status was last updated. | 
| `created_at` | The date and time that the task was created. | 
| `S3_object_arn` |  Not applicable to SSAS tasks.  | 
| `overwrite_S3_backup_file` | Not applicable to SSAS tasks. | 
| `KMS_master_key_arn` |  Not applicable to SSAS tasks.  | 
| `filepath` |  Not applicable to SSAS tasks.  | 
| `overwrite_file` |  Not applicable to SSAS tasks.  | 
| `task_metadata` | Metadata associated with the SSAS task. | 

# Using SSAS on Amazon RDS
<a name="SSAS.Use"></a>

After deploying the SSAS project, you can process the OLAP database directly in SSMS.

**To use SSAS on RDS**

1. In SSMS, connect to SSAS using the user name and password for the Active Directory domain.

1. Expand **Databases**. The newly deployed SSAS database appears.

1. Locate the connection string, and update the user name and password to give access to the source SQL database. Doing this is required for processing SSAS objects.

   1. For Tabular mode, do the following:

      1. Expand the **Connections** tab.

      1. Open the context (right-click) menu for the connection object, and then choose **Properties**.

      1. Update the user name and password in the connection string.

   1. For Multidimensional mode, do the following:

      1. Expand the **Data Sources** tab.

      1. Open the context (right-click) menu for the data source object, and then choose **Properties**.

      1. Update the user name and password in the connection string.

1. Open the context (right-click) menu for the SSAS database that you created and choose **Process Database**.

   Depending on the size of the input data, the processing operation might take several minutes to complete.

**Topics**
+ [Setting up a Windows-authenticated user for SSAS](#SSAS.Use.Auth)
+ [Adding a domain user as a database administrator](#SSAS.Admin)
+ [Creating an SSAS proxy](#SSAS.Use.Proxy)
+ [Scheduling SSAS database processing using SQL Server Agent](#SSAS.Use.Schedule)
+ [Revoking SSAS access from the proxy](#SSAS.Use.Revoke)

## Setting up a Windows-authenticated user for SSAS
<a name="SSAS.Use.Auth"></a>

The main administrator user (sometimes called the master user) can use the following code example to set up a Windows-authenticated login and grant the required procedure permissions. Doing this grants permissions to the domain user to run SSAS customer tasks, use S3 file transfer procedures, create credentials, and work with the SQL Server Agent proxy. For more information, see [Credentials (database engine)](https://docs.microsoft.com/en-us/sql/relational-databases/security/authentication-access/credentials-database-engine?view=sql-server-ver15) and [Create a SQL Server Agent proxy](https://docs.microsoft.com/en-us/sql/ssms/agent/create-a-sql-server-agent-proxy?view=sql-server-ver15) in the Microsoft documentation.

You can grant some or all of the following permissions as needed to Windows-authenticated users.

**Example**  

```
-- Create a server-level domain user login, if it doesn't already exist
USE [master]
GO
CREATE LOGIN [mydomain\user_name] FROM WINDOWS
GO

-- Create domain user, if it doesn't already exist
USE [msdb]
GO
CREATE USER [mydomain\user_name] FOR LOGIN [mydomain\user_name]
GO

-- Grant necessary privileges to the domain user
USE [master]
GO
GRANT ALTER ANY CREDENTIAL TO [mydomain\user_name]
GO

USE [msdb]
GO
GRANT EXEC ON msdb.dbo.rds_msbi_task TO [mydomain\user_name] with grant option
GRANT SELECT ON msdb.dbo.rds_fn_task_status TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_task_status TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_cancel_task TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_download_from_s3 TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_upload_to_s3 TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_delete_from_filesystem TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_gather_file_details TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_add_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_update_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_grant_login_to_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_revoke_login_from_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_delete_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_enum_login_for_proxy to [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_enum_proxy_for_subsystem TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_sqlagent_proxy TO [mydomain\user_name] with grant option
ALTER ROLE [SQLAgentUserRole] ADD MEMBER [mydomain\user_name]
GO
```

## Adding a domain user as a database administrator
<a name="SSAS.Admin"></a>

You can add a domain user as an SSAS database administrator in the following ways:
+ A database administrator can use SSMS to create a role with `admin` privileges, then add users to that role.
+ You can use the following stored procedure.

  ```
  exec msdb.dbo.rds_msbi_task
  @task_type='SSAS_ADD_DB_ADMIN_MEMBER',
  @database_name='myssasdb',
  @ssas_role_name='exampleRole',
  @ssas_role_member='domain_name\domain_user_name';
  ```

  The following parameters are required:
  + `@task_type` – The type of the MSBI task, in this case `SSAS_ADD_DB_ADMIN_MEMBER`.
  + `@database_name` – The name of the SSAS database to which you're granting administrator privileges.
  + `@ssas_role_name` – The SSAS database administrator role name. If the role doesn't already exist, it's created.
  + `@ssas_role_member` – The SSAS database user that you're adding to the administrator role.

## Creating an SSAS proxy
<a name="SSAS.Use.Proxy"></a>

To be able to schedule SSAS database processing using SQL Server Agent, create an SSAS credential and an SSAS proxy. Run these procedures as a Windows-authenticated user.

**To create the SSAS credential**
+ Create the credential for the proxy. To do this, you can use SSMS or the following SQL statement.

  ```
  USE [master]
  GO
  CREATE CREDENTIAL [SSAS_Credential] WITH IDENTITY = N'mydomain\user_name', SECRET = N'mysecret'
  GO
  ```
**Note**  
`IDENTITY` must be a domain-authenticated login. Replace `mysecret` with the password for the domain-authenticated login.

**To create the SSAS proxy**

1. Use the following SQL statement to create the proxy.

   ```
   USE [msdb]
   GO
   EXEC msdb.dbo.sp_add_proxy @proxy_name=N'SSAS_Proxy',@credential_name=N'SSAS_Credential',@description=N''
   GO
   ```

1. Use the following SQL statement to grant access to the proxy to other users.

   ```
   USE [msdb]
   GO
   EXEC msdb.dbo.sp_grant_login_to_proxy @proxy_name=N'SSAS_Proxy',@login_name=N'mydomain\user_name'
   GO
   ```

1. Use the following SQL statement to give the SSAS subsystem access to the proxy.

   ```
   USE [msdb]
   GO
   EXEC msdb.dbo.rds_sqlagent_proxy @task_type='GRANT_SUBSYSTEM_ACCESS',@proxy_name='SSAS_Proxy',@proxy_subsystem='SSAS'
   GO
   ```

**To view the proxy and grants on the proxy**

1. Use the following SQL statement to view the grantees of the proxy.

   ```
   USE [msdb]
   GO
   EXEC sp_help_proxy
   GO
   ```

1. Use the following SQL statement to view the subsystem grants.

   ```
   USE [msdb]
   GO
   EXEC msdb.dbo.sp_enum_proxy_for_subsystem
   GO
   ```

## Scheduling SSAS database processing using SQL Server Agent
<a name="SSAS.Use.Schedule"></a>

After you create the credential and proxy and grant SSAS access to the proxy, you can create a SQL Server Agent job to schedule SSAS database processing.

**To schedule SSAS database processing**
+ Use SSMS or T-SQL for creating the SQL Server Agent job. The following example uses T-SQL. You can further configure its job schedule through SSMS or T-SQL.
  + The `@command` parameter outlines the XML for Analysis (XMLA) command to be run by the SQL Server Agent job. This example configures SSAS Multidimensional database processing.
  + The `@server` parameter outlines the target SSAS server name of the SQL Server Agent job.

    To call the SSAS service within the same RDS DB instance where the SQL Server Agent job resides, use `localhost:2383`.

    To call the SSAS service from outside the RDS DB instance, use the RDS endpoint. You can also use the Kerberos Active Directory (AD) endpoint (`your-DB-instance-name.your-AD-domain-name`) if the RDS DB instances are joined to the same domain. For connections from outside the DB instance, make sure to properly configure the VPC security group associated with the RDS DB instance for a secure connection.

  You can further edit the query to support various XMLA operations, either by modifying the T-SQL query directly or by using the SSMS UI after you create the SQL Server Agent job.

  ```
  USE [msdb]
  GO
  DECLARE @jobId BINARY(16)
  EXEC msdb.dbo.sp_add_job @job_name=N'SSAS_Job', 
      @enabled=1, 
      @notify_level_eventlog=0, 
      @notify_level_email=0, 
      @notify_level_netsend=0, 
      @notify_level_page=0, 
      @delete_level=0, 
      @category_name=N'[Uncategorized (Local)]', 
      @job_id = @jobId OUTPUT
  GO
  EXEC msdb.dbo.sp_add_jobserver 
      @job_name=N'SSAS_Job', 
      @server_name = N'(local)'
  GO
  EXEC msdb.dbo.sp_add_jobstep @job_name=N'SSAS_Job', @step_name=N'Process_SSAS_Object', 
      @step_id=1, 
      @cmdexec_success_code=0, 
      @on_success_action=1, 
      @on_success_step_id=0, 
      @on_fail_action=2, 
      @on_fail_step_id=0, 
      @retry_attempts=0, 
      @retry_interval=0, 
      @os_run_priority=0, @subsystem=N'ANALYSISCOMMAND', 
      @command=N'<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <Parallel>
              <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
                  xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" 
                  xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" 
                  xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200" xmlns:ddl300="http://schemas.microsoft.com/analysisservices/2011/engine/300" 
                  xmlns:ddl300_300="http://schemas.microsoft.com/analysisservices/2011/engine/300/300" xmlns:ddl400="http://schemas.microsoft.com/analysisservices/2012/engine/400" 
                  xmlns:ddl400_400="http://schemas.microsoft.com/analysisservices/2012/engine/400/400" xmlns:ddl500="http://schemas.microsoft.com/analysisservices/2013/engine/500" 
                  xmlns:ddl500_500="http://schemas.microsoft.com/analysisservices/2013/engine/500/500">
                  <Object>
                      <DatabaseID>Your_SSAS_Database_ID</DatabaseID>
                  </Object>
                  <Type>ProcessFull</Type>
                  <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
              </Process>
          </Parallel>
      </Batch>', 
      @server=N'localhost:2383', 
      @database_name=N'master', 
      @flags=0, 
      @proxy_name=N'SSAS_Proxy'
  GO
  ```

## Revoking SSAS access from the proxy
<a name="SSAS.Use.Revoke"></a>

You can revoke access to the SSAS subsystem and delete the SSAS proxy using the following stored procedures.

**To revoke access and delete the proxy**

1. Revoke subsystem access.

   ```
   USE [msdb]
   GO
   EXEC msdb.dbo.rds_sqlagent_proxy @task_type='REVOKE_SUBSYSTEM_ACCESS',@proxy_name='SSAS_Proxy',@proxy_subsystem='SSAS'
   GO
   ```

1. Revoke the grants on the proxy.

   ```
   USE [msdb]
   GO
   EXEC msdb.dbo.sp_revoke_login_from_proxy @proxy_name=N'SSAS_Proxy',@name=N'mydomain\user_name'
   GO
   ```

1. Delete the proxy.

   ```
   USE [msdb]
   GO
   EXEC dbo.sp_delete_proxy @proxy_name = N'SSAS_Proxy'
   GO
   ```

# Backing up an SSAS database
<a name="SSAS.Backup"></a>

You can create SSAS database backup files only in the `D:\S3` folder on the DB instance. To move the backup files to your S3 bucket, use Amazon S3 integration.

You can back up an SSAS database as follows:
+ A domain user with the `admin` role for a particular database can use SSMS to back up the database to the `D:\S3` folder.

  For more information, see [Adding a domain user as a database administrator](SSAS.Use.md#SSAS.Admin).
+ You can use the following stored procedure. This stored procedure doesn't support encryption.

  ```
  exec msdb.dbo.rds_msbi_task
  @task_type='SSAS_BACKUP_DB',
  @database_name='myssasdb',
  @file_path='D:\S3\ssas_db_backup.abf',
  [@ssas_apply_compression=1],
  [@ssas_overwrite_file=1];
  ```

  The following parameters are required:
  + `@task_type` – The type of the MSBI task, in this case `SSAS_BACKUP_DB`.
  + `@database_name` – The name of the SSAS database that you're backing up.
  + `@file_path` – The path for the SSAS backup file. The `.abf` extension is required.

  The following parameters are optional:
  + `@ssas_apply_compression` – Whether to apply SSAS backup compression. Valid values are 1 (Yes) and 0 (No).
  + `@ssas_overwrite_file` – Whether to overwrite the SSAS backup file. Valid values are 1 (Yes) and 0 (No).
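For example, a concrete call that backs up the database from the earlier examples, applies compression, and overwrites any existing backup file looks like this:

```
exec msdb.dbo.rds_msbi_task
@task_type='SSAS_BACKUP_DB',
@database_name='myssasdb',
@file_path='D:\S3\ssas_db_backup.abf',
@ssas_apply_compression=1,
@ssas_overwrite_file=1;
```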

# Restoring an SSAS database
<a name="SSAS.Restore"></a>

Use the following stored procedure to restore an SSAS database from a backup. 

You can't restore a database if there is an existing SSAS database with the same name. The stored procedure for restoring doesn't support encrypted backup files.

```
exec msdb.dbo.rds_msbi_task
@task_type='SSAS_RESTORE_DB',
@database_name='mynewssasdb',
@file_path='D:\S3\ssas_db_backup.abf';
```

The following parameters are required:
+ `@task_type` – The type of the MSBI task, in this case `SSAS_RESTORE_DB`.
+ `@database_name` – The name of the new SSAS database that you're restoring to.
+ `@file_path` – The path to the SSAS backup file.

## Restoring a DB instance to a specified time
<a name="SSAS.PITR"></a>

Point-in-time recovery (PITR) doesn't apply to SSAS databases. If you do PITR, only the SSAS data in the last snapshot before the requested time is available on the restored instance.

**To have up-to-date SSAS databases on a restored DB instance**

1. Back up your SSAS databases to the `D:\S3` folder on the source instance.

1. Transfer the backup files to the S3 bucket.

1. Transfer the backup files from the S3 bucket to the `D:\S3` folder on the restored instance.

1. Run the stored procedure to restore the SSAS databases onto the restored instance.

   You can also reprocess the SSAS project to restore the databases.
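Steps 2 and 3 can use the S3 transfer stored procedures. As a sketch, the following call uploads a backup file from the source instance to your bucket (the bucket name is a placeholder; run the matching `rds_download_from_s3` call on the restored instance afterward):

```
exec msdb.dbo.rds_upload_to_s3
@rds_file_path='D:\S3\ssas_db_backup.abf',
@s3_arn_of_file='arn:aws:s3:::amzn-s3-demo-bucket/ssas_db_backup.abf',
@overwrite_file=1;
```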

# Changing the SSAS mode
<a name="SSAS.ChangeMode"></a>

You can change the mode in which SSAS runs, either Tabular or Multidimensional. To change the mode, use the AWS Management Console or the AWS CLI to modify the options settings in the SSAS option.

**Important**  
You can only use one SSAS mode at a time. Make sure to delete all of the SSAS databases before changing the mode; otherwise, you receive an error.

## Console
<a name="SSAS.ChangeMode.CON"></a>

The following Amazon RDS console procedure changes the SSAS mode to Tabular and sets the `MAX_MEMORY` parameter to 70 percent.

**To modify the SSAS option**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose the option group with the `SSAS` option that you want to modify (`ssas-se-2017` in the previous examples).

1. Choose **Modify option**.

1. Change the option settings:

   1. For **Max memory**, enter **70**.

   1. For **Mode**, choose **Tabular**.

1. Choose **Modify option**.

## AWS CLI
<a name="SSAS.ChangeMode.CLI"></a>

The following AWS CLI example changes the SSAS mode to Tabular and sets the `MAX_MEMORY` parameter to 70 percent.

For the CLI command to work, make sure to include all of the required parameters, even if you're not modifying them.

**To modify the SSAS option**
+ Use one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds add-option-to-option-group \
      --option-group-name ssas-se-2017 \
      --options "OptionName=SSAS,VpcSecurityGroupMemberships=sg-12345e67,OptionSettings=[{Name=MAX_MEMORY,Value=70},{Name=MODE,Value=Tabular}]" \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds add-option-to-option-group ^
      --option-group-name ssas-se-2017 ^
      --options OptionName=SSAS,VpcSecurityGroupMemberships=sg-12345e67,OptionSettings=[{Name=MAX_MEMORY,Value=70},{Name=MODE,Value=Tabular}] ^
      --apply-immediately
  ```

# Turning off SSAS
<a name="SSAS.Disable"></a>

To turn off SSAS, remove the `SSAS` option from its option group.

**Important**  
Before you remove the `SSAS` option, delete your SSAS databases.  
We highly recommend that you back up your SSAS databases before deleting them and removing the `SSAS` option.

## Console
<a name="SSAS.Disable.Console"></a>

**To remove the SSAS option from its option group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose the option group with the `SSAS` option that you want to remove (`ssas-se-2017` in the previous examples).

1. Choose **Delete option**.

1. Under **Deletion options**, choose **SSAS** for **Options to delete**.

1. Under **Apply immediately**, choose **Yes** to delete the option immediately, or **No** to delete it at the next maintenance window.

1. Choose **Delete**.

## AWS CLI
<a name="SSAS.Disable.CLI"></a>

**To remove the SSAS option from its option group**
+ Use one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds remove-option-from-option-group \
      --option-group-name ssas-se-2017 \
      --options SSAS \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds remove-option-from-option-group ^
      --option-group-name ssas-se-2017 ^
      --options SSAS ^
      --apply-immediately
  ```

# Troubleshooting SSAS issues
<a name="SSAS.Trouble"></a>

You might encounter the following issues when using SSAS.


| Issue | Type | Troubleshooting suggestions | 
| --- | --- | --- | 
| Unable to configure the SSAS option. The requested SSAS mode is *new_mode*, but the current DB instance has *number* *current_mode* databases. Delete the existing databases before switching to *new_mode* mode. To regain access to *current_mode* mode for database deletion, either update the current DB option group, or attach a new option group with %s as the MODE option setting value for the SSAS option. | RDS event | You can't change the SSAS mode if you still have SSAS databases that use the current mode. Delete the SSAS databases, then try again. | 
| Unable to remove the SSAS option because there are *number* existing *mode* databases. The SSAS option can't be removed until all SSAS databases are deleted. Add the SSAS option again, delete all SSAS databases, and try again. | RDS event | You can't turn off SSAS if you still have SSAS databases. Delete the SSAS databases, then try again. | 
| The SSAS option isn't enabled or is in the process of being enabled. Try again later. | RDS stored procedure | You can't run SSAS stored procedures when the option is turned off, or when it's being turned on. | 
| The SSAS option is configured incorrectly. Make sure that the option group membership status is "in-sync", and review the RDS event logs for relevant SSAS configuration error messages. Following these investigations, try again. If errors continue to occur, contact AWS Support. | RDS stored procedure |  You can't run SSAS stored procedures when your option group membership isn't in the `in-sync` status. This puts the SSAS option in an incorrect configuration state. If your option group membership status changes to `failed` due to SSAS option modification, there are two possible reasons:  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SSAS.Trouble.html) Reconfigure the SSAS option, because RDS allows only one SSAS mode at a time, and doesn't support SSAS option removal with SSAS databases present. Check the RDS event logs for configuration errors for your SSAS instance, and resolve the issues accordingly.  | 
| Deployment failed. The change can only be deployed on a server running in *deployment_file_mode* mode. The current server mode is *current_mode*. | RDS stored procedure |  You can't deploy a Tabular database to a Multidimensional server, or a Multidimensional database to a Tabular server. Make sure that you're using files with the correct mode, and verify that the `MODE` option setting is set to the appropriate value.  | 
| The restore failed. The backup file can only be restored on a server running in *restore_file_mode* mode. The current server mode is *current_mode*. | RDS stored procedure |  You can't restore a Tabular database to a Multidimensional server, or a Multidimensional database to a Tabular server. Make sure that you're using files with the correct mode, and verify that the `MODE` option setting is set to the appropriate value.  | 
| The restore failed. The backup file and the RDS DB instance versions are incompatible. | RDS stored procedure |  You can't restore an SSAS database with a version incompatible to the SQL Server instance version. For more information, see [Compatibility levels for tabular models](https://docs.microsoft.com/en-us/analysis-services/tabular-models/compatibility-level-for-tabular-models-in-analysis-services) and [Compatibility level of a multidimensional database](https://docs.microsoft.com/en-us/analysis-services/multidimensional-models/compatibility-level-of-a-multidimensional-database-analysis-services) in the Microsoft documentation.  | 
| The restore failed. The backup file specified in the restore operation is damaged or is not an SSAS backup file. Make sure that @rds_file_path is correctly formatted. | RDS stored procedure |  You can't restore an SSAS database with a damaged file. Make sure that the file isn't damaged or corrupted. This error can also be raised when `@rds_file_path` isn't correctly formatted (for example, it has double backslashes as in `D:\S3\\incorrect_format.abf`).  | 
| The restore failed. The restored database name can't contain any reserved words or invalid characters: . , ; ' ` : / \ \* \| ? " & % $ ! + = ( ) [ ] { } < >, or be longer than 100 characters. | RDS stored procedure |  The restored database name can't contain any reserved words or characters that aren't valid, or be longer than 100 characters. For SSAS object naming conventions, see [Object naming rules](https://docs.microsoft.com/en-us/analysis-services/multidimensional-models/olap-physical/object-naming-rules-analysis-services) in the Microsoft documentation.  | 
| An invalid role name was provided. The role name can't contain any reserved strings. | RDS stored procedure |  The role name can't contain any reserved strings. For SSAS object naming conventions, see [Object naming rules](https://docs.microsoft.com/en-us/analysis-services/multidimensional-models/olap-physical/object-naming-rules-analysis-services) in the Microsoft documentation.  | 
| An invalid role name was provided. The role name can't contain any of the following reserved characters: . , ; ' ` : / \ \* \| ? " & % $ ! + = ( ) [ ] { } < > | RDS stored procedure |  The role name can't contain any reserved characters. For SSAS object naming conventions, see [Object naming rules](https://docs.microsoft.com/en-us/analysis-services/multidimensional-models/olap-physical/object-naming-rules-analysis-services) in the Microsoft documentation.  | 

# Support for SQL Server Integration Services in Amazon RDS for SQL Server
<a name="Appendix.SQLServer.Options.SSIS"></a>

Microsoft SQL Server Integration Services (SSIS) is a component that you can use to perform a broad range of data migration tasks. SSIS is a platform for data integration and workflow applications. It features a data warehousing tool used for data extraction, transformation, and loading (ETL). You can also use this tool to automate maintenance of SQL Server databases and updates to multidimensional cube data.

SSIS projects are organized into packages saved as XML-based .dtsx files. Packages can contain control flows and data flows. You use data flows to represent ETL operations. After deployment, packages are stored in SQL Server in the SSISDB database. SSISDB is an online transaction processing (OLTP) database in full recovery mode.

Amazon RDS for SQL Server supports running SSIS directly on an RDS DB instance. You can enable SSIS on an existing or new DB instance. SSIS is installed on the same DB instance as your database engine.

RDS supports SSIS for SQL Server Standard and Enterprise Editions on the following versions:
+ SQL Server 2022, all versions
+ SQL Server 2019, version 15.00.4043.16.v1 and higher
+ SQL Server 2017, version 14.00.3223.3.v1 and higher
+ SQL Server 2016, version 13.00.5426.0.v1 and higher

**Contents**
+ [Limitations and recommendations](#SSIS.Limitations)
+ [Enabling SSIS](#SSIS.Enabling)
  + [Creating the option group for SSIS](#SSIS.OptionGroup)
  + [Adding the SSIS option to the option group](#SSIS.Add)
  + [Creating the parameter group for SSIS](#SSIS.CreateParamGroup)
  + [Modifying the parameter for SSIS](#SSIS.ModifyParam)
  + [Associating the option group and parameter group with your DB instance](#SSIS.Apply)
  + [Enabling S3 integration](#SSIS.EnableS3)
+ [Administrative permissions on SSISDB](SSIS.Permissions.md)
  + [Setting up a Windows-authenticated user for SSIS](SSIS.Permissions.md#SSIS.Use.Auth)
+ [Deploying an SSIS project](SSIS.Deploy.md)
+ [Monitoring the status of a deployment task](SSIS.Monitor.md)
+ [Using SSIS](SSIS.Use.md)
  + [Setting database connection managers for SSIS projects](SSIS.Use.md#SSIS.Use.ConnMgrs)
  + [Creating an SSIS proxy](SSIS.Use.md#SSIS.Use.Proxy)
  + [Scheduling an SSIS package using SQL Server Agent](SSIS.Use.md#SSIS.Use.Schedule)
  + [Revoking SSIS access from the proxy](SSIS.Use.md#SSIS.Use.Revoke)
+ [Disable and drop SSIS database](SSIS.DisableDrop.md)
  + [Disabling SSIS](SSIS.DisableDrop.md#SSIS.Disable)
  + [Dropping the SSISDB database](SSIS.DisableDrop.md#SSIS.Drop)

## Limitations and recommendations
<a name="SSIS.Limitations"></a>

The following limitations and recommendations apply to running SSIS on RDS for SQL Server:
+ The DB instance must have an associated parameter group with the `clr enabled` parameter set to 1. For more information, see [Modifying the parameter for SSIS](#SSIS.ModifyParam).
**Note**  
If you enable the `clr enabled` parameter on SQL Server 2017 or 2019, you can't use the common language runtime (CLR) on your DB instance. For more information, see [Features not supported and features with limited support](SQLServer.Concepts.General.FeatureNonSupport.md).
+ The following control flow tasks are supported:
  + Analysis Services Execute DDL Task
  + Analysis Services Processing Task
  + Bulk Insert Task
  + Check Database Integrity Task
  + Data Flow Task
  + Data Mining Query Task
  + Data Profiling Task
  + Execute Package Task
  + Execute SQL Server Agent Job Task
  + Execute SQL Task
  + Execute T-SQL Statement Task
  + Notify Operator Task
  + Rebuild Index Task
  + Reorganize Index Task
  + Shrink Database Task
  + Transfer Database Task
  + Transfer Jobs Task
  + Transfer Logins Task
  + Transfer SQL Server Objects Task
  + Update Statistics Task
+ Only project deployment is supported.
+ Running SSIS packages by using SQL Server Agent is supported.
+ SSIS log records can be inserted only into user-created databases.
+ Use only the `D:\S3` folder for working with files. Files placed in any other directory are deleted. Be aware of a few other file location details:
  + Place SSIS project input and output files in the `D:\S3` folder.
  + For the Data Flow Task, change the location for `BLOBTempStoragePath` and `BufferTempStoragePath` to a file inside the `D:\S3` folder. The file path must start with `D:\S3\`.
  + Ensure that all parameters, variables, and expressions used for file connections point to the `D:\S3` folder.
  + On Multi-AZ instances, files created by SSIS in the `D:\S3` folder are deleted after a failover. For more information, see [Multi-AZ limitations for S3 integration](User.SQLServer.Options.S3-integration.md#S3-MAZ).
  + Upload the files created by SSIS in the `D:\S3` folder to your Amazon S3 bucket to make them durable.
+ Import Column and Export Column transformations and the Script component on the Data Flow Task aren't supported.
+ You can't enable dump on running SSIS packages, and you can't add data taps on SSIS packages.
+ The SSIS Scale Out feature isn't supported.
+ You can't deploy projects directly. We provide RDS stored procedures to do this. For more information, see [Deploying an SSIS project](SSIS.Deploy.md).
+ Build SSIS project (.ispac) files with the `DoNotSavePasswords` protection mode for deploying on RDS.
+ SSIS isn't supported on Always On instances with read replicas.
+ You can't back up the SSISDB database that is associated with the `SSIS` option.
+ Importing and restoring the SSISDB database from other instances of SSIS isn't supported.
+ You can connect to other SQL Server DB instances or to an Oracle data source. Connecting to other database engines, such as MySQL or PostgreSQL, isn't supported for SSIS on RDS for SQL Server. For more information on connecting to an Oracle data source, see [Linked Servers with Oracle OLEDB](Appendix.SQLServer.Options.LinkedServers_Oracle_OLEDB.md). 
+ SSIS doesn't support a domain-joined instance with an outgoing trust to an on-premises domain. When using an outgoing trust, run the SSIS job from an account in the local AWS domain.
+ Running file-system-based packages isn't supported.
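
As a follow-up to the file-location guidance above, the following hedged sketch copies an SSIS output file from `D:\S3` to your bucket with the S3 integration procedure so that it survives a failover (the file and bucket names are placeholders):

```
exec msdb.dbo.rds_upload_to_s3
@rds_file_path='D:\S3\ssis_output.csv',
@s3_arn_of_file='arn:aws:s3:::bucket_name/ssis_output.csv',
@overwrite_file=1;
```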

## Enabling SSIS
<a name="SSIS.Enabling"></a>

You enable SSIS by adding the SSIS option to your DB instance. Use the following process:

1. Create a new option group, or choose an existing option group.

1. Add the `SSIS` option to the option group.

1. Create a new parameter group, or choose an existing parameter group.

1. Modify the parameter group to set the `clr enabled` parameter to 1.

1. Associate the option group and parameter group with the DB instance.

1. Enable Amazon S3 integration.

**Note**  
If a database with the name SSISDB or a reserved SSIS login already exists on the DB instance, you can't enable SSIS on the instance.
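
Before adding the option, you can check whether a conflicting database already exists on the instance:

```
SELECT name FROM master.sys.databases WHERE name = 'SSISDB';
```

If the query returns a row, resolve the conflict before enabling SSIS.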

### Creating the option group for SSIS
<a name="SSIS.OptionGroup"></a>

To work with SSIS, create an option group or modify an option group that corresponds to the SQL Server edition and version of the DB instance that you plan to use. To do this, use the AWS Management Console or the AWS CLI.

#### Console
<a name="SSIS.OptionGroup.Console"></a>

The following procedure creates an option group for SQL Server Standard Edition 2016.

**To create the option group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose **Create group**.

1. In the **Create option group** window, do the following:

   1. For **Name**, enter a name for the option group that is unique within your AWS account, such as **ssis-se-2016**. The name can contain only letters, digits, and hyphens.

   1. For **Description**, enter a brief description of the option group, such as **SSIS option group for SQL Server SE 2016**. The description is used for display purposes. 

   1. For **Engine**, choose **sqlserver-se**.

   1. For **Major engine version**, choose **13.00**.

1. Choose **Create**.

#### CLI
<a name="SSIS.OptionGroup.CLI"></a>

The following procedure creates an option group for SQL Server Standard Edition 2016.

**To create the option group**
+ Run one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-option-group \
      --option-group-name ssis-se-2016 \
      --engine-name sqlserver-se \
      --major-engine-version 13.00 \
      --option-group-description "SSIS option group for SQL Server SE 2016"
  ```

  For Windows:

  ```
  aws rds create-option-group ^
      --option-group-name ssis-se-2016 ^
      --engine-name sqlserver-se ^
      --major-engine-version 13.00 ^
      --option-group-description "SSIS option group for SQL Server SE 2016"
  ```

### Adding the SSIS option to the option group
<a name="SSIS.Add"></a>

Next, use the AWS Management Console or the AWS CLI to add the `SSIS` option to your option group.

#### Console
<a name="SSIS.Add.Console"></a>

**To add the SSIS option**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose the option group that you just created, **ssis-se-2016** in this example.

1. Choose **Add option**.

1. Under **Option details**, choose **SSIS** for **Option name**.

1. Under **Scheduling**, choose whether to add the option immediately or at the next maintenance window.

1. Choose **Add option**.

#### CLI
<a name="SSIS.Add.CLI"></a>

**To add the SSIS option**
+ Add the `SSIS` option to the option group.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds add-option-to-option-group \
      --option-group-name ssis-se-2016 \
      --options OptionName=SSIS \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds add-option-to-option-group ^
      --option-group-name ssis-se-2016 ^
      --options OptionName=SSIS ^
      --apply-immediately
  ```

### Creating the parameter group for SSIS
<a name="SSIS.CreateParamGroup"></a>

Create or modify a parameter group for the `clr enabled` parameter that corresponds to the SQL Server edition and version of the DB instance that you plan to use for SSIS.

#### Console
<a name="SSIS.CreateParamGroup.Console"></a>

The following procedure creates a parameter group for SQL Server Standard Edition 2016.

**To create the parameter group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Parameter groups**.

1. Choose **Create parameter group**.

1. In the **Create parameter group** pane, do the following:

   1. For **Parameter group family**, choose **sqlserver-se-13.0**.

   1. For **Group name**, enter an identifier for the parameter group, such as **ssis-sqlserver-se-13**.

   1. For **Description**, enter **clr enabled parameter group**.

1. Choose **Create**.

#### CLI
<a name="SSIS.CreateParamGroup.CLI"></a>

The following procedure creates a parameter group for SQL Server Standard Edition 2016.

**To create the parameter group**
+ Run one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-db-parameter-group \
      --db-parameter-group-name ssis-sqlserver-se-13 \
      --db-parameter-group-family "sqlserver-se-13.0" \
      --description "clr enabled parameter group"
  ```

  For Windows:

  ```
  aws rds create-db-parameter-group ^
      --db-parameter-group-name ssis-sqlserver-se-13 ^
      --db-parameter-group-family "sqlserver-se-13.0" ^
      --description "clr enabled parameter group"
  ```

### Modifying the parameter for SSIS
<a name="SSIS.ModifyParam"></a>

Modify the `clr enabled` parameter in the parameter group that corresponds to the SQL Server edition and version of your DB instance. For SSIS, set the `clr enabled` parameter to 1.

#### Console
<a name="SSIS.ModifyParam.Console"></a>

The following procedure modifies the parameter group that you created for SQL Server Standard Edition 2016.

**To modify the parameter group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Parameter groups**.

1. Choose the parameter group, such as **ssis-sqlserver-se-13**.

1. Under **Parameters**, filter the parameter list for **clr**.

1. Choose **clr enabled**.

1. Choose **Edit parameters**.

1. From **Values**, choose **1**.

1. Choose **Save changes**.

#### CLI
<a name="SSIS.ModifyParam.CLI"></a>

The following procedure modifies the parameter group that you created for SQL Server Standard Edition 2016.

**To modify the parameter group**
+ Run one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-parameter-group \
      --db-parameter-group-name ssis-sqlserver-se-13 \
      --parameters "ParameterName='clr enabled',ParameterValue=1,ApplyMethod=immediate"
  ```

  For Windows:

  ```
  aws rds modify-db-parameter-group ^
      --db-parameter-group-name ssis-sqlserver-se-13 ^
      --parameters "ParameterName='clr enabled',ParameterValue=1,ApplyMethod=immediate"
  ```

### Associating the option group and parameter group with your DB instance
<a name="SSIS.Apply"></a>

To associate the SSIS option group and parameter group with your DB instance, use the AWS Management Console or the AWS CLI.

**Note**  
If you use an existing instance, it must already have an Active Directory domain and AWS Identity and Access Management (IAM) role associated with it. If you create a new instance, specify an existing Active Directory domain and IAM role. For more information, see [Working with Active Directory with RDS for SQL Server](User.SQLServer.ActiveDirectoryWindowsAuth.md).

#### Console
<a name="SSIS.Apply.Console"></a>

To finish enabling SSIS, associate your SSIS option group and parameter group with a new or existing DB instance:
+ For a new DB instance, associate them when you launch the instance. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).
+ For an existing DB instance, associate them by modifying the instance. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

#### CLI
<a name="SSIS.Apply.CLI"></a>

You can associate the SSIS option group and parameter group with a new or existing DB instance.

**To create an instance with the SSIS option group and parameter group**
+ Specify the same DB engine type and major version as you used when creating the option group.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-db-instance \
      --db-instance-identifier myssisinstance \
      --db-instance-class db.m5.2xlarge \
      --engine sqlserver-se \
      --engine-version 13.00.5426.0.v1 \
      --allocated-storage 100 \
      --manage-master-user-password \
      --master-username admin \
      --storage-type gp2 \
      --license-model li \
      --domain-iam-role-name my-directory-iam-role \
      --domain my-domain-id \
      --option-group-name ssis-se-2016 \
      --db-parameter-group-name ssis-sqlserver-se-13
  ```

  For Windows:

  ```
  aws rds create-db-instance ^
      --db-instance-identifier myssisinstance ^
      --db-instance-class db.m5.2xlarge ^
      --engine sqlserver-se ^
      --engine-version 13.00.5426.0.v1 ^
      --allocated-storage 100 ^
      --manage-master-user-password ^
      --master-username admin ^
      --storage-type gp2 ^
      --license-model li ^
      --domain-iam-role-name my-directory-iam-role ^
      --domain my-domain-id ^
      --option-group-name ssis-se-2016 ^
      --db-parameter-group-name ssis-sqlserver-se-13
  ```

**To modify an instance and associate the SSIS option group and parameter group**
+ Run one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-instance \
      --db-instance-identifier myssisinstance \
      --option-group-name ssis-se-2016 \
      --db-parameter-group-name ssis-sqlserver-se-13 \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds modify-db-instance ^
      --db-instance-identifier myssisinstance ^
      --option-group-name ssis-se-2016 ^
      --db-parameter-group-name ssis-sqlserver-se-13 ^
      --apply-immediately
  ```

### Enabling S3 integration
<a name="SSIS.EnableS3"></a>

To download SSIS project (.ispac) files to your host for deployment, use S3 file integration. For more information, see [Integrating an Amazon RDS for SQL Server DB instance with Amazon S3](User.SQLServer.Options.S3-integration.md).

# Administrative permissions on SSISDB
<a name="SSIS.Permissions"></a>

When the DB instance is created or modified with the SSIS option, the result is an SSISDB database with the ssis_admin and ssis_logreader roles granted to the master user. The master user has the following privileges in SSISDB:
+ alter on ssis_admin role
+ alter on ssis_logreader role
+ alter any user

Because the master user is a SQL-authenticated user, you can't use it to run SSIS packages. However, the master user can use these privileges to create new SSISDB users and add them to the ssis_admin and ssis_logreader roles. Doing this is useful for giving your domain users access to SSIS.

## Setting up a Windows-authenticated user for SSIS
<a name="SSIS.Use.Auth"></a>

The master user can use the following code example to set up a Windows-authenticated login in SSISDB and grant the required procedure permissions. Doing this grants permissions to the domain user to deploy and run SSIS packages, use S3 file transfer procedures, create credentials, and work with the SQL Server Agent proxy. For more information, see [Credentials (database engine)](https://docs.microsoft.com/en-us/sql/relational-databases/security/authentication-access/credentials-database-engine?view=sql-server-ver15) and [Create a SQL Server Agent proxy](https://docs.microsoft.com/en-us/sql/ssms/agent/create-a-sql-server-agent-proxy?view=sql-server-ver15) in the Microsoft documentation.

**Note**  
You can grant some or all of the following permissions as needed to Windows-authenticated users.

**Example**  

```
-- Create a server-level SQL login for the domain user, if it doesn't already exist
USE [master]
GO
CREATE LOGIN [mydomain\user_name] FROM WINDOWS
GO						
						
-- Create a database-level account for the domain user, if it doesn't already exist						
USE [SSISDB]
GO
CREATE USER [mydomain\user_name] FOR LOGIN [mydomain\user_name]

-- Add SSIS role membership to the domain user
ALTER ROLE [ssis_admin] ADD MEMBER [mydomain\user_name]
ALTER ROLE [ssis_logreader] ADD MEMBER [mydomain\user_name]
GO

-- Add MSDB role membership to the domain user
USE [msdb]
GO
CREATE USER [mydomain\user_name] FOR LOGIN [mydomain\user_name]

-- Grant MSDB stored procedure privileges to the domain user
GRANT EXEC ON msdb.dbo.rds_msbi_task TO [mydomain\user_name] with grant option
GRANT SELECT ON msdb.dbo.rds_fn_task_status TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_task_status TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_cancel_task TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_download_from_s3 TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_upload_to_s3 TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_delete_from_filesystem TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.rds_gather_file_details TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_add_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_update_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_grant_login_to_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_revoke_login_from_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_delete_proxy TO [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_enum_login_for_proxy to [mydomain\user_name] with grant option
GRANT EXEC ON msdb.dbo.sp_enum_proxy_for_subsystem TO [mydomain\user_name]  with grant option
GRANT EXEC ON msdb.dbo.rds_sqlagent_proxy TO [mydomain\user_name] WITH GRANT OPTION


-- Add the SQLAgentUserRole privilege to the domain user
USE [msdb]
GO
ALTER ROLE [SQLAgentUserRole] ADD MEMBER [mydomain\user_name]
GO

-- Grant the ALTER ANY CREDENTIAL privilege to the domain user
USE [master]
GO
GRANT ALTER ANY CREDENTIAL TO [mydomain\user_name]
GO
```

# Deploying an SSIS project
<a name="SSIS.Deploy"></a>

On RDS, you can't deploy SSIS projects directly by using SQL Server Management Studio (SSMS) or SSIS procedures. To download project files from Amazon S3 and then deploy them, use RDS stored procedures.

To run the stored procedures, log in as a user that has been granted permission to run them. For more information, see [Setting up a Windows-authenticated user for SSIS](SSIS.Permissions.md#SSIS.Use.Auth).

**To deploy the SSIS project**

1. Download the project (.ispac) file.

   ```
   exec msdb.dbo.rds_download_from_s3
   @s3_arn_of_file='arn:aws:s3:::bucket_name/ssisproject.ispac',
   @rds_file_path='D:\S3\ssisproject.ispac',
   @overwrite_file=1;
   ```

1. Submit the deployment task, making sure of the following:
   + The folder is present in the SSIS catalog.
   + The project name matches the project name that you used while developing the SSIS project.

   ```
   exec msdb.dbo.rds_msbi_task
   @task_type='SSIS_DEPLOY_PROJECT',
   @folder_name='DEMO',
   @project_name='ssisproject',
   @file_path='D:\S3\ssisproject.ispac';
   ```

# Monitoring the status of a deployment task
<a name="SSIS.Monitor"></a>

To track the status of your deployment task, call the `rds_fn_task_status` function. It takes two parameters. The first parameter should always be `NULL` because it doesn't apply to SSIS. The second parameter accepts a task ID. 

To see a list of all tasks, set the first parameter to `NULL` and the second parameter to `0`, as shown in the following example.

```
SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,0);
```

To get a specific task, set the first parameter to `NULL` and the second parameter to the task ID, as shown in the following example.

```
SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,42);
```

The `rds_fn_task_status` function returns the following information.


| Output parameter | Description | 
| --- | --- | 
| `task_id` | The ID of the task. | 
| `task_type` | `SSIS_DEPLOY_PROJECT` | 
| `database_name` | Not applicable to SSIS tasks. | 
| `% complete` | The progress of the task as a percentage. | 
| `duration (mins)` | The amount of time spent on the task, in minutes. | 
| `lifecycle` |  The status of the task. Possible statuses are the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SSIS.Monitor.html)  | 
| `task_info` | Additional information about the task. If an error occurs during processing, this column contains information about the error. | 
| `last_updated` | The date and time that the task status was last updated. | 
| `created_at` | The date and time that the task was created. | 
| `S3_object_arn` |  Not applicable to SSIS tasks.  | 
| `overwrite_S3_backup_file` | Not applicable to SSIS tasks. | 
| `KMS_master_key_arn` |  Not applicable to SSIS tasks.  | 
| `filepath` |  Not applicable to SSIS tasks.  | 
| `overwrite_file` |  Not applicable to SSIS tasks.  | 
| `task_metadata` | Metadata associated with the SSIS task. | 
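If you submit tasks from a script, you can poll `rds_fn_task_status` until the task reaches a terminal state. The following Python sketch is illustrative only: `run_query` is a placeholder for any helper that executes a T-SQL string and returns rows as dictionaries, and the terminal `lifecycle` values used here (`SUCCESS`, `ERROR`) are assumptions — check the full list of statuses for your engine version.

```python
import time

# Hypothetical polling loop around rds_fn_task_status. "run_query" is any
# callable that executes T-SQL and returns rows as dictionaries.
def wait_for_task(run_query, task_id, poll_seconds=30, max_polls=20):
    sql = f"SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,{int(task_id)});"
    for _ in range(max_polls):
        rows = run_query(sql)
        # Assumed terminal lifecycle values; verify against the documented list.
        if rows and rows[0]["lifecycle"] in ("SUCCESS", "ERROR"):
            return rows[0]
        time.sleep(poll_seconds)
    raise TimeoutError(f"task {task_id} did not finish")
```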

# Using SSIS
<a name="SSIS.Use"></a>

After deploying the SSIS project into the SSIS catalog, you can run packages directly from SSMS or schedule them by using SQL Server Agent. You must use a Windows-authenticated login to run SSIS packages. For more information, see [Setting up a Windows-authenticated user for SSIS](SSIS.Permissions.md#SSIS.Use.Auth).

**Topics**
+ [Setting database connection managers for SSIS projects](#SSIS.Use.ConnMgrs)
+ [Creating an SSIS proxy](#SSIS.Use.Proxy)
+ [Scheduling an SSIS package using SQL Server Agent](#SSIS.Use.Schedule)
+ [Revoking SSIS access from the proxy](#SSIS.Use.Revoke)

## Setting database connection managers for SSIS projects
<a name="SSIS.Use.ConnMgrs"></a>

When you use a connection manager, you can use these types of authentication:
+ For local database connections using AWS Managed Active Directory, you can use SQL authentication or Windows authentication. For Windows authentication, use `DB_instance_name.fully_qualified_domain_name` as the server name of the connection string.

  An example is `myssisinstance.corp-ad.example.com`, where `myssisinstance` is the DB instance name and `corp-ad.example.com` is the fully qualified domain name.
+ For local database connections using self-managed Active Directory, you can use SQL authentication or Windows authentication. For Windows authentication, use `.` or `LocalHost` as the server name of the connection string.
+ For remote connections, always use SQL authentication.
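The server-name rules above can be summarized in a small helper. The following Python sketch is for illustration only; the directory-type labels are placeholders, not RDS API values.

```python
# Illustrative helper encoding the connection manager server-name rules.
# The directory_type strings are placeholders, not RDS API values.
def connection_server_name(db_instance_name, directory_type, fqdn=None, remote=False):
    if remote:
        raise ValueError("remote connections must use SQL authentication")
    if directory_type == "aws-managed":
        # AWS Managed Active Directory: instance name plus the domain FQDN.
        return f"{db_instance_name}.{fqdn}"
    if directory_type == "self-managed":
        # Self-managed Active Directory: the local host ("." also works).
        return "LocalHost"
    raise ValueError(f"unknown directory type: {directory_type}")
```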

## Creating an SSIS proxy
<a name="SSIS.Use.Proxy"></a>

To be able to schedule SSIS packages using SQL Server Agent, create an SSIS credential and an SSIS proxy. Run these procedures as a Windows-authenticated user.

**To create the SSIS credential**
+ Create the credential for the proxy. To do this, you can use SSMS or the following SQL statement.

  ```
  USE [master]
  GO
  CREATE CREDENTIAL [SSIS_Credential] WITH IDENTITY = N'mydomain\user_name', SECRET = N'mysecret'
  GO
  ```
**Note**  
`IDENTITY` must be a domain-authenticated login. Replace `mysecret` with the password for the domain-authenticated login.  
Whenever the SSISDB primary host is changed, alter the SSIS proxy credentials to allow the new host to access them.

**To create the SSIS proxy**

1. Use the following SQL statement to create the proxy.

   ```
   USE [msdb]
   GO
   EXEC msdb.dbo.sp_add_proxy @proxy_name=N'SSIS_Proxy',@credential_name=N'SSIS_Credential',@description=N''
   GO
   ```

1. Use the following SQL statement to grant access to the proxy to other users.

   ```
   USE [msdb]
   GO
   EXEC msdb.dbo.sp_grant_login_to_proxy @proxy_name=N'SSIS_Proxy',@login_name=N'mydomain\user_name'
   GO
   ```

1. Use the following SQL statement to give the SSIS subsystem access to the proxy.

   ```
   USE [msdb]
   GO
   EXEC msdb.dbo.rds_sqlagent_proxy @task_type='GRANT_SUBSYSTEM_ACCESS',@proxy_name='SSIS_Proxy',@proxy_subsystem='SSIS'
   GO
   ```
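If you set up proxies for several instances, the three statements above can be parameterized. The following Python sketch only builds the T-SQL strings; running them in `msdb` as a Windows-authenticated user is left to whatever client you use, and the proxy, credential, and login names are placeholders.

```python
# Illustrative builder for the three proxy-setup statements shown above.
# The returned strings are meant to be run in msdb as a Windows-authenticated user.
def ssis_proxy_statements(proxy, credential, login):
    return [
        # 1. Create the proxy from the credential.
        f"EXEC msdb.dbo.sp_add_proxy @proxy_name=N'{proxy}',"
        f"@credential_name=N'{credential}',@description=N''",
        # 2. Grant a login access to the proxy.
        f"EXEC msdb.dbo.sp_grant_login_to_proxy @proxy_name=N'{proxy}',"
        f"@login_name=N'{login}'",
        # 3. Give the SSIS subsystem access to the proxy.
        f"EXEC msdb.dbo.rds_sqlagent_proxy @task_type='GRANT_SUBSYSTEM_ACCESS',"
        f"@proxy_name='{proxy}',@proxy_subsystem='SSIS'",
    ]
```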

**To view the proxy and grants on the proxy**

1. Use the following SQL statement to view the grantees of the proxy.

   ```
   USE [msdb]
   GO
   EXEC sp_help_proxy
   GO
   ```

1. Use the following SQL statement to view the subsystem grants.

   ```
   USE [msdb]
   GO
   EXEC msdb.dbo.sp_enum_proxy_for_subsystem
   GO
   ```

## Scheduling an SSIS package using SQL Server Agent
<a name="SSIS.Use.Schedule"></a>

After you create the credential and proxy and grant SSIS access to the proxy, you can create a SQL Server Agent job to schedule the SSIS package.

**To schedule the SSIS package**
+ You can use SSMS or T-SQL to create the SQL Server Agent job. The following example uses T-SQL.

  ```
  USE [msdb]
  GO
  DECLARE @jobId BINARY(16)
  EXEC msdb.dbo.sp_add_job @job_name=N'MYSSISJob',
  @enabled=1,
  @notify_level_eventlog=0,
  @notify_level_email=2,
  @notify_level_page=2,
  @delete_level=0,
  @category_name=N'[Uncategorized (Local)]',
  @job_id = @jobId OUTPUT
  GO
  EXEC msdb.dbo.sp_add_jobserver @job_name=N'MYSSISJob',@server_name=N'(local)'
  GO
  EXEC msdb.dbo.sp_add_jobstep @job_name=N'MYSSISJob',@step_name=N'ExecuteSSISPackage',
  @step_id=1,
  @cmdexec_success_code=0,
  @on_success_action=1,
  @on_fail_action=2,
  @retry_attempts=0,
  @retry_interval=0,
  @os_run_priority=0,
  @subsystem=N'SSIS',
  @command=N'/ISSERVER "\"\SSISDB\MySSISFolder\MySSISProject\MySSISPackage.dtsx\"" /SERVER "\"my-rds-ssis-instance.corp-ad.company.com/\"" 
  /Par "\"$ServerOption::LOGGING_LEVEL(Int16)\"";1 /Par "\"$ServerOption::SYNCHRONIZED(Boolean)\"";True /CALLERINFO SQLAGENT /REPORTING E',
  @database_name=N'master',
  @flags=0,
  @proxy_name=N'SSIS_Proxy'
  GO
  ```
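The `@command` value is the trickiest part of the job step because of its nested quoting. The following Python sketch assembles that string for a given folder, project, package, and server; all names are placeholders, and the escaping mirrors the T-SQL example above.

```python
# Illustrative builder for the @command value of the SSIS job step above.
# folder, project, package, and server are placeholders.
def isserver_command(folder, project, package, server):
    # Path of the package inside the SSIS catalog.
    pkg = rf'\SSISDB\{folder}\{project}\{package}'
    return (
        f'/ISSERVER "\\"{pkg}\\"" /SERVER "\\"{server}/\\"" '
        '/Par "\\"$ServerOption::LOGGING_LEVEL(Int16)\\"";1 '
        '/Par "\\"$ServerOption::SYNCHRONIZED(Boolean)\\"";True '
        '/CALLERINFO SQLAGENT /REPORTING E'
    )
```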

## Revoking SSIS access from the proxy
<a name="SSIS.Use.Revoke"></a>

You can revoke access to the SSIS subsystem and delete the SSIS proxy using the following stored procedures.

**To revoke access and delete the proxy**

1. Revoke subsystem access.

   ```
   USE [msdb]
   GO
   EXEC msdb.dbo.rds_sqlagent_proxy @task_type='REVOKE_SUBSYSTEM_ACCESS',@proxy_name='SSIS_Proxy',@proxy_subsystem='SSIS'
   GO
   ```

1. Revoke the grants on the proxy.

   ```
   USE [msdb]
   GO
   EXEC msdb.dbo.sp_revoke_login_from_proxy @proxy_name=N'SSIS_Proxy',@name=N'mydomain\user_name'
   GO
   ```

1. Delete the proxy.

   ```
   USE [msdb]
   GO
   EXEC dbo.sp_delete_proxy @proxy_name = N'SSIS_Proxy'
   GO
   ```

# Disabling SSIS and dropping the SSISDB database
<a name="SSIS.DisableDrop"></a>

Use the following procedures to disable SSIS or drop the SSISDB database:

**Topics**
+ [Disabling SSIS](#SSIS.Disable)
+ [Dropping the SSISDB database](#SSIS.Drop)

## Disabling SSIS
<a name="SSIS.Disable"></a>

To disable SSIS, remove the `SSIS` option from its option group.

**Important**  
Removing the option doesn't delete the SSISDB database, so you can safely remove the option without losing the SSIS projects.  
You can re-enable the `SSIS` option after removal to reuse the SSIS projects that were previously deployed to the SSIS catalog.

### Console
<a name="SSIS.Disable.Console"></a>

The following procedure removes the `SSIS` option.

**To remove the SSIS option from its option group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose the option group with the `SSIS` option (`ssis-se-2016` in the previous examples).

1. Choose **Delete option**.

1. Under **Deletion options**, choose **SSIS** for **Options to delete**.

1. Under **Apply immediately**, choose **Yes** to delete the option immediately, or **No** to delete it at the next maintenance window.

1. Choose **Delete**.

### CLI
<a name="SSIS.Disable.CLI"></a>

The following procedure removes the `SSIS` option.

**To remove the SSIS option from its option group**
+ Run one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds remove-option-from-option-group \
      --option-group-name ssis-se-2016 \
      --options SSIS \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds remove-option-from-option-group ^
      --option-group-name ssis-se-2016 ^
      --options SSIS ^
      --apply-immediately
  ```

## Dropping the SSISDB database
<a name="SSIS.Drop"></a>

Removing the SSIS option doesn't delete the SSISDB database. To drop the SSISDB database, use the `rds_drop_ssis_database` stored procedure after removing the option.

**To drop the SSIS database**
+ Use the following stored procedure.

  ```
  USE [msdb]
  GO
  EXEC dbo.rds_drop_ssis_database
  GO
  ```

If you re-enable the SSIS option after dropping the SSISDB database, you get a fresh SSISDB catalog.

# Support for SQL Server Reporting Services in Amazon RDS for SQL Server
<a name="Appendix.SQLServer.Options.SSRS"></a>

Microsoft SQL Server Reporting Services (SSRS) is a server-based application used for report generation and distribution. It's part of a suite of SQL Server services that also includes SQL Server Analysis Services (SSAS) and SQL Server Integration Services (SSIS). SSRS is a service built on top of SQL Server. You can use it to collect data from various data sources and present it in a way that's easily understandable and ready for analysis.

Amazon RDS for SQL Server supports running SSRS directly on RDS DB instances. You can use SSRS with existing or new DB instances.

RDS supports SSRS for SQL Server Standard and Enterprise Editions on the following versions:
+ SQL Server 2022, all versions
+ SQL Server 2019, version 15.00.4043.16.v1 and higher
+ SQL Server 2017, version 14.00.3223.3.v1 and higher
+ SQL Server 2016, version 13.00.5820.21.v1 and higher

**Contents**
+ [Limitations and recommendations](#SSRS.Limitations)
+ [Turning on SSRS](SSRS.Enabling.md)
  + [Creating an option group for SSRS](SSRS.Enabling.md#SSRS.OptionGroup)
  + [Adding the SSRS option to your option group](SSRS.Enabling.md#SSRS.Add)
  + [Associating your option group with your DB instance](SSRS.Enabling.md#SSRS.Apply)
  + [Allowing inbound access to your VPC security group](SSRS.Enabling.md#SSRS.Inbound)
+ [Report server databases](#SSRS.DBs)
+ [SSRS log files](#SSRS.Logs)
+ [Accessing the SSRS web portal](SSRS.Access.md)
  + [Using SSL on RDS](SSRS.Access.md#SSRS.Access.SSL)
  + [Granting access to domain users](SSRS.Access.md#SSRS.Access.Grant)
  + [Accessing the web portal](SSRS.Access.md#SSRS.Access)
+ [Deploying reports and configuring report data sources](SSRS.DeployConfig.md)
  + [Deploying reports to SSRS](SSRS.DeployConfig.md#SSRS.Deploy)
  + [Configuring the report data source](SSRS.DeployConfig.md#SSRS.ConfigureDataSource)
+ [Using SSRS Email to send reports](SSRS.Email.md)
+ [Revoking system-level permissions](SSRS.Access.Revoke.md)
+ [Monitoring the status of a task](SSRS.Monitor.md)
+ [Disabling and deleting SSRS databases](SSRS.DisableDelete.md)
  + [Turning off SSRS](SSRS.DisableDelete.md#SSRS.Disable)
  + [Deleting the SSRS databases](SSRS.DisableDelete.md#SSRS.Drop)

## Limitations and recommendations
<a name="SSRS.Limitations"></a>

The following limitations and recommendations apply to running SSRS on RDS for SQL Server:
+ You can't use SSRS on DB instances that have read replicas.
+ Instances must use self-managed Active Directory or AWS Directory Service for Microsoft Active Directory for SSRS web portal and web server authentication. For more information, see [Working with Active Directory with RDS for SQL Server](User.SQLServer.ActiveDirectoryWindowsAuth.md). 
+ You can't back up the reporting server databases that are created with the SSRS option.
+ Importing and restoring report server databases from other instances of SSRS isn't supported. For more information, see [Report server databases](#SSRS.DBs).
+ You can't configure SSRS to listen on the default SSL port (443). The allowed values are 1150–49511, except 1234, 1434, 3260, 3343, 3389, and 47001.
+ Subscriptions through a Microsoft Windows file share aren't supported.
+ Using Reporting Services Configuration Manager isn't supported.
+ Creating and modifying roles isn't supported.
+ Modifying report server properties isn't supported.
+ System administrator and system user roles aren't granted.
+ You can't edit system-level role assignments through the web portal.
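The listener port rule above is easy to get wrong, so a quick check helps when you automate option group changes. The following Python sketch encodes the allowed range and exclusions from this list; it is illustrative, not an RDS API.

```python
# Ports excluded from the allowed 1150-49511 range, per the list above.
_RESERVED = {1234, 1434, 3260, 3343, 3389, 47001}

def is_valid_ssrs_port(port):
    # The default SSL port (443) falls outside the range and is never allowed.
    return 1150 <= port <= 49511 and port not in _RESERVED
```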

# Turning on SSRS
<a name="SSRS.Enabling"></a>

Use the following process to turn on SSRS for your DB instance:

1. Create a new option group, or choose an existing option group.

1. Add the `SSRS` option to the option group.

1. Associate the option group with the DB instance.

1. Allow inbound access to the virtual private cloud (VPC) security group for the SSRS listener port.

## Creating an option group for SSRS
<a name="SSRS.OptionGroup"></a>

To work with SSRS, create an option group that corresponds to the SQL Server engine and version of the DB instance that you plan to use. To do this, use the AWS Management Console or the AWS CLI. 

**Note**  
You can also use an existing option group if it's for the correct SQL Server engine and version.

### Console
<a name="SSRS.OptionGroup.Console"></a>

The following procedure creates an option group for SQL Server Standard Edition 2017.

**To create the option group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose **Create group**.

1. In the **Create option group** pane, do the following:

   1. For **Name**, enter a name for the option group that is unique within your AWS account, such as **ssrs-se-2017**. The name can contain only letters, digits, and hyphens.

   1. For **Description**, enter a brief description of the option group, such as **SSRS option group for SQL Server SE 2017**. The description is used for display purposes.

   1. For **Engine**, choose **sqlserver-se**.

   1. For **Major engine version**, choose **14.00**.

1. Choose **Create**.

### CLI
<a name="SSRS.OptionGroup.CLI"></a>

The following procedure creates an option group for SQL Server Standard Edition 2017.

**To create the option group**
+ Run one of the following commands.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds create-option-group \
    --option-group-name ssrs-se-2017 \
    --engine-name sqlserver-se \
    --major-engine-version 14.00 \
    --option-group-description "SSRS option group for SQL Server SE 2017"
```
For Windows:  

```
aws rds create-option-group ^
    --option-group-name ssrs-se-2017 ^
    --engine-name sqlserver-se ^
    --major-engine-version 14.00 ^
    --option-group-description "SSRS option group for SQL Server SE 2017"
```

## Adding the SSRS option to your option group
<a name="SSRS.Add"></a>

Next, use the AWS Management Console or the AWS CLI to add the `SSRS` option to your option group.

### Console
<a name="SSRS.Add.CON"></a>

**To add the SSRS option**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose the option group that you just created, then choose **Add option**.

1. Under **Option details**, choose **SSRS** for **Option name**.

1. Under **Option settings**, do the following:

   1. Enter the port for the SSRS service to listen on. The default is 8443. For a list of allowed values, see [Limitations and recommendations](Appendix.SQLServer.Options.SSRS.md#SSRS.Limitations).

   1. Enter a value for **Max memory**.

      **Max memory** specifies the upper threshold above which no new memory allocation requests are granted to report server applications. The number is a percentage of the total memory of the DB instance. The allowed values are 10–80.

   1. For **Security groups**, choose the VPC security group to associate with the option. Use the same security group that is associated with your DB instance.

1. To use SSRS Email to send reports, choose the **Configure email delivery options** check box under **Email delivery in reporting services**, and then do the following:

   1. For **Sender email address**, enter the email address to use in the **From** field of messages sent by SSRS Email.

      Specify a user account that has permission to send mail from the SMTP server.

   1. For **SMTP server**, specify the SMTP server or gateway to use.

      It can be an IP address, the NetBIOS name of a computer on your corporate intranet, or a fully qualified domain name.

   1. For **SMTP port**, enter the port to use to connect to the mail server. The default is 25.

   1. To use authentication:

      1. Select the **Use authentication** check box.

      1. For **Secret Amazon Resource Name (ARN)** enter the AWS Secrets Manager ARN for the user credentials.

         Use the following format:

         **arn:aws:secretsmanager:*Region*:*AccountId*:secret:*SecretName*-*6RandomCharacters***

         For example:

         **arn:aws:secretsmanager:*us-west-2*:*123456789012*:secret:*MySecret-a1b2c3***

         For more information on creating the secret, see [Using SSRS Email to send reports](SSRS.Email.md).

   1. Select the **Use Secure Sockets Layer (SSL)** check box to encrypt email messages using SSL.

1. Under **Scheduling**, choose whether to add the option immediately or at the next maintenance window.

1. Choose **Add option**.
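A quick format check can catch a mistyped secret ARN before you save the option settings. The following Python sketch uses a loose, illustrative pattern for the format shown above, not the authoritative ARN grammar.

```python
import re

# Loose, illustrative pattern for
# arn:aws:secretsmanager:Region:AccountId:secret:SecretName-6RandomCharacters
_SECRET_ARN = re.compile(
    r"^arn:aws:secretsmanager:[a-z0-9-]+:\d{12}:secret:.+-[A-Za-z0-9]{6}$"
)

def looks_like_secret_arn(arn):
    return bool(_SECRET_ARN.match(arn))
```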

### CLI
<a name="SSRS.Add.CLI"></a>

**To add the SSRS option**

1. Create a JSON file, for example `ssrs-option.json`.

   1. Set the following required parameters:
      + `OptionGroupName` – The name of the option group that you created or chose previously (`ssrs-se-2017` in the following example).
      + `Port` – The port for the SSRS service to listen on. The default is 8443. For a list of allowed values, see [Limitations and recommendations](Appendix.SQLServer.Options.SSRS.md#SSRS.Limitations).
      + `VpcSecurityGroupMemberships` – VPC security group memberships for your RDS DB instance.
      + `MAX_MEMORY` – The upper threshold above which no new memory allocation requests are granted to report server applications. The number is a percentage of the total memory of the DB instance. The allowed values are 10–80.

   1. (Optional) Set the following parameters to use SSRS Email:
      + `SMTP_ENABLE_EMAIL` – Set to `true` to use SSRS Email. The default is `false`.
      + `SMTP_SENDER_EMAIL_ADDRESS` – The email address to use in the **From** field of messages sent by SSRS Email. Specify a user account that has permission to send mail from the SMTP server.
      + `SMTP_SERVER` – The SMTP server or gateway to use. It can be an IP address, the NetBIOS name of a computer on your corporate intranet, or a fully qualified domain name.
      + `SMTP_PORT` – The port to use to connect to the mail server. The default is 25.
      + `SMTP_USE_SSL` – Set to `true` to encrypt email messages using SSL. The default is `true`.
      + `SMTP_EMAIL_CREDENTIALS_SECRET_ARN` – The Secrets Manager ARN that holds the user credentials. Use the following format:

        **arn:aws:secretsmanager:*Region*:*AccountId*:secret:*SecretName*-*6RandomCharacters***

        For more information on creating the secret, see [Using SSRS Email to send reports](SSRS.Email.md).
      + `SMTP_USE_ANONYMOUS_AUTHENTICATION` – Set to `true` and don't include `SMTP_EMAIL_CREDENTIALS_SECRET_ARN` if you don't want to use authentication.

        The default is `false` when `SMTP_ENABLE_EMAIL` is `true`.

   The following example includes the SSRS Email parameters, using the secret ARN.

   ```
   {
     "OptionGroupName": "ssrs-se-2017",
     "OptionsToInclude": [
       {
         "OptionName": "SSRS",
         "Port": 8443,
         "VpcSecurityGroupMemberships": ["sg-0abcdef123"],
         "OptionSettings": [
           {"Name": "MAX_MEMORY", "Value": "60"},
           {"Name": "SMTP_ENABLE_EMAIL", "Value": "true"},
           {"Name": "SMTP_SENDER_EMAIL_ADDRESS", "Value": "nobody@example.com"},
           {"Name": "SMTP_SERVER", "Value": "email-smtp.us-west-2.amazonaws.com"},
           {"Name": "SMTP_PORT", "Value": "25"},
           {"Name": "SMTP_USE_SSL", "Value": "true"},
           {"Name": "SMTP_EMAIL_CREDENTIALS_SECRET_ARN", "Value": "arn:aws:secretsmanager:us-west-2:123456789012:secret:MySecret-a1b2c3"}
         ]
       }
     ],
     "ApplyImmediately": true
   }
   ```

1. Add the `SSRS` option to the option group.  
**Example**  

   For Linux, macOS, or Unix:

   ```
   aws rds add-option-to-option-group \
       --cli-input-json file://ssrs-option.json \
       --apply-immediately
   ```

   For Windows:

   ```
   aws rds add-option-to-option-group ^
       --cli-input-json file://ssrs-option.json ^
       --apply-immediately
   ```

## Associating your option group with your DB instance
<a name="SSRS.Apply"></a>

Use the AWS Management Console or the AWS CLI to associate your option group with your DB instance.

If you use an existing DB instance, it must already have an Active Directory domain and AWS Identity and Access Management (IAM) role associated with it. If you create a new instance, specify an existing Active Directory domain and IAM role. For more information, see [Working with Active Directory with RDS for SQL Server](User.SQLServer.ActiveDirectoryWindowsAuth.md).

### Console
<a name="SSRS.Apply.Console"></a>

You can associate your option group with a new or existing DB instance:
+ For a new DB instance, associate the option group when you launch the instance. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).
+ For an existing DB instance, modify the instance and associate the new option group. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

### CLI
<a name="SSRS.Apply.CLI"></a>

You can associate your option group with a new or existing DB instance.

**To create a DB instance that uses your option group**
+ Specify the same DB engine type and major version as you used when creating the option group.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-db-instance \
      --db-instance-identifier myssrsinstance \
      --db-instance-class db.m5.2xlarge \
      --engine sqlserver-se \
      --engine-version 14.00.3223.3.v1 \
      --allocated-storage 100 \
      --manage-master-user-password  \
      --master-username admin \
      --storage-type gp2 \
      --license-model li \
      --domain-iam-role-name my-directory-iam-role \
      --domain my-domain-id \
      --option-group-name ssrs-se-2017
  ```

  For Windows:

  ```
  aws rds create-db-instance ^
      --db-instance-identifier myssrsinstance ^
      --db-instance-class db.m5.2xlarge ^
      --engine sqlserver-se ^
      --engine-version 14.00.3223.3.v1 ^
      --allocated-storage 100 ^
      --manage-master-user-password ^
      --master-username admin ^
      --storage-type gp2 ^
      --license-model li ^
      --domain-iam-role-name my-directory-iam-role ^
      --domain my-domain-id ^
      --option-group-name ssrs-se-2017
  ```

**To modify a DB instance to use your option group**
+ Run one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-instance \
      --db-instance-identifier myssrsinstance \
      --option-group-name ssrs-se-2017 \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds modify-db-instance ^
      --db-instance-identifier myssrsinstance ^
      --option-group-name ssrs-se-2017 ^
      --apply-immediately
  ```

## Allowing inbound access to your VPC security group
<a name="SSRS.Inbound"></a>

To allow inbound access to the VPC security group associated with your DB instance, create an inbound rule for the specified SSRS listener port. For more information about setting up security groups, see [Provide access to your DB instance in your VPC by creating a security group](CHAP_SettingUp.md#CHAP_SettingUp.SecurityGroup).

## Report server databases
<a name="SSRS.DBs"></a>

When your DB instance is associated with the SSRS option, two new databases are created on your DB instance:
+ `rdsadmin_ReportServer`
+ `rdsadmin_ReportServerTempDB`

These databases act as the ReportServer and ReportServerTempDB databases. SSRS stores its data in the ReportServer database and caches its data in the ReportServerTempDB database. For more information, see [Report Server Database](https://learn.microsoft.com/en-us/sql/reporting-services/report-server/report-server-database-ssrs-native-mode?view=sql-server-ver15) in the Microsoft documentation.

RDS owns and manages these databases, so database operations on them, such as `ALTER` and `DROP`, aren't permitted. Access isn't permitted on the `rdsadmin_ReportServerTempDB` database. However, you can perform read operations on the `rdsadmin_ReportServer` database.

## SSRS log files
<a name="SSRS.Logs"></a>

You can list, view, and download SSRS log files. SSRS log files follow a naming convention of ReportServerService\_*timestamp*.log. These report server logs are located in the `D:\rdsdbdata\Log\SSRS` directory. (The `D:\rdsdbdata\Log` directory is also the parent directory for error logs and SQL Server Agent logs.) For more information, see [Viewing and listing database log files](USER_LogAccess.Procedural.Viewing.md).

For existing SSRS instances, restarting the SSRS service might be necessary to access report server logs. You can restart the service by updating the `SSRS` option.

For more information, see [Working with Amazon RDS for Microsoft SQL Server logs](Appendix.SQLServer.CommonDBATasks.Logs.md).

# Accessing the SSRS web portal
<a name="SSRS.Access"></a>

Use the following process to access the SSRS web portal:

1. Turn on Secure Sockets Layer (SSL).

1. Grant access to domain users.

1. Access the web portal using a browser and the domain user credentials.

## Using SSL on RDS
<a name="SSRS.Access.SSL"></a>

SSRS uses the HTTPS SSL protocol for its connections. To work with this protocol, import an SSL certificate into the Microsoft Windows operating system on your client computer.

For more information on SSL certificates, see [Using SSL/TLS to encrypt a connection to a DB instance or cluster ](UsingWithRDS.SSL.md). For more information about using SSL with SQL Server, see [Using SSL with a Microsoft SQL Server DB instance](SQLServer.Concepts.General.SSL.Using.md).

## Granting access to domain users
<a name="SSRS.Access.Grant"></a>

A new SSRS activation has no role assignments. To give a domain user or user group access to the web portal, RDS provides a stored procedure.

**To grant access to a domain user on the web portal**
+ Use the following stored procedure.

  ```
  exec msdb.dbo.rds_msbi_task
  @task_type='SSRS_GRANT_PORTAL_PERMISSION',
  @ssrs_group_or_username=N'AD_domain\user';
  ```

The domain user or user group is granted the `RDS_SSRS_ROLE` system role. This role has the following system-level tasks granted to it:
+ Run reports
+ Manage jobs
+ Manage shared schedules
+ View shared schedules

The item-level role of `Content Manager` on the root folder is also granted.

## Accessing the web portal
<a name="SSRS.Access"></a>

After the `SSRS_GRANT_PORTAL_PERMISSION` task finishes successfully, you have access to the portal using a web browser. The web portal URL has the following format.

```
https://rds_endpoint:port/Reports
```

In this format, the following applies:
+ *`rds_endpoint`* – The endpoint for the RDS DB instance that you're using with SSRS.

  You can find the endpoint on the **Connectivity & security** tab for your DB instance. For more information, see [Connecting to your Microsoft SQL Server DB instance](USER_ConnectToMicrosoftSQLServerInstance.md).
+ `port` – The listener port for SSRS that you set in the `SSRS` option.
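The URL format above can be assembled from the endpoint and listener port with a small sketch; the endpoint value in the usage below is a placeholder.

```python
# Illustrative helper that builds the SSRS web portal URL from the
# DB instance endpoint and the SSRS listener port.
def ssrs_portal_url(rds_endpoint, port):
    return f"https://{rds_endpoint}:{port}/Reports"
```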

**To access the web portal**

1. Enter the web portal URL in your browser.

   ```
   https://myssrsinstance.cg034itsfake.us-east-1.rds.amazonaws.com:8443/Reports
   ```

1. Log in with the credentials for a domain user that you granted access with the `SSRS_GRANT_PORTAL_PERMISSION` task.

# Deploying reports and configuring report data sources
<a name="SSRS.DeployConfig"></a>

Use the following procedures to deploy reports to SSRS and configure the reporting data sources:

**Topics**
+ [Deploying reports to SSRS](#SSRS.Deploy)
+ [Configuring the report data source](#SSRS.ConfigureDataSource)

## Deploying reports to SSRS
<a name="SSRS.Deploy"></a>

After you have access to the web portal, you can deploy reports to it. You can use the Upload tool in the web portal to upload reports, or deploy directly from [SQL Server Data Tools (SSDT)](https://docs.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt). When deploying from SSDT, ensure the following:
+ The user who launched SSDT has access to the SSRS web portal.
+ The `TargetServerURL` value in the SSRS project properties is set to the HTTPS endpoint of the RDS DB instance suffixed with `ReportServer`, for example:

  ```
  https://myssrsinstance.cg034itsfake.us-east-1.rds.amazonaws.com:8443/ReportServer
  ```

## Configuring the report data source
<a name="SSRS.ConfigureDataSource"></a>

After you deploy a report to SSRS, you should configure the report data source. When configuring the report data source, ensure the following:
+ For RDS for SQL Server DB instances joined to AWS Directory Service for Microsoft Active Directory, use the fully qualified domain name (FQDN) as the data source name of the connection string. An example is `myssrsinstance.corp-ad.example.com`, where `myssrsinstance` is the DB instance name and `corp-ad.example.com` is the fully qualified domain name. 
+ For RDS for SQL Server DB instances joined to self-managed Active Directory, use `.`, or `LocalHost` as the data source name of the connection string.

# Using SSRS Email to send reports
<a name="SSRS.Email"></a>

SSRS includes the SSRS Email extension, which you can use to send reports to users.

To configure SSRS Email, use the `SSRS` option settings. For more information, see [Adding the SSRS option to your option group](SSRS.Enabling.md#SSRS.Add).

After configuring SSRS Email, you can subscribe to reports on the report server. For more information, see [Email delivery in Reporting Services](https://docs.microsoft.com/en-us/sql/reporting-services/subscriptions/e-mail-delivery-in-reporting-services) in the Microsoft documentation.

Integration with AWS Secrets Manager is required for SSRS Email to function on RDS. To integrate with Secrets Manager, you create a secret.

**Note**  
If you change the secret later, you also have to update the `SSRS` option in the option group.

**To create a secret for SSRS Email**

1. Follow the steps in [Create a secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html) in the *AWS Secrets Manager User Guide*.

   1. For **Select secret type**, choose **Other type of secrets**.

   1. For **Key/value pairs**, enter the following:
      + **SMTP\_USERNAME** – Enter a user with permission to send mail from the SMTP server.
      + **SMTP\_PASSWORD** – Enter a password for the SMTP user.

   1. For **Encryption key**, don't use the default AWS KMS key. Use your own existing key, or create a new one.

      The KMS key policy must allow the `kms:Decrypt` action, for example:

      ```
      {
          "Sid": "Allow use of the key",
          "Effect": "Allow",
          "Principal": {
              "Service": [
                  "rds.amazonaws.com"
              ]
          },
          "Action": [
              "kms:Decrypt"
          ],
          "Resource": "*"
      }
      ```

1. Follow the steps in [Attach a permissions policy to a secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_resource-policies.html) in the *AWS Secrets Manager User Guide*. The permissions policy gives the `secretsmanager:GetSecretValue` action to the `rds.amazonaws.com` service principal.

   We recommend that you use the `aws:sourceAccount` and `aws:sourceArn` conditions in the policy to avoid the *confused deputy* problem. Use your AWS account ID for `aws:sourceAccount` and the option group ARN for `aws:sourceArn`. For more information, see [Preventing cross-service confused deputy problems](cross-service-confused-deputy-prevention.md).

   The following example shows a permissions policy.


   ```
   {
     "Version":"2012-10-17",
     "Statement" : [ {
       "Effect" : "Allow",
       "Principal" : {
         "Service" : "rds.amazonaws.com"
       },
       "Action" : "secretsmanager:GetSecretValue",
       "Resource" : "*",
       "Condition" : {
         "StringEquals" : {
           "aws:sourceAccount" : "123456789012"
         },
         "ArnLike" : {
           "aws:sourceArn" : "arn:aws:rds:us-west-2:123456789012:og:ssrs-se-2017"
         }
       }
     } ]
   }
   ```


   For more examples, see [Permissions policy examples for AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_examples.html) in the *AWS Secrets Manager User Guide*.
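You can also perform both steps from the AWS CLI. The following sketch assumes the key names `SMTP_USERNAME` and `SMTP_PASSWORD`, and uses placeholder values for the secret name, KMS key alias, and SMTP credentials. It assumes that `policy.json` contains a permissions policy like the previous example.

```
# Create the secret with the SMTP credentials, encrypted with your own KMS key.
aws secretsmanager create-secret \
    --name ssrs-email-secret \
    --kms-key-id alias/my-ssrs-key \
    --secret-string '{"SMTP_USERNAME":"smtp-user","SMTP_PASSWORD":"smtp-password"}'

# Attach the permissions policy to the secret.
aws secretsmanager put-resource-policy \
    --secret-id ssrs-email-secret \
    --resource-policy file://policy.json
```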

# Revoking system-level permissions
<a name="SSRS.Access.Revoke"></a>

The `RDS_SSRS_ROLE` system role doesn't have sufficient permissions to delete system-level role assignments. To remove a user or user group from `RDS_SSRS_ROLE`, use the same stored procedure that you used to grant the role, but specify the `SSRS_REVOKE_PORTAL_PERMISSION` task type.

**To revoke access from a domain user for the web portal**
+ Use the following stored procedure.

  ```
  exec msdb.dbo.rds_msbi_task
  @task_type='SSRS_REVOKE_PORTAL_PERMISSION',
  @ssrs_group_or_username=N'AD_domain\user';
  ```

Doing this deletes the user from the `RDS_SSRS_ROLE` system role. It also deletes the user from the `Content Manager` item-level role if the user has it.

# Monitoring the status of a task
<a name="SSRS.Monitor"></a>

To track the status of your granting or revoking task, call the `rds_fn_task_status` function. It takes two parameters. The first parameter should always be `NULL` because it doesn't apply to SSRS. The second parameter accepts a task ID. 

To see a list of all tasks, set the first parameter to `NULL` and the second parameter to `0`, as shown in the following example.

```
SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,0);
```

To get a specific task, set the first parameter to `NULL` and the second parameter to the task ID, as shown in the following example.

```
SELECT * FROM msdb.dbo.rds_fn_task_status(NULL,42);
```

The `rds_fn_task_status` function returns the following information.


| Output parameter | Description | 
| --- | --- | 
| `task_id` | The ID of the task. | 
| `task_type` | For SSRS, tasks can have the following task types: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SSRS.Monitor.html)  | 
| `database_name` | Not applicable to SSRS tasks. | 
| `% complete` | The progress of the task as a percentage. | 
| `duration (mins)` | The amount of time spent on the task, in minutes. | 
| `lifecycle` |  The status of the task. Possible statuses are the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SSRS.Monitor.html)  | 
| `task_info` | Additional information about the task. If an error occurs during processing, this column contains information about the error.  | 
| `last_updated` | The date and time that the task status was last updated.  | 
| `created_at` | The date and time that the task was created. | 
| `S3_object_arn` |  Not applicable to SSRS tasks.  | 
| `overwrite_S3_backup_file` | Not applicable to SSRS tasks. | 
| `KMS_master_key_arn` |  Not applicable to SSRS tasks.  | 
| `filepath` |  Not applicable to SSRS tasks.  | 
| `overwrite_file` |  Not applicable to SSRS tasks.  | 
| `task_metadata` | Metadata associated with the SSRS task. | 

# Disabling and deleting SSRS databases
<a name="SSRS.DisableDelete"></a>

Use the following procedures to disable SSRS and delete SSRS databases:

**Topics**
+ [Turning off SSRS](#SSRS.Disable)
+ [Deleting the SSRS databases](#SSRS.Drop)

## Turning off SSRS
<a name="SSRS.Disable"></a>

To turn off SSRS, remove the `SSRS` option from its option group. Removing the option doesn't delete the SSRS databases. For more information, see [Deleting the SSRS databases](#SSRS.Drop).

You can turn SSRS on again by adding back the `SSRS` option. If you have also deleted the SSRS databases, adding the option again on the same DB instance creates new report server databases.

### Console
<a name="SSRS.Disable.Console"></a>

**To remove the SSRS option from its option group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose the option group with the `SSRS` option (`ssrs-se-2017` in the previous examples).

1. Choose **Delete option**.

1. Under **Deletion options**, choose **SSRS** for **Options to delete**.

1. Under **Apply immediately**, choose **Yes** to delete the option immediately, or **No** to delete it at the next maintenance window.

1. Choose **Delete**.

### CLI
<a name="SSRS.Disable.CLI"></a>

**To remove the SSRS option from its option group**
+ Run one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds remove-option-from-option-group \
      --option-group-name ssrs-se-2017 \
      --options SSRS \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds remove-option-from-option-group ^
      --option-group-name ssrs-se-2017 ^
      --options SSRS ^
      --apply-immediately
  ```

## Deleting the SSRS databases
<a name="SSRS.Drop"></a>

Removing the `SSRS` option doesn't delete the report server databases. To delete them, first make sure that the `SSRS` option has been removed, and then run the following stored procedure.

**To delete the SSRS databases**
+ Use the following stored procedure.

  ```
  exec msdb.dbo.rds_drop_ssrs_databases
  ```

# Support for Microsoft Distributed Transaction Coordinator in RDS for SQL Server
<a name="Appendix.SQLServer.Options.MSDTC"></a>

A *distributed transaction* is a database transaction in which two or more network hosts are involved. RDS for SQL Server supports distributed transactions among hosts, where a single host can be one of the following:
+ RDS for SQL Server DB instance
+ On-premises SQL Server host
+ Amazon EC2 host with SQL Server installed
+ Any other EC2 host or RDS DB instance with a database engine that supports distributed transactions

In RDS, starting with SQL Server 2012 (version 11.00.5058.0.v1 and later), all editions of RDS for SQL Server support distributed transactions. The support is provided using Microsoft Distributed Transaction Coordinator (MSDTC). For in-depth information about MSDTC, see [Distributed Transaction Coordinator](https://docs.microsoft.com/en-us/previous-versions/windows/desktop/ms684146(v=vs.85)) in the Microsoft documentation.

**Contents**
+ [Limitations](#Appendix.SQLServer.Options.MSDTC.Limitations)
+ [Enabling MSDTC](Appendix.SQLServer.Options.MSDTC.Enabling.md)
  + [Creating the option group for MSDTC](Appendix.SQLServer.Options.MSDTC.Enabling.md#Appendix.SQLServer.Options.MSDTC.OptionGroup)
  + [Adding the MSDTC option to the option group](Appendix.SQLServer.Options.MSDTC.Enabling.md#Appendix.SQLServer.Options.MSDTC.Add)
  + [Creating the parameter group for MSDTC](Appendix.SQLServer.Options.MSDTC.Enabling.md#MSDTC.CreateParamGroup)
  + [Modifying the parameter for MSDTC](Appendix.SQLServer.Options.MSDTC.Enabling.md#ModifyParam.MSDTC)
  + [Associating the option group and parameter group with the DB instance](Appendix.SQLServer.Options.MSDTC.Enabling.md#MSDTC.Apply)
  + [Modifying the MSDTC option](Appendix.SQLServer.Options.MSDTC.Enabling.md#Appendix.SQLServer.Options.MSDTC.Modify)
+ [Using transactions](#Appendix.SQLServer.Options.MSDTC.Using)
  + [Using distributed transactions](#Appendix.SQLServer.Options.MSDTC.UsingXA)
  + [Using XA transactions](#MSDTC.XA)
  + [Using transaction tracing](#MSDTC.Tracing)
+ [Disabling MSDTC](Appendix.SQLServer.Options.MSDTC.Disable.md)
+ [Troubleshooting MSDTC for RDS for SQL Server](Appendix.SQLServer.Options.MSDTC.Troubleshooting.md)

## Limitations
<a name="Appendix.SQLServer.Options.MSDTC.Limitations"></a>

The following limitations apply to using MSDTC on RDS for SQL Server:
+ MSDTC isn't supported on instances using SQL Server Database Mirroring. For more information, see [Transactions - availability groups and database mirroring](https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/transactions-always-on-availability-and-database-mirroring?view=sql-server-ver15#non-support-for-distributed-transactions).
+ The `in-doubt xact resolution` parameter must be set to 1 or 2. For more information, see [Modifying the parameter for MSDTC](Appendix.SQLServer.Options.MSDTC.Enabling.md#ModifyParam.MSDTC).
+ MSDTC requires all hosts participating in distributed transactions to be resolvable using their host names. RDS maintains this automatically for domain-joined instances. However, for standalone instances, make sure to configure the DNS server manually.
+ Java Database Connectivity (JDBC) XA transactions are supported for SQL Server 2017 version 14.00.3223.3 and higher, and SQL Server 2019.
+ Distributed transactions that depend on client dynamic link libraries (DLLs) on RDS instances aren't supported.
+ Using custom XA dynamic link libraries isn't supported.

# Enabling MSDTC
<a name="Appendix.SQLServer.Options.MSDTC.Enabling"></a>

Use the following process to enable MSDTC for your DB instance:

1. Create a new option group, or choose an existing option group.

1. Add the `MSDTC` option to the option group.

1. Create a new parameter group, or choose an existing parameter group.

1. Modify the parameter group to set the `in-doubt xact resolution` parameter to 1 or 2.

1. Associate the option group and parameter group with the DB instance.

## Creating the option group for MSDTC
<a name="Appendix.SQLServer.Options.MSDTC.OptionGroup"></a>

Use the AWS Management Console or the AWS CLI to create an option group that corresponds to the SQL Server engine and version of your DB instance.

**Note**  
You can also use an existing option group if it's for the correct SQL Server engine and version.

### Console
<a name="OptionGroup.MSDTC.Console"></a>

The following procedure creates an option group for SQL Server Standard Edition 2016.

**To create the option group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose **Create group**.

1. In the **Create option group** pane, do the following:

   1. For **Name**, enter a name for the option group that is unique within your AWS account, such as **msdtc-se-2016**. The name can contain only letters, digits, and hyphens.

   1. For **Description**, enter a brief description of the option group, such as **MSDTC option group for SQL Server SE 2016**. The description is used for display purposes. 

   1. For **Engine**, choose **sqlserver-se**.

   1. For **Major engine version**, choose **13.00**.

1. Choose **Create**.

### CLI
<a name="OptionGroup.MSDTC.CLI"></a>

The following example creates an option group for SQL Server Standard Edition 2016.

**To create the option group**
+ Use one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-option-group \
      --option-group-name msdtc-se-2016 \
      --engine-name sqlserver-se \
      --major-engine-version 13.00 \
      --option-group-description "MSDTC option group for SQL Server SE 2016"
  ```

  For Windows:

  ```
  aws rds create-option-group ^
      --option-group-name msdtc-se-2016 ^
      --engine-name sqlserver-se ^
      --major-engine-version 13.00 ^
      --option-group-description "MSDTC option group for SQL Server SE 2016"
  ```

## Adding the MSDTC option to the option group
<a name="Appendix.SQLServer.Options.MSDTC.Add"></a>

Next, use the AWS Management Console or the AWS CLI to add the `MSDTC` option to the option group.

The following option settings are required:
+ **Port** – The port that you use to access MSDTC. Allowed values are 1150–49151 except for 1234, 1434, 3260, 3343, 3389, and 47001. The default value is 5000.

  Make sure that the port you want to use is enabled in your firewall rules. Also, make sure as needed that this port is enabled in the inbound and outbound rules for the security group associated with your DB instance. For more information, see [Can't connect to Amazon RDS DB instance](CHAP_Troubleshooting.md#CHAP_Troubleshooting.Connecting). 
+ **Security groups** – The VPC security group memberships for your RDS DB instance.
+ **Authentication type** – The authentication mode between hosts. The following authentication types are supported:
  + Mutual – The RDS instances are mutually authenticated to each other using integrated authentication. If this option is selected, all instances associated with this option group must be domain-joined.
  + None – No authentication is performed between hosts. We don't recommend using this mode in production environments.
+ **Transaction log size** – The size of the MSDTC transaction log. Allowed values are 4–1024 MB. The default size is 4 MB.

The following option settings are optional:
+ **Enable inbound connections** – Whether to allow inbound MSDTC connections to instances associated with this option group.
+ **Enable outbound connections** – Whether to allow outbound MSDTC connections from instances associated with this option group.
+ **Enable XA** – Whether to allow XA transactions. For more information on the XA protocol, see [XA specification](https://publications.opengroup.org/c193).
+ **Enable SNA LU** – Whether to allow the SNA LU protocol to be used for distributed transactions. For more information on SNA LU protocol support, see [Managing IBM CICS LU 6.2 transactions](https://docs.microsoft.com/en-us/previous-versions/windows/desktop/ms685136(v=vs.85)) in the Microsoft documentation.

### Console
<a name="Options.MSDTC.Add.Console"></a>

**To add the MSDTC option**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose the option group that you just created.

1. Choose **Add option**.

1. Under **Option details**, choose **MSDTC** for **Option name**.

1. Under **Option settings**:

   1. For **Port**, enter the port number for accessing MSDTC. The default is **5000**.

   1. For **Security groups**, choose the VPC security group to associate with the option.

   1. For **Authentication type**, choose **Mutual** or **None**.

   1. For **Transaction log size**, enter a value from 4–1024. The default is **4**.

1. Under **Additional configuration**, do the following:

   1. For **Connections**, as needed choose **Enable inbound connections** and **Enable outbound connections**.

   1. For **Allowed protocols**, as needed choose **Enable XA** and **Enable SNA LU**.

1. Under **Scheduling**, choose whether to add the option immediately or at the next maintenance window.

1. Choose **Add option**.

   To add this option, no reboot is required.

### CLI
<a name="Options.MSDTC.Add.CLI"></a>

**To add the MSDTC option**

1. Create a JSON file, for example `msdtc-option.json`, with the following required parameters.

   ```
   {
     "OptionGroupName": "msdtc-se-2016",
     "OptionsToInclude": [
       {
         "OptionName": "MSDTC",
         "Port": 5000,
         "VpcSecurityGroupMemberships": ["sg-0abcdef123"],
         "OptionSettings": [
           { "Name": "AUTHENTICATION", "Value": "MUTUAL" },
           { "Name": "TRANSACTION_LOG_SIZE", "Value": "4" }
         ]
       }
     ],
     "ApplyImmediately": true
   }
   ```

1. Add the `MSDTC` option to the option group.  
**Example**  

   For Linux, macOS, or Unix:

   ```
   aws rds add-option-to-option-group \
       --cli-input-json file://msdtc-option.json \
       --apply-immediately
   ```

   For Windows:

   ```
   aws rds add-option-to-option-group ^
       --cli-input-json file://msdtc-option.json ^
       --apply-immediately
   ```

   No reboot is required.

## Creating the parameter group for MSDTC
<a name="MSDTC.CreateParamGroup"></a>

Create or modify a parameter group for the `in-doubt xact resolution` parameter that corresponds to the SQL Server edition and version of your DB instance.

### Console
<a name="CreateParamGroup.MSDTC.Console"></a>

The following example creates a parameter group for SQL Server Standard Edition 2016.

**To create the parameter group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Parameter groups**.

1. Choose **Create parameter group**.

1. In the **Create parameter group** pane, do the following:

   1. For **Parameter group family**, choose **sqlserver-se-13.0**.

   1. For **Group name**, enter an identifier for the parameter group, such as **msdtc-sqlserver-se-13**.

   1. For **Description**, enter **in-doubt xact resolution**.

1. Choose **Create**.

### CLI
<a name="CreateParamGroup.MSDTC.CLI"></a>

The following example creates a parameter group for SQL Server Standard Edition 2016.

**To create the parameter group**
+ Use one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-db-parameter-group \
      --db-parameter-group-name msdtc-sqlserver-se-13 \
      --db-parameter-group-family "sqlserver-se-13.0" \
      --description "in-doubt xact resolution"
  ```

  For Windows:

  ```
  aws rds create-db-parameter-group ^
      --db-parameter-group-name msdtc-sqlserver-se-13 ^
      --db-parameter-group-family "sqlserver-se-13.0" ^
      --description "in-doubt xact resolution"
  ```

## Modifying the parameter for MSDTC
<a name="ModifyParam.MSDTC"></a>

Modify the `in-doubt xact resolution` parameter in the parameter group that corresponds to the SQL Server edition and version of your DB instance.

For MSDTC, set the `in-doubt xact resolution` parameter to one of the following:
+ `1` – `Presume commit`. Any MSDTC in-doubt transactions are presumed to have committed.
+ `2` – `Presume abort`. Any MSDTC in-doubt transactions are presumed to have stopped.

For more information, see [in-doubt xact resolution server configuration option](https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/in-doubt-xact-resolution-server-configuration-option) in the Microsoft documentation.

### Console
<a name="ModifyParam.MSDTC.Console"></a>

The following example modifies the parameter group that you created for SQL Server Standard Edition 2016.

**To modify the parameter group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Parameter groups**.

1. Choose the parameter group, such as **msdtc-sqlserver-se-13**.

1. Under **Parameters**, filter the parameter list for **xact**.

1. Choose **in-doubt xact resolution**.

1. Choose **Edit parameters**.

1. Enter **1** or **2**.

1. Choose **Save changes**.

### CLI
<a name="ModifyParam.MSDTC.CLI"></a>

The following example modifies the parameter group that you created for SQL Server Standard Edition 2016.

**To modify the parameter group**
+ Use one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-parameter-group \
      --db-parameter-group-name msdtc-sqlserver-se-13 \
      --parameters "ParameterName='in-doubt xact resolution',ParameterValue=1,ApplyMethod=immediate"
  ```

  For Windows:

  ```
  aws rds modify-db-parameter-group ^
      --db-parameter-group-name msdtc-sqlserver-se-13 ^
      --parameters "ParameterName='in-doubt xact resolution',ParameterValue=1,ApplyMethod=immediate"
  ```

## Associating the option group and parameter group with the DB instance
<a name="MSDTC.Apply"></a>

You can use the AWS Management Console or the AWS CLI to associate the MSDTC option group and parameter group with the DB instance.

### Console
<a name="MSDTC.Apply.Console"></a>

You can associate the MSDTC option group and parameter group with a new or existing DB instance.
+ For a new DB instance, associate them when you launch the instance. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).
+ For an existing DB instance, associate them by modifying the instance. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).
**Note**  
If you use an existing domain-joined DB instance, it must already have an Active Directory domain and AWS Identity and Access Management (IAM) role associated with it. If you create a new domain-joined instance, specify an existing Active Directory domain and IAM role. For more information, see [Working with AWS Managed Active Directory with RDS for SQL Server](USER_SQLServerWinAuth.md).

### CLI
<a name="MSDTC.Apply.CLI"></a>

You can associate the MSDTC option group and parameter group with a new or existing DB instance.

**Note**  
If you use an existing domain-joined DB instance, it must already have an Active Directory domain and IAM role associated with it. If you create a new domain-joined instance, specify an existing Active Directory domain and IAM role. For more information, see [Working with AWS Managed Active Directory with RDS for SQL Server](USER_SQLServerWinAuth.md).

**To create a DB instance with the MSDTC option group and parameter group**
+ Specify the same DB engine type and major version as you used when creating the option group.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-db-instance \
      --db-instance-identifier mydbinstance \
      --db-instance-class db.m5.2xlarge \
      --engine sqlserver-se \
      --engine-version 13.00.5426.0.v1 \
      --allocated-storage 100 \
      --manage-master-user-password \
      --master-username admin \
      --storage-type gp2 \
      --license-model li \
      --domain-iam-role-name my-directory-iam-role \
      --domain my-domain-id \
      --option-group-name msdtc-se-2016 \
      --db-parameter-group-name msdtc-sqlserver-se-13
  ```

  For Windows:

  ```
  aws rds create-db-instance ^
      --db-instance-identifier mydbinstance ^
      --db-instance-class db.m5.2xlarge ^
      --engine sqlserver-se ^
      --engine-version 13.00.5426.0.v1 ^
      --allocated-storage 100 ^
      --manage-master-user-password ^
      --master-username admin ^
      --storage-type gp2 ^
      --license-model li ^
      --domain-iam-role-name my-directory-iam-role ^
      --domain my-domain-id ^
      --option-group-name msdtc-se-2016 ^
      --db-parameter-group-name msdtc-sqlserver-se-13
  ```

**To modify a DB instance and associate the MSDTC option group and parameter group**
+ Use one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-instance \
      --db-instance-identifier mydbinstance \
      --option-group-name msdtc-se-2016 \
      --db-parameter-group-name msdtc-sqlserver-se-13 \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds modify-db-instance ^
      --db-instance-identifier mydbinstance ^
      --option-group-name msdtc-se-2016 ^
      --db-parameter-group-name msdtc-sqlserver-se-13 ^
      --apply-immediately
  ```

## Modifying the MSDTC option
<a name="Appendix.SQLServer.Options.MSDTC.Modify"></a>

After you enable the `MSDTC` option, you can modify its settings. For information about how to modify option settings, see [Modifying an option setting](USER_WorkingWithOptionGroups.md#USER_WorkingWithOptionGroups.ModifyOption).

**Note**  
Some changes to MSDTC option settings require the MSDTC service to be restarted. This requirement can affect running distributed transactions.

## Using transactions
<a name="Appendix.SQLServer.Options.MSDTC.Using"></a>

### Using distributed transactions
<a name="Appendix.SQLServer.Options.MSDTC.UsingXA"></a>

In Amazon RDS for SQL Server, you run distributed transactions in the same way as distributed transactions running on-premises:
+ Using .NET Framework `System.Transactions` promotable transactions, which optimize distributed transactions by deferring their creation until they're needed.

  In this case, promotion is automatic and doesn't require any intervention on your part. If there's only one resource manager within the transaction, no promotion is performed. For more information about implicit transaction scopes, see [Implementing an implicit transaction using transaction scope](https://docs.microsoft.com/en-us/dotnet/framework/data/transactions/implementing-an-implicit-transaction-using-transaction-scope) in the Microsoft documentation.

  Promotable transactions are supported with these .NET implementations:
  + Starting with ADO.NET 2.0, `System.Data.SqlClient` supports promotable transactions with SQL Server. For more information, see [System.Transactions integration with SQL Server](https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/system-transactions-integration-with-sql-server) in the Microsoft documentation.
  + ODP.NET supports `System.Transactions`. A local transaction is created for the first connection opened within the `TransactionScope` to Oracle Database 11g release 1 (version 11.1) and later. When a second connection is opened, this transaction is automatically promoted to a distributed transaction. For more information about distributed transaction support in ODP.NET, see [Microsoft Distributed Transaction Coordinator integration](https://docs.oracle.com/en/database/oracle/oracle-data-access-components/18.3/ntmts/using-mts-with-oracledb.html) in the Oracle documentation.
+ Using the `BEGIN DISTRIBUTED TRANSACTION` statement. For more information, see [BEGIN DISTRIBUTED TRANSACTION (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/language-elements/begin-distributed-transaction-transact-sql) in the Microsoft documentation.
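As a simple illustration, the following Transact-SQL sketch updates a local database and a database on a second host inside one distributed transaction. The linked server `REMOTE_SRV` and the database, table, and column names are hypothetical; both hosts must have MSDTC enabled and be able to resolve each other's host names.

```
BEGIN DISTRIBUTED TRANSACTION;

-- Local update on this RDS for SQL Server DB instance.
UPDATE SalesDB.dbo.Orders
SET OrderStatus = 'Shipped'
WHERE OrderID = 1001;

-- Remote update through a linked server; MSDTC coordinates the commit.
UPDATE REMOTE_SRV.InventoryDB.dbo.Stock
SET Quantity = Quantity - 1
WHERE ItemID = 2001;

-- Both updates commit or roll back together.
COMMIT TRANSACTION;
```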

### Using XA transactions
<a name="MSDTC.XA"></a>

Starting with RDS for SQL Server 2017 version 14.00.3223.3, you can control distributed transactions using JDBC. When you set the `Enable XA` option setting to `true` in the `MSDTC` option, RDS automatically enables JDBC transactions and grants the `SqlJDBCXAUser` role to the `guest` user. This allows running distributed transactions through JDBC. For more information, including a code example, see [Understanding XA transactions](https://docs.microsoft.com/en-us/sql/connect/jdbc/understanding-xa-transactions) in the Microsoft documentation.

### Using transaction tracing
<a name="MSDTC.Tracing"></a>

RDS supports controlling MSDTC transaction traces and downloading them from the RDS DB instance for troubleshooting. You can control transaction tracing sessions by running the following RDS stored procedure.

```
exec msdb.dbo.rds_msdtc_transaction_tracing 'trace_action',
[@traceall='0|1'],
[@traceaborted='0|1'],
[@tracelong='0|1'];
```

The following parameter is required:
+ `trace_action` – The tracing action. It can be `START`, `STOP`, or `STATUS`.

The following parameters are optional:
+ `@traceall` – Set to 1 to trace all distributed transactions. The default is 0.
+ `@traceaborted` – Set to 1 to trace canceled distributed transactions. The default is 0.
+ `@tracelong` – Set to 1 to trace long-running distributed transactions. The default is 0.

**Example of START tracing action**  
To start a new transaction tracing session, run the following example statement.  

```
exec msdb.dbo.rds_msdtc_transaction_tracing 'START',
@traceall='0',
@traceaborted='1',
@tracelong='1';
```
Only one transaction tracing session can be active at one time. If a new tracing session `START` command is issued while a tracing session is active, an error is returned and the active tracing session remains unchanged.

**Example of STOP tracing action**  
To stop a transaction tracing session, run the following statement.  

```
exec msdb.dbo.rds_msdtc_transaction_tracing 'STOP'
```
This statement stops the active transaction tracing session and saves the transaction trace data into the log directory on the RDS DB instance. The first row of the output contains the overall result, and the following lines indicate details of the operation.  
The following is an example of a successful tracing session stop.  

```
OK: Trace session has been successfully stopped.
Setting log file to: D:\rdsdbdata\MSDTC\Trace\dtctrace.log
Examining D:\rdsdbdata\MSDTC\Trace\msdtctr.mof for message formats,  8 found.
Searching for TMF files on path: (null)
Logfile D:\rdsdbdata\MSDTC\Trace\dtctrace.log:
 OS version    10.0.14393  (Currently running on 6.2.9200)
 Start Time    <timestamp>
 End Time      <timestamp>
 Timezone is   @tzres.dll,-932 (Bias is 0mins)
 BufferSize            16384 B
 Maximum File Size     10 MB
 Buffers  Written      Not set (Logger may not have been stopped).
 Logger Mode Settings (11000002) ( circular paged
 ProcessorCount         1 
Processing completed   Buffers: 1, Events: 3, EventsLost: 0 :: Format Errors: 0, Unknowns: 3
Event traces dumped to d:\rdsdbdata\Log\msdtc_<timestamp>.log
```
You can use the detailed information to query the name of the generated log file. For more information about downloading log files from the RDS DB instance, see [Monitoring Amazon RDS log files](USER_LogAccess.md).  
The trace session logs remain on the instance for 35 days. Any older trace session logs are automatically deleted.

**Example of STATUS tracing action**  
To trace the status of a transaction tracing session, run the following statement.  

```
exec msdb.dbo.rds_msdtc_transaction_tracing 'STATUS'
```
This statement outputs the following as separate rows of the result set.  

```
OK
SessionStatus: <Started|Stopped>
TraceAll: <True|False>
TraceAborted: <True|False>
TraceLongLived: <True|False>
```
The first line indicates the overall result of the operation: `OK` or `ERROR` with details, if applicable. The subsequent lines indicate details about the tracing session status:   
+ `SessionStatus` can be one of the following:
  + `Started` if a tracing session is running.
  + `Stopped` if no tracing session is running.
+ The tracing session flags can be `True` or `False` depending on how they were set in the `START` command.

# Disabling MSDTC
<a name="Appendix.SQLServer.Options.MSDTC.Disable"></a>

To disable MSDTC, remove the `MSDTC` option from its option group.

## Console
<a name="Options.MSDTC.Disable.Console"></a>

**To remove the MSDTC option from its option group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose the option group with the `MSDTC` option (`msdtc-se-2016` in the previous examples).

1. Choose **Delete option**.

1. Under **Deletion options**, choose **MSDTC** for **Options to delete**.

1. Under **Apply immediately**, choose **Yes** to delete the option immediately, or **No** to delete it at the next maintenance window.

1. Choose **Delete**.

## CLI
<a name="Options.MSDTC.Disable.CLI"></a>

**To remove the MSDTC option from its option group**
+ Use one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds remove-option-from-option-group \
      --option-group-name msdtc-se-2016 \
      --options MSDTC \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds remove-option-from-option-group ^
      --option-group-name msdtc-se-2016 ^
      --options MSDTC ^
      --apply-immediately
  ```

# Troubleshooting MSDTC for RDS for SQL Server
<a name="Appendix.SQLServer.Options.MSDTC.Troubleshooting"></a>

In some cases, you might have trouble establishing a connection between MSDTC running on a client computer and the MSDTC service running on an RDS for SQL Server DB instance. If so, make sure of the following:
+ The inbound rules for the security group associated with the DB instance are configured correctly. For more information, see [Can't connect to Amazon RDS DB instance](CHAP_Troubleshooting.md#CHAP_Troubleshooting.Connecting).
+ Your client computer is configured correctly.
+ The MSDTC firewall rules on your client computer are enabled.

**To configure the client computer**

1. Open **Component Services**.

   Or, in **Server Manager**, choose **Tools**, and then choose **Component Services**.

1. Expand **Component Services**, expand **Computers**, expand **My Computer**, and then expand **Distributed Transaction Coordinator**.

1. Open the context (right-click) menu for **Local DTC** and choose **Properties**.

1. Choose the **Security** tab.

1. Choose all of the following:
   + **Network DTC Access**
   + **Allow Inbound**
   + **Allow Outbound**

1. Make sure that the correct authentication mode is chosen:
   + **Mutual Authentication Required** – The client machine is joined to the same domain as the other nodes participating in the distributed transaction, or a trust relationship is configured between the domains.
   + **No Authentication Required** – All other cases.

1. Choose **OK** to save your changes.

1. If prompted to restart the service, choose **Yes**.

**To enable MSDTC firewall rules**

1. Open Windows Firewall, then choose **Advanced settings**.

   Or, in **Server Manager**, choose **Tools**, and then choose **Windows Firewall with Advanced Security**.
**Note**  
Depending on your operating system, Windows Firewall might be called Windows Defender Firewall.

1. Choose **Inbound Rules** in the left pane.

1. Enable the following firewall rules, if they are not already enabled:
   + **Distributed Transaction Coordinator (RPC)**
   + **Distributed Transaction Coordinator (RPC)-EPMAP**
   + **Distributed Transaction Coordinator (TCP-In)**

1. Close Windows Firewall.
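If you prefer to script this step, the same built-in rule group can typically be enabled from an elevated command prompt. This assumes the default English display name of the rule group; localized versions of Windows use a translated name.

```
netsh advfirewall firewall set rule group="Distributed Transaction Coordinator" new enable=yes
```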

# Microsoft SQL Server resource governor with RDS for SQL Server
<a name="Appendix.SQLServer.Options.ResourceGovernor"></a>

Resource governor is a SQL Server Enterprise Edition feature that gives you precise control over your instance resources. It enables you to set specific limits on how workloads use CPU, memory, and physical I/O resources. With resource governor, you can:
+ Prevent resource monopolization in multi-tenant environments by managing how different workloads share instance resources
+ Deliver predictable performance by setting specific resource limits and priorities for different users and applications

You can enable resource governor on either an existing or new RDS for SQL Server DB instance.

Resource governor uses three fundamental concepts:
+ **Resource pool** - A container that manages your instance physical resources (CPU, memory, and I/O). You get two built-in pools (internal and default) and you can create additional custom pools.
+ **Workload group** - A container for database sessions with similar characteristics. Every workload group belongs to a resource pool. You get two built-in workload groups (internal and default) and you can create additional custom workload groups.
+ **Classification** - The process that determines which workload group handles incoming sessions based on user name, application name, database name or host name.

For additional details about resource governor functionality in SQL Server, see [Resource Governor](https://learn.microsoft.com/en-us/sql/relational-databases/resource-governor/resource-governor?view=sql-server-ver16) in the Microsoft documentation.
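You can see the two built-in pools and workload groups on your instance by querying the resource governor catalog views:

```
--Lists the built-in internal and default pools, plus any custom pools
SELECT pool_id, name FROM sys.resource_governor_resource_pools;

--Lists the built-in internal and default workload groups, plus any custom groups
SELECT group_id, name, pool_id FROM sys.resource_governor_workload_groups;
```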

**Contents**
+ [

## Supported versions and Regions
](#ResourceGovernor.SupportedVersions)
+ [

## Limitations and recommendations
](#ResourceGovernor.Limitations)
+ [

# Enabling Microsoft SQL Server resource governor for your RDS for SQL Server instance
](ResourceGovernor.Enabling.md)
  + [

## Creating the option group for `RESOURCE_GOVERNOR`
](ResourceGovernor.Enabling.md#ResourceGovernor.OptionGroup)
  + [

## Adding the `RESOURCE_GOVERNOR` option to the option group
](ResourceGovernor.Enabling.md#ResourceGovernor.Add)
  + [

## Associating the option group with your DB instance
](ResourceGovernor.Enabling.md#ResourceGovernor.Apply)
+ [

# Using Microsoft SQL Server resource governor for your RDS for SQL Server instance
](ResourceGovernor.Using.md)
  + [

## Manage resource pool
](ResourceGovernor.Using.md#ResourceGovernor.ManageResourcePool)
    + [

### Create resource Pool
](ResourceGovernor.Using.md#ResourceGovernor.CreateResourcePool)
    + [

### Alter resource pool
](ResourceGovernor.Using.md#ResourceGovernor.AlterResourcePool)
    + [

### Drop resource pool
](ResourceGovernor.Using.md#ResourceGovernor.DropResourcePool)
  + [

## Manage workload groups
](ResourceGovernor.Using.md#ResourceGovernor.ManageWorkloadGroups)
    + [

### Create workload group
](ResourceGovernor.Using.md#ResourceGovernor.CreateWorkloadGroup)
    + [

### Alter workload group
](ResourceGovernor.Using.md#ResourceGovernor.AlterWorkloadGroup)
    + [

### Drop workload group
](ResourceGovernor.Using.md#ResourceGovernor.DropWorkloadGroup)
  + [

## Create and register classifier function
](ResourceGovernor.Using.md#ResourceGovernor.ClassifierFunction)
  + [

## Drop classifier function
](ResourceGovernor.Using.md#ResourceGovernor.DropClassifier)
  + [

## De-register classifier function
](ResourceGovernor.Using.md#ResourceGovernor.DeregisterClassifier)
  + [

## Reset statistics
](ResourceGovernor.Using.md#ResourceGovernor.ResetStats)
  + [

## Resource governor configuration changes
](ResourceGovernor.Using.md#ResourceGovernor.ConfigChanges)
  + [

## Bind TempDB to a resource pool
](ResourceGovernor.Using.md#ResourceGovernor.BindTempDB)
  + [

## Unbind TempDB from a resource pool
](ResourceGovernor.Using.md#ResourceGovernor.UnbindTempDB)
  + [

## Cleanup resource governor
](ResourceGovernor.Using.md#ResourceGovernor.Cleanup)
+ [

## Considerations for Multi-AZ deployment
](#ResourceGovernor.Considerations)
+ [

## Considerations for read replicas
](#ResourceGovernor.ReadReplica)
+ [

# Monitor Microsoft SQL Server resource governor using system views for your RDS for SQL Server instance
](ResourceGovernor.Monitoring.md)
  + [

## Resource pool runtime statistics
](ResourceGovernor.Monitoring.md#ResourceGovernor.ResourcePoolStats)
+ [

# Disabling Microsoft SQL Server resource governor for your RDS for SQL Server instance
](ResourceGovernor.Disabling.md)
+ [

# Best practices for configuring resource governor on RDS for SQL Server
](ResourceGovernor.BestPractices.md)

## Supported versions and Regions
<a name="ResourceGovernor.SupportedVersions"></a>

Amazon RDS supports resource governor for the following SQL Server versions and editions in all AWS Regions where RDS for SQL Server is available:
+ SQL Server 2022 Developer and Enterprise Editions
+ SQL Server 2019 Enterprise Edition
+ SQL Server 2017 Enterprise Edition
+ SQL Server 2016 Enterprise Edition
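To confirm that your DB instance runs an edition and version that supports resource governor, you can query the engine directly:

```
--Returns the engine edition and product version
SELECT SERVERPROPERTY('Edition') AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;
```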

## Limitations and recommendations
<a name="ResourceGovernor.Limitations"></a>

The following limitations and recommendations apply to resource governor:
+ Edition and service restrictions:
  + Available only in SQL Server Enterprise Edition.
  + Resource management is limited to the SQL Server Database Engine. Resource governor for Analysis Services, Integration Services, and Reporting Services isn't supported.
+ Configuration restrictions:
  + Must use Amazon RDS stored procedures for all configurations.
  + Native DDL statements and SQL Server Management Studio GUI configurations aren't supported.
+ Resource pool parameters:
  + Pool names starting with `rds_` aren't supported.
  + Internal and default resource pool modifications aren't permitted.
  + For user-defined resource pools, the following parameters aren't supported:
    + `MIN_MEMORY_PERCENT`
    + `MIN_CPU_PERCENT`
    + `MIN_IOPS_PER_VOLUME`
    + `AFFINITY`
+ Workload group parameters:
  + Workload group names starting with `rds_` aren't supported.
  + Internal workload group modification isn't permitted.
  + For the default workload group:
    + Only the `REQUEST_MAX_MEMORY_GRANT_PERCENT` parameter can be modified.
    + `REQUEST_MAX_MEMORY_GRANT_PERCENT` must be between 1 and 70.
    + All other parameters are locked and can't be changed.
  + User-defined workload groups allow modification of all parameters.
+ Classifier function limitations:
  + The classifier function routes connections to custom workload groups based on specified criteria (user name, database, host, or application name).
  + Supports up to two user-defined workload groups with their respective routing conditions.
  + Combines criteria with `AND` conditions within each group.
  + Requires at least one routing criterion per workload group.
  + Only the classification methods listed above are supported.
  + Function name must start with `rg_classifier_`.
  + Default group assignment if no conditions match.

# Enabling Microsoft SQL Server resource governor for your RDS for SQL Server instance
<a name="ResourceGovernor.Enabling"></a>

Enable resource governor by adding the `RESOURCE_GOVERNOR` option to your RDS for SQL Server DB instance. Use the following process:

1. Create a new option group, or choose an existing option group.

1. Add the `RESOURCE_GOVERNOR` option to the option group.

1. Associate the option group with the DB instance.

**Note**  
Enabling resource governor through an option group doesn't require a reboot.

## Creating the option group for `RESOURCE_GOVERNOR`
<a name="ResourceGovernor.OptionGroup"></a>

To enable resource governor, create an option group or modify an option group that corresponds to the SQL Server edition and version of the DB instance that you plan to use. To complete this procedure, use the AWS Management Console or the AWS CLI.

### Console
<a name="ResourceGovernor.OptionGroup.Console"></a>

Use the following procedure to create an option group for SQL Server Enterprise Edition 2022.

**To create the option group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose **Create group**.

1. In the **Create option group** window, do the following:

   1. For **Name**, enter a name for the option group that is unique within your AWS account, such as **resource-governor-ee-2022**. The name can contain only letters, digits, and hyphens.

   1. For **Description**, enter a brief description of the option group, such as **RESOURCE\_GOVERNOR option group for SQL Server EE 2022**. The description is used for display purposes.

   1. For **Engine**, choose **sqlserver-ee**.

   1. For **Major engine version**, choose **16.00**.

1. Choose **Create**.

### CLI
<a name="ResourceGovernor.OptionGroup.CLI"></a>

The following procedure creates an option group for SQL Server Enterprise Edition 2022.

**To create the option group**
+ Run one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-option-group \
      --option-group-name resource-governor-ee-2022 \
      --engine-name sqlserver-ee \
      --major-engine-version 16.00 \
      --option-group-description "RESOURCE_GOVERNOR option group for SQL Server EE 2022"
  ```

  For Windows:

  ```
  aws rds create-option-group ^
      --option-group-name resource-governor-ee-2022 ^
      --engine-name sqlserver-ee ^
      --major-engine-version 16.00 ^
      --option-group-description "RESOURCE_GOVERNOR option group for SQL Server EE 2022"
  ```

## Adding the `RESOURCE_GOVERNOR` option to the option group
<a name="ResourceGovernor.Add"></a>

Next, use the AWS Management Console or the AWS CLI to add the `RESOURCE_GOVERNOR` option to your option group.

### Console
<a name="ResourceGovernor.Add.Console"></a>

**To add the RESOURCE\_GOVERNOR option**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose the option group that you just created, **resource-governor-ee-2022** in this example.

1. Choose **Add option**.

1. Under **Option details**, choose **RESOURCE\_GOVERNOR** for **Option name**.

1. Under **Scheduling**, choose whether to add the option immediately or at the next maintenance window.

1. Choose **Add option**.

### CLI
<a name="ResourceGovernor.Add.CLI"></a>

**To add the `RESOURCE_GOVERNOR` option**
+ Add the `RESOURCE_GOVERNOR` option to the option group.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds add-option-to-option-group \
      --option-group-name resource-governor-ee-2022 \
      --options "OptionName=RESOURCE_GOVERNOR" \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds add-option-to-option-group ^
      --option-group-name resource-governor-ee-2022 ^
      --options "OptionName=RESOURCE_GOVERNOR" ^
      --apply-immediately
  ```

## Associating the option group with your DB instance
<a name="ResourceGovernor.Apply"></a>

To associate the `RESOURCE_GOVERNOR` option group with your DB instance, use the AWS Management Console or the AWS CLI.

### Console
<a name="ResourceGovernor.Apply.Console"></a>

To finish activating resource governor, associate your `RESOURCE_GOVERNOR` option group with a new or existing DB instance:
+ For a new DB instance, associate the option group when you launch the instance. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).
+ For an existing DB instance, associate the option group by modifying the instance. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

### CLI
<a name="ResourceGovernor.Apply.CLI"></a>

You can associate the `RESOURCE_GOVERNOR` option group with a new or existing DB instance.

**To create an instance with the `RESOURCE_GOVERNOR` option group**
+ Specify the same DB engine type and major version that you used when creating the option group.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds create-db-instance \
      --db-instance-identifier mytestsqlserverresourcegovernorinstance \
      --db-instance-class db.m5.2xlarge \
      --engine sqlserver-ee \
      --engine-version 16.00 \
      --license-model license-included \
      --allocated-storage 100 \
      --master-username admin \
      --master-user-password password \
      --storage-type gp2 \
      --option-group-name resource-governor-ee-2022
  ```

  For Windows:

  ```
  aws rds create-db-instance ^
      --db-instance-identifier mytestsqlserverresourcegovernorinstance ^
      --db-instance-class db.m5.2xlarge ^
      --engine sqlserver-ee ^
      --engine-version 16.00 ^
      --license-model license-included ^
      --allocated-storage 100 ^
      --master-username admin ^
      --master-user-password password ^
      --storage-type gp2 ^
      --option-group-name resource-governor-ee-2022
  ```

**To modify an instance and associate the `RESOURCE_GOVERNOR` option group**
+ Run one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-instance \
      --db-instance-identifier mytestinstance \
      --option-group-name resource-governor-ee-2022 \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds modify-db-instance ^
      --db-instance-identifier mytestinstance ^
      --option-group-name resource-governor-ee-2022 ^
      --apply-immediately
  ```

# Using Microsoft SQL Server resource governor for your RDS for SQL Server instance
<a name="ResourceGovernor.Using"></a>

After you add the resource governor option to your option group, resource governor isn't yet active at the database engine level. To fully enable resource governor, use the RDS for SQL Server stored procedures to turn it on and create the necessary resource governor objects.

First, connect to your SQL Server database, then call the appropriate RDS for SQL Server stored procedures to complete the configuration. For instructions on connecting to your database, see [Connecting to your Microsoft SQL Server DB instance](USER_ConnectToMicrosoftSQLServerInstance.md).

For instructions on how to call each stored procedure, see the following topics:

**Topics**
+ [

## Manage resource pool
](#ResourceGovernor.ManageResourcePool)
+ [

## Manage workload groups
](#ResourceGovernor.ManageWorkloadGroups)
+ [

## Create and register classifier function
](#ResourceGovernor.ClassifierFunction)
+ [

## Drop classifier function
](#ResourceGovernor.DropClassifier)
+ [

## De-register classifier function
](#ResourceGovernor.DeregisterClassifier)
+ [

## Reset statistics
](#ResourceGovernor.ResetStats)
+ [

## Resource governor configuration changes
](#ResourceGovernor.ConfigChanges)
+ [

## Bind TempDB to a resource pool
](#ResourceGovernor.BindTempDB)
+ [

## Unbind TempDB from a resource pool
](#ResourceGovernor.UnbindTempDB)
+ [

## Cleanup resource governor
](#ResourceGovernor.Cleanup)

## Manage resource pool
<a name="ResourceGovernor.ManageResourcePool"></a>

### Create resource pool
<a name="ResourceGovernor.CreateResourcePool"></a>

After the `RESOURCE_GOVERNOR` option is enabled in the option group, you can create custom resource pools by using `rds_create_resource_pool`. These pools let you allocate specific percentages of CPU, memory, and IOPS to different workloads.

**Usage**

```
USE [msdb]
EXEC dbo.rds_create_resource_pool    
    @pool_name=value,
    @MAX_CPU_PERCENT=value,
    @CAP_CPU_PERCENT=value,
    @MAX_MEMORY_PERCENT=value,
    @MAX_IOPS_PER_VOLUME=value
```

The following parameter is required:
+ `@pool_name` - Is the user-defined name for the resource pool. *pool\_name* is alphanumeric, can be up to 128 characters, must be unique within a Database Engine instance, and must comply with the rules for database identifiers.

The following parameters are optional:
+ `@MAX_CPU_PERCENT` - Specifies the maximum average CPU bandwidth that all requests in resource pool receive when there's CPU contention. *value* is an integer with a default setting of 100. The allowed range for *value* is from 1 through 100.
+ `@CAP_CPU_PERCENT` - Specifies a hard cap on the CPU bandwidth that all requests in the resource pool receive. Limits the maximum CPU bandwidth level to be the same as the specified value. *value* is an integer with a default setting of 100. The allowed range for *value* is from 1 through 100.
+ `@MAX_MEMORY_PERCENT` - Specifies the maximum amount of query workspace memory that requests in this resource pool can use. *value* is an integer with a default setting of 100. The allowed range for *value* is from 1 through 100.
+ `@MAX_IOPS_PER_VOLUME` - Specifies the maximum I/O operations per second (IOPS) per disk volume to allow for the resource pool. The allowed range for *value* is from 0 through 2^31-1 (2,147,483,647). Specify 0 to remove an IOPS limit for the pool. The default is 0.

**Examples**

Example of creating resource pool with all default values:

```
--This creates resource pool 'SalesPool' with all default values
USE [msdb]
EXEC rds_create_resource_pool @pool_name = 'SalesPool';
     
--Apply changes
USE [msdb]
EXEC msdb.dbo.rds_alter_resource_governor_configuration;
     
--Validate configuration
select * from sys.resource_governor_resource_pools
```

Example of creating resource pool with different parameters specified:

```
--creates resource pool
USE [msdb]
EXEC dbo.rds_create_resource_pool    
@pool_name='analytics',
@MAX_CPU_PERCENT = 30,
@CAP_CPU_PERCENT = 40,
@MAX_MEMORY_PERCENT = 20;
            
--Apply changes
EXEC msdb.dbo.rds_alter_resource_governor_configuration;
    
--Validate configuration
select * from sys.resource_governor_resource_pools
```

### Alter resource pool
<a name="ResourceGovernor.AlterResourcePool"></a>

**Usage**

```
USE [msdb]
EXEC dbo.rds_alter_resource_pool    
    @pool_name=value,
    @MAX_CPU_PERCENT=value,
    @CAP_CPU_PERCENT=value,
    @MAX_MEMORY_PERCENT=value,
    @MAX_IOPS_PER_VOLUME=value;
```

The following parameters are required:
+ `@pool_name` - Is the name of an existing user-defined resource pool. Altering the default resource pool isn't allowed in RDS for SQL Server.

At least one of the following optional parameters must be specified:
+ `@MAX_CPU_PERCENT` - Specifies the maximum average CPU bandwidth that all requests in resource pool receive when there's CPU contention. *value* is an integer with a default setting of 100. The allowed range for *value* is from 1 through 100.
+ `@CAP_CPU_PERCENT` - Specifies a hard cap on the CPU bandwidth that all requests in the resource pool receive. Limits the maximum CPU bandwidth level to be the same as the specified value. *value* is an integer with a default setting of 100. The allowed range for *value* is from 1 through 100.
+ `@MAX_MEMORY_PERCENT` - Specifies the maximum amount of query workspace memory that requests in this resource pool can use. *value* is an integer with a default setting of 100. The allowed range for *value* is from 1 through 100.
+ `@MAX_IOPS_PER_VOLUME` - Specifies the maximum I/O operations per second (IOPS) per disk volume to allow for the resource pool. The allowed range for *value* is from 0 through 2^31-1 (2,147,483,647). Specify 0 to remove an IOPS limit for the pool. The default is 0.

**Examples**

```
--This alters resource pool
USE [msdb]
EXEC dbo.rds_alter_resource_pool    
    @pool_name='analytics',
    @MAX_CPU_PERCENT = 10,
    @CAP_CPU_PERCENT = 20,
    @MAX_MEMORY_PERCENT = 50;

--Apply changes
EXEC msdb.dbo.rds_alter_resource_governor_configuration;

--Validate configuration.
select * from sys.resource_governor_resource_pools
```

### Drop resource pool
<a name="ResourceGovernor.DropResourcePool"></a>

**Usage**

```
USE [msdb]
EXEC dbo.rds_drop_resource_pool    
@pool_name=value;
```

The following parameter is required:
+ `@pool_name` - Is the name of an existing user-defined resource pool.

**Note**  
Dropping the internal or default resource pool isn't allowed in SQL Server.

**Examples**

```
--This drops resource pool
USE [msdb]
EXEC dbo.rds_drop_resource_pool    
@pool_name='analytics'

--Apply changes
EXEC msdb.dbo.rds_alter_resource_governor_configuration;

--Validate configuration
select * from sys.resource_governor_resource_pools
```
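A user-defined pool that still has workload groups assigned to it generally can't be dropped, so drop or reassign those groups first. A minimal cleanup sequence might look like the following; the `analytics` names are illustrative.

```
--Drop the workload group that references the pool
USE [msdb]
EXEC dbo.rds_drop_workload_group @group_name = 'analytics';

--Then drop the pool itself
EXEC dbo.rds_drop_resource_pool @pool_name = 'analytics';

--Apply both changes
EXEC msdb.dbo.rds_alter_resource_governor_configuration;
```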

## Manage workload groups
<a name="ResourceGovernor.ManageWorkloadGroups"></a>

Workload groups, created and managed with `rds_create_workload_group` and `rds_alter_workload_group`, allow you to set importance levels, memory grants, and other parameters for groups of queries.

### Create workload group
<a name="ResourceGovernor.CreateWorkloadGroup"></a>

**Usage**

```
USE [msdb]
EXEC dbo.rds_create_workload_group 
@group_name = value, 
@IMPORTANCE ={ LOW | MEDIUM | HIGH }, 
@REQUEST_MAX_MEMORY_GRANT_PERCENT =value, 
@REQUEST_MAX_CPU_TIME_SEC = value , 
@REQUEST_MEMORY_GRANT_TIMEOUT_SEC = value, 
@MAX_DOP = value, 
@GROUP_MAX_REQUESTS = value, 
@pool_name = value
```

The following parameter is required:
+ `@group_name` - Is the user-defined name for the workload group.

The following parameters are optional:
+ `@IMPORTANCE` - Specifies the relative importance of a request in the workload group. The default value is `MEDIUM`.
+ `@REQUEST_MAX_MEMORY_GRANT_PERCENT` - Specifies the maximum amount of query workspace memory that a single request can take from the pool. *value* is a percentage of the resource pool size defined by `MAX_MEMORY_PERCENT`. Default value is 25.
+ `@REQUEST_MAX_CPU_TIME_SEC` - Specifies the maximum amount of CPU time, in seconds, that a batch request can use. *value* must be 0 or a positive integer. The default setting for *value* is 0, which means unlimited.
+ `@REQUEST_MEMORY_GRANT_TIMEOUT_SEC` - Specifies the maximum time, in seconds, that a query can wait for a memory grant from the query workspace memory to become available. *value* must be 0 or a positive integer. The default setting for *value*, 0, uses an internal calculation based on query cost to determine the maximum time.
+ `@MAX_DOP` - Specifies the maximum degree of parallelism (`MAXDOP`) for parallel query execution. The allowed range for *value* is from 0 through 64. The default setting for *value*, 0, uses the global setting.
+ `@GROUP_MAX_REQUESTS` - Specifies the maximum number of simultaneous requests that are allowed to execute in the workload group. *value* must be 0 or a positive integer. The default setting for *value* is 0, and allows unlimited requests.
+ `@pool_name` - Associates the workload group with the user-defined resource pool identified by *pool\_name*, or with the `default` resource pool. If *pool\_name* isn't provided, the workload group is associated with the built-in `default` pool.

**Examples**

```
--This creates workload group named 'analytics'
USE msdb;
EXEC dbo.rds_create_workload_group 
    @group_name = 'analytics',
    @IMPORTANCE = 'HIGH',
    @REQUEST_MAX_MEMORY_GRANT_PERCENT = 25, 
    @REQUEST_MAX_CPU_TIME_SEC = 0, 
    @REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 0, 
    @MAX_DOP = 0, 
    @GROUP_MAX_REQUESTS = 0, 
    @pool_name = 'analytics';

--Apply changes
EXEC msdb.dbo.rds_alter_resource_governor_configuration;
  
--Validate configuration
select * from sys.resource_governor_workload_groups
```

### Alter workload group
<a name="ResourceGovernor.AlterWorkloadGroup"></a>

**Usage**

```
EXEC msdb.dbo.rds_alter_workload_group
    @group_name = value,
    @IMPORTANCE = 'LOW|MEDIUM|HIGH',
    @REQUEST_MAX_MEMORY_GRANT_PERCENT = value,
    @REQUEST_MAX_CPU_TIME_SEC = value,
    @REQUEST_MEMORY_GRANT_TIMEOUT_SEC = value,
    @MAX_DOP = value,
    @GROUP_MAX_REQUESTS = value,
    @pool_name = value
```

The following parameter is required:
+ `@group_name` - Is the name of the default workload group or of an existing user-defined workload group.

**Note**  
For the default workload group, only the `REQUEST_MAX_MEMORY_GRANT_PERCENT` parameter can be modified, and it must be between 1 and 70. No other parameters can be modified in the default workload group. All parameters can be modified in a user-defined workload group.

The following parameters are optional:
+ `@IMPORTANCE` - Specifies the relative importance of a request in the workload group. The default value is MEDIUM.
+ `@REQUEST_MAX_MEMORY_GRANT_PERCENT` - Specifies the maximum amount of query workspace memory that a single request can take from the pool. *value* is a percentage of the resource pool size defined by `MAX_MEMORY_PERCENT`. Default value is 25. On Amazon RDS, `REQUEST_MAX_MEMORY_GRANT_PERCENT` must be between 1 and 70.
+ `@REQUEST_MAX_CPU_TIME_SEC` - Specifies the maximum amount of CPU time, in seconds, that a batch request can use. *value* must be 0 or a positive integer. The default setting for *value* is 0, which means unlimited.
+ `@REQUEST_MEMORY_GRANT_TIMEOUT_SEC` - Specifies the maximum time, in seconds, that a query can wait for a memory grant from the query workspace memory to become available. *value* must be 0 or a positive integer. The default setting for *value*, 0, uses an internal calculation based on query cost to determine the maximum time.
+ `@MAX_DOP` - Specifies the maximum degree of parallelism (MAXDOP) for parallel query execution. The allowed range for *value* is from 0 through 64. The default setting for *value*, 0, uses the global setting.
+ `@GROUP_MAX_REQUESTS` - Specifies the maximum number of simultaneous requests that are allowed to execute in the workload group. *value* must be 0 or a positive integer. The default setting for *value* is 0, and allows unlimited requests.
+ `@pool_name` - Associates the workload group with the user-defined resource pool identified by *pool\_name*.

**Examples**

Example of modifying `REQUEST_MAX_MEMORY_GRANT_PERCENT` for the default workload group:

```
--Modify default workload group (set memory grant cap to 10%)
USE msdb
EXEC dbo.rds_alter_workload_group    
    @group_name = 'default',
    @REQUEST_MAX_MEMORY_GRANT_PERCENT=10;
    
--Apply changes
EXEC msdb.dbo.rds_alter_resource_governor_configuration;

--Validate configuration
SELECT * FROM sys.resource_governor_workload_groups WHERE name='default';
```

Example of modifying a non-default workload group:

```
EXEC msdb.dbo.rds_alter_workload_group    
    @group_name = 'analytics',
    @IMPORTANCE = 'HIGH',
    @REQUEST_MAX_MEMORY_GRANT_PERCENT = 30,
    @REQUEST_MAX_CPU_TIME_SEC = 3600,
    @REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 60,
    @MAX_DOP = 4,
    @GROUP_MAX_REQUESTS = 100;

--Apply changes
EXEC msdb.dbo.rds_alter_resource_governor_configuration;
```

Example of moving a non-default workload group to another resource pool:

```
EXEC msdb.dbo.rds_alter_workload_group    
@group_name = 'analytics',
@pool_name='abc'

--Apply changes
EXEC msdb.dbo.rds_alter_resource_governor_configuration;

--Validate configuration
select * from sys.resource_governor_workload_groups
```

### Drop workload group
<a name="ResourceGovernor.DropWorkloadGroup"></a>

**Usage**

```
EXEC msdb.dbo.rds_drop_workload_group    
@group_name = value
```

The following parameter is required:
+ `@group_name` - Is the name of an existing user-defined workload group.

**Examples**

```
--Drops a Workload Group:
EXEC msdb.dbo.rds_drop_workload_group    
@group_name = 'analytics';

--Apply changes
EXEC msdb.dbo.rds_alter_resource_governor_configuration;

--Validate configuration
select * from sys.resource_governor_workload_groups
```

## Create and register classifier function
<a name="ResourceGovernor.ClassifierFunction"></a>

This procedure creates a resource governor classifier function in the master database that routes connections to custom workload groups based on specified criteria (user name, database, host, or application name). If resource governor is enabled and a classifier function is specified in the resource governor configuration, the function output determines the workload group used for new sessions. In the absence of a classifier function, all sessions are classified into the `default` group.

**Features:**
+ Supports up to two workload groups with their respective routing conditions.
+ Combines criteria with `AND` conditions within each group.
+ Requires at least one routing criterion per workload group.
+ Function name must start with `rg_classifier_`.
+ Default group assignment if no conditions match.

The classifier function has the following characteristics and behaviors:
+ The function is defined in the server scope (in the master database).
+ The function is defined with schema binding.
+ The function is evaluated for every new session, even when connection pooling is enabled.
+ The function returns the workload group context for the session. The session is assigned to the workload group returned by the classifier for the lifetime of the session.
+ If the function returns NULL, default, or the name of a nonexistent workload group, the session is given the default workload group context. The session is also given the default context if the function fails for any reason.
+ You can create multiple classifier functions. However, SQL Server allows only one classifier function to be registered at a time.
+ The classifier function can't be dropped unless its classifier status is removed using the de-register procedure (`EXEC msdb.dbo.rds_alter_resource_governor_configuration @deregister_function = 1;`), which sets the registered function name to NULL, or another classifier function is registered using `EXEC msdb.dbo.rds_alter_resource_governor_configuration @classifier_function = <function_name>;`.
+ In the absence of a classifier function, all sessions are classified into the default group.
+ You can't modify a classifier function while it is referenced in the resource governor configuration. However, you can modify the configuration to use a different classifier function. If you want to make changes to the classifier, consider creating a pair of classifier functions. For example, you might create `rg_classifier_a` and `rg_classifier_b`.
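The swap pattern described above can be sketched as follows. This is an illustrative sequence, not a prescribed procedure: it assumes `rg_classifier_a` is the currently registered function, and that the workload group `reporting_group` already exists.

```
--Create the replacement classifier
EXEC msdb.dbo.rds_create_classifier_function
@function_name = 'rg_classifier_b',
@workload_group1 = 'reporting_group',
@app_name1 = 'PowerBI';

--Register the replacement; the old function is no longer referenced
EXEC msdb.dbo.rds_alter_resource_governor_configuration
@classifier_function = 'rg_classifier_b';

--Apply changes
EXEC msdb.dbo.rds_alter_resource_governor_configuration;

--The previous classifier can now be dropped
EXEC msdb.dbo.rds_drop_classifier_function
@function_name = 'rg_classifier_a';
```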

**Usage**

```
EXEC msdb.dbo.rds_create_classifier_function 
@function_name = value,
@workload_group1 = value, 
@user_name1 = value,
@db_name1 = value,
@host_name1 = value, 
@app_name1 = value, 
@workload_group2 = value,
@user_name2 = value,
@db_name2 = value,
@host_name2 = value,
@app_name2 = value
```

The following parameters are required:
+ `@function_name` - Name of the classifier function. Must start with `rg_classifier_`.
+ `@workload_group1` - Name of the first workload group.

The following parameters are optional:

(At least one of these criteria must be specified for group 1)
+ `@user_name1` - Login name for group 1
+ `@db_name1` - Database name for group 1
+ `@host_name1` - Host name for group 1
+ `@app_name1` - Application name for group 1

(If group 2 is specified, at least one criterion must be provided)
+ `@workload_group2` - Name of the second workload group
+ `@user_name2` - Login name for group 2
+ `@db_name2` - Database name for group 2
+ `@host_name2` - Host name for group 2
+ `@app_name2` - Application name for group 2

**Note**  
System accounts, databases, applications, and hosts are restricted.

**Examples**

Basic Example with One Workload Group:

```
/*Create a classifier to route all requests from 'PowerBI' app to workload group 
'reporting_group'*/

EXEC msdb.dbo.rds_create_classifier_function
@function_name = 'rg_classifier_a',
@workload_group1 = 'reporting_group',
@app_name1 = 'PowerBI';

--Register the classifier
EXEC msdb.dbo.rds_alter_resource_governor_configuration
@classifier_function = 'rg_classifier_a';

-- Apply changes
EXEC msdb.dbo.rds_alter_resource_governor_configuration;

/*Query sys.resource_governor_configuration to validate that resource governor is enabled and is using the classifier function we created and registered*/

use master
go
SELECT OBJECT_SCHEMA_NAME(classifier_function_id) AS classifier_schema_name,
       OBJECT_NAME(classifier_function_id) AS classifier_object_name,
       is_enabled
FROM sys.resource_governor_configuration;
```
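A second workload group can be routed in the same call. The following sketch is illustrative only: it assumes the workload groups `etl_group` and `reporting_group` already exist, and the login, database, and application names are placeholders for your own values.

```
/*Route login 'etl_user' on database 'sales' to 'etl_group', and the
'Reporting' application to 'reporting_group'*/

EXEC msdb.dbo.rds_create_classifier_function
@function_name = 'rg_classifier_two_groups',
@workload_group1 = 'etl_group',
@user_name1 = 'etl_user',
@db_name1 = 'sales',
@workload_group2 = 'reporting_group',
@app_name2 = 'Reporting';

--Register the classifier
EXEC msdb.dbo.rds_alter_resource_governor_configuration
@classifier_function = 'rg_classifier_two_groups';

--Apply changes
EXEC msdb.dbo.rds_alter_resource_governor_configuration;
```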

## Drop classifier function
<a name="ResourceGovernor.DropClassifier"></a>

**Usage**

```
USE [msdb]
EXEC dbo.rds_drop_classifier_function
@function_name = value;
```

The following parameter is required:
+ `@function_name` - Is the name of an existing user-defined classifier function

**Example**

```
EXEC msdb.dbo.rds_drop_classifier_function
@function_name = 'rg_classifier_b';
```

## De-register classifier function
<a name="ResourceGovernor.DeregisterClassifier"></a>

Use this procedure to de-register the classifier function. After the function is de-registered, new sessions are automatically assigned to the default workload group.

**Usage**

```
USE [msdb]
EXEC dbo.rds_alter_resource_governor_configuration    
@deregister_function = 1;
```

For de-registration, the following parameter is required:
+ `@deregister_function` must be 1

**Example**

```
EXEC msdb.dbo.rds_alter_resource_governor_configuration 
    @deregister_function = 1;
GO

-- Apply changes
EXEC msdb.dbo.rds_alter_resource_governor_configuration;
```

## Reset statistics
<a name="ResourceGovernor.ResetStats"></a>

Resource governor statistics are cumulative since the last server restart. If you need to collect statistics starting from a certain time, you can reset statistics using the following Amazon RDS stored procedure.

**Usage**

```
USE [msdb]
EXEC dbo.rds_alter_resource_governor_configuration  
@reset_statistics = 1;
```

To reset statistics, the following parameter is required:
+ `@reset_statistics` must be 1
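For example, you can reset statistics and then confirm the new collection window by checking the `statistics_start_time` column of the resource governor dynamic management views:

```
USE [msdb]
EXEC dbo.rds_alter_resource_governor_configuration
@reset_statistics = 1;

--Validate: statistics_start_time reflects the reset
SELECT pool_id, name, statistics_start_time
FROM sys.dm_resource_governor_resource_pools;
```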

## Resource governor configuration changes
<a name="ResourceGovernor.ConfigChanges"></a>

When resource governor isn't enabled, `rds_alter_resource_governor_configuration` enables resource governor. Enabling resource governor has the following results:
+ The classifier function, if any, is executed for new sessions, assigning them to workload groups.
+ The resource limits that are specified in the resource governor configuration are honored and enforced.
+ Requests that existed before enabling resource governor might be affected by any configuration changes made while resource governor is enabled.
+ On RDS for SQL Server, `EXEC msdb.dbo.rds_alter_resource_governor_configuration` must be executed for any resource governor configuration changes to take effect. 

**Usage**

```
USE [msdb]
EXEC dbo.rds_alter_resource_governor_configuration
```

## Bind TempDB to a resource pool
<a name="ResourceGovernor.BindTempDB"></a>

You can bind tempdb memory-optimized metadata to a specific resource pool by using `rds_bind_tempdb_metadata_to_resource_pool` on Amazon RDS for SQL Server version 2019 and later.

**Note**  
The memory-optimized tempdb metadata feature must be enabled before binding tempdb metadata to a resource pool. On Amazon RDS, this feature is controlled by the static parameter `tempdb metadata memory-optimized`.

Enable the static parameter on Amazon RDS and perform a reboot without failover for the parameter to take effect:

```
aws rds modify-db-parameter-group \
    --db-parameter-group-name test-sqlserver-ee-2022 \
    --parameters "ParameterName='tempdb metadata memory-optimized',ParameterValue=True,ApplyMethod=pending-reboot"
```
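After the reboot without failover, you can confirm that the feature took effect by querying the documented server property for memory-optimized tempdb metadata (available on SQL Server 2019 and later):

```
--Returns 1 when memory-optimized tempdb metadata is enabled
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized') AS is_tempdb_metadata_memory_optimized;
```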

**Usage**

```
USE [msdb]
EXEC dbo.rds_bind_tempdb_metadata_to_resource_pool  
@pool_name=value;
```

The following parameter is required:
+ `@pool_name` - Is the name of an existing user-defined resource pool.

**Note**  
This change also requires a reboot without failover to take effect, even if the memory-optimized tempdb metadata feature is already enabled.
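A minimal sketch of the call, assuming a user-defined resource pool named `tempdb_pool` has already been created (the pool name is illustrative):

```
USE [msdb]
EXEC dbo.rds_bind_tempdb_metadata_to_resource_pool
@pool_name = 'tempdb_pool';
```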

## Unbind TempDB from a resource pool
<a name="ResourceGovernor.UnbindTempDB"></a>

Use this procedure to unbind tempdb memory-optimized metadata from a resource pool.

**Note**  
This change also requires a reboot without failover to take effect.

**Usage**

```
USE [msdb]
EXEC dbo.rds_unbind_tempdb_metadata_from_resource_pool
```

## Cleanup resource governor
<a name="ResourceGovernor.Cleanup"></a>

Use this procedure to clean up all associated objects after you have removed the resource governor option from the option group. It disables resource governor, reverts the default workload group to its default settings, and removes custom workload groups, resource pools, and classifier functions.

**Key features**
+ Reverts default workload group to default settings
+ Disables resource governor
+ Removes custom workload groups
+ Removes custom resource pools
+ Drops classifier functions
+ Removes tempdb resource pool binding if enabled

**Important**  
This cleanup can fail if there are active sessions on a workload group. Either wait for the active sessions to finish or terminate them, as appropriate for your business requirements. We recommend running this procedure during the maintenance window.  
This cleanup can also fail if a resource pool was bound to tempdb and a reboot without failover hasn't taken place yet. If you bound a resource pool to tempdb or unbound one from tempdb earlier, perform a reboot without failover to make the change effective.

**Usage**

```
USE [msdb]
EXEC dbo.rds_cleanup_resource_governor
```

## Considerations for Multi-AZ deployment
<a name="ResourceGovernor.Considerations"></a>

RDS for SQL Server replicates the resource governor configuration to the secondary instance in a Multi-AZ deployment. You can verify when modified and newly created resource governor objects last synchronized with the secondary instance.

Use the following query to check the `last_sync_time` of the replication:

```
SELECT * from msdb.dbo.rds_fn_server_object_last_sync_time();
```

In the query results, if the sync time is later than the time the resource governor objects were updated or created, then the configuration has been synchronized with the secondary.

To confirm through a manual DB failover that the resource governor configuration replicated, wait for the `last_sync_time` to update first. Then proceed with the Multi-AZ failover.

## Considerations for read replicas
<a name="ResourceGovernor.ReadReplica"></a>
+ For SQL Server replicas in the same Region as the source DB instance, use the same option group as the source. Changes to the option group propagate to replicas immediately, regardless of their maintenance windows.
+ When you create a SQL Server cross-Region replica, RDS creates a dedicated option group for it.
+ You can't remove an SQL Server cross-Region replica from its dedicated option group. No other DB instances can use the dedicated option group for a SQL Server cross-Region replica.
+ The resource governor option is a non-replicated option. You can add or remove non-replicated options from a dedicated option group.
+ When you promote a SQL Server cross-Region read replica, the promoted replica behaves the same as other SQL Server DB instances, including the management of its options.

**Note**  
When using resource governor on a read replica, you must manually ensure that resource governor has been configured on your read replica using Amazon RDS stored procedures after the option is added to the option group. Resource governor configurations do not automatically replicate to the read replica. Also, the workload on a read replica is typically different from that on the primary instance, so we recommend applying the resource configuration on the replica based on its workload and instance type. You can run these Amazon RDS stored procedures on the read replica independently to configure resource governor there.

# Monitor Microsoft SQL Server resource governor using system views for your RDS for SQL Server instance
<a name="ResourceGovernor.Monitoring"></a>

Resource Governor statistics are cumulative since the last server restart. If you need to collect statistics starting from a certain time, you can reset statistics using the following Amazon RDS stored procedure:

```
EXEC msdb.dbo.rds_alter_resource_governor_configuration  
@reset_statistics = 1;
```

## Resource pool runtime statistics
<a name="ResourceGovernor.ResourcePoolStats"></a>

For each resource pool, resource governor tracks CPU and memory utilization, out-of-memory events, memory grants, I/O, and other statistics. For more information, see [sys.dm_resource_governor_resource_pools](https://learn.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-resource-governor-resource-pools-transact-sql?view=sql-server-ver17).

The following query returns a subset of available statistics for all resource pools:

```
SELECT rp.pool_id,
       rp.name AS resource_pool_name,
       wg.workload_group_count,
       rp.statistics_start_time,
       rp.total_cpu_usage_ms,
       rp.target_memory_kb,
       rp.used_memory_kb,
       rp.out_of_memory_count,
       rp.active_memgrant_count,
       rp.total_memgrant_count,
       rp.total_memgrant_timeout_count,
       rp.read_io_completed_total,
       rp.write_io_completed_total,
       rp.read_bytes_total,
       rp.write_bytes_total,
       rp.read_io_stall_total_ms,
       rp.write_io_stall_total_ms
FROM sys.dm_resource_governor_resource_pools AS rp
OUTER APPLY (
            SELECT COUNT(1) AS workload_group_count
            FROM sys.dm_resource_governor_workload_groups AS wg
            WHERE wg.pool_id = rp.pool_id
            ) AS wg;
```
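Workload groups expose a similar set of runtime statistics through the `sys.dm_resource_governor_workload_groups` dynamic management view. The following query, a sketch you can extend with other documented columns, returns a subset of available statistics for all workload groups:

```
SELECT wg.group_id,
       wg.name AS workload_group_name,
       wg.pool_id,
       wg.statistics_start_time,
       wg.total_request_count,
       wg.active_request_count,
       wg.queued_request_count,
       wg.total_cpu_usage_ms,
       wg.total_cpu_limit_violation_count,
       wg.total_query_optimization_count,
       wg.total_suboptimal_plan_generation_count,
       wg.max_request_grant_memory_kb
FROM sys.dm_resource_governor_workload_groups AS wg;
```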

# Disabling Microsoft SQL Server resource governor for your RDS for SQL Server instance
<a name="ResourceGovernor.Disabling"></a>

When you disable resource governor on RDS for SQL Server, the service stops managing workload resources. Before you disable resource governor, review how this affects your database connections and configurations.

Disabling resource governor has the following results:
+ The classifier function isn't executed when a new connection is opened.
+ New connections are automatically classified into the default workload group.
+ All existing workload group and resource pool settings are reset to their default values.
+ No events are fired when limits are reached.
+ Resource governor configuration changes can be made, but the changes don't take effect until resource governor is enabled.

To disable resource governor, remove the `RESOURCE_GOVERNOR` option from its option group.

## Console
<a name="ResourceGovernor.Disabling.Console"></a>

The following procedure removes the `RESOURCE_GOVERNOR` option.

**To remove the RESOURCE_GOVERNOR option from its option group**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Option groups**.

1. Choose the option group with the `RESOURCE_GOVERNOR` option (`resource-governor-ee-2022` in the previous examples).

1. Choose **Delete option**.

1. Under **Deletion options**, choose **RESOURCE_GOVERNOR** for **Options to delete**.

1. Under **Apply immediately**, choose **Yes** to delete the option immediately, or **No** to delete it during the next maintenance window.

1. Choose **Delete**.

## CLI
<a name="ResourceGovernor.Disabling.CLI"></a>

The following procedure removes the `RESOURCE_GOVERNOR` option.

**To remove the RESOURCE_GOVERNOR option from its option group**
+ Run one of the following commands.  
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds remove-option-from-option-group \
      --option-group-name resource-governor-ee-2022 \
      --options RESOURCE_GOVERNOR \
      --apply-immediately
  ```

  For Windows:

  ```
  aws rds remove-option-from-option-group ^
      --option-group-name resource-governor-ee-2022 ^
      --options RESOURCE_GOVERNOR ^
      --apply-immediately
  ```

# Best practices for configuring resource governor on RDS for SQL Server
<a name="ResourceGovernor.BestPractices"></a>

To control resource consumption, RDS for SQL Server supports Microsoft SQL Server resource governor. The following best practices help you avoid common configuration issues and optimize database performance.

1. Resource governor configuration is stored in the `master` database. We recommend that you always save a copy of resource governor configuration scripts separately.

1. The classifier function extends login processing time, so avoid complex logic in the classifier. An overly complex function can cause login delays or connection timeouts, including for Amazon RDS automation sessions, which can impact the ability of Amazon RDS automation to monitor instance health. Always test the classifier function in a pre-production environment before implementing it in production environments.

1. Avoid setting high values (above 70) for `REQUEST_MAX_MEMORY_GRANT_PERCENT` in workload groups, as this can prevent the database instance from allocating sufficient memory for other concurrent queries, potentially resulting in memory grant timeout errors (Error 8645). Conversely, setting this value too low (less than 1) or to 0 might prevent queries that need memory workspace (like those involving sort or hash operations) from executing properly in user-defined workload groups. RDS enforces these limits by restricting values to between 1 and 70 on default workload groups.

1. After binding memory-optimized tempdb metadata to a resource pool, the pool might reach its maximum memory setting, and any queries that use `tempdb` might fail with out-of-memory errors. Under certain circumstances, SQL Server could potentially stop if an out-of-memory error occurs. To reduce the chance of this happening, set the pool's `MAX_MEMORY_PERCENT` to a high value.
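In support of the first recommendation, the following query (a sketch; extend the column list to cover any other settings you use) reads the stored pool and workload group definitions from the catalog views so that you can save a copy of the configuration:

```
SELECT rp.name AS pool_name,
       rp.min_cpu_percent,
       rp.max_cpu_percent,
       rp.min_memory_percent,
       rp.max_memory_percent,
       wg.name AS workload_group_name,
       wg.importance,
       wg.request_max_memory_grant_percent,
       wg.request_max_cpu_time_sec,
       wg.max_dop
FROM sys.resource_governor_resource_pools AS rp
JOIN sys.resource_governor_workload_groups AS wg
  ON wg.pool_id = rp.pool_id;
```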

# Common DBA tasks for Amazon RDS for Microsoft SQL Server
<a name="Appendix.SQLServer.CommonDBATasks"></a>

This section describes the Amazon RDS-specific implementations of some common DBA tasks for DB instances that are running the Microsoft SQL Server database engine. In order to deliver a managed service experience, Amazon RDS does not provide shell access to DB instances, and it restricts access to certain system procedures and tables that require advanced privileges. 

**Note**  
When working with a SQL Server DB instance, you can run scripts to modify a newly created database, but you cannot modify the [model] database, the database used as the model for new databases. 

**Topics**
+ [

# Accessing the tempdb database on Microsoft SQL Server DB instances on Amazon RDS
](SQLServer.TempDB.md)
+ [

# Analyzing your database workload on an Amazon RDS for SQL Server DB instance with Database Engine Tuning Advisor
](Appendix.SQLServer.CommonDBATasks.Workload.md)
+ [

# Changing the `db_owner` to the `rdsa` account for your Amazon RDS for SQL Server database
](Appendix.SQLServer.CommonDBATasks.ChangeDBowner.md)
+ [

# Managing collations and character sets for Amazon RDS for Microsoft SQL Server
](Appendix.SQLServer.CommonDBATasks.Collation.md)
+ [

# Creating a database user for Amazon RDS for SQL Server
](Appendix.SQLServer.CommonDBATasks.CreateUser.md)
+ [

# Determining a recovery model for your Amazon RDS for SQL Server database
](Appendix.SQLServer.CommonDBATasks.DatabaseRecovery.md)
+ [

# Determining the last failover time for Amazon RDS for SQL Server
](Appendix.SQLServer.CommonDBATasks.LastFailover.md)
+ [

# Troubleshooting point-in-time-recovery failures due to a log sequence number gap
](Appendix.SQLServer.CommonDBATasks.PITR-LSN-Gaps.md)
+ [

# Deny or allow viewing database names for Amazon RDS for SQL Server
](Appendix.SQLServer.CommonDBATasks.ManageView.md)
+ [

# Disabling fast inserts during bulk loading for Amazon RDS for SQL Server
](Appendix.SQLServer.CommonDBATasks.DisableFastInserts.md)
+ [

# Dropping a database in an Amazon RDS for Microsoft SQL Server DB instance
](Appendix.SQLServer.CommonDBATasks.DropMirrorDB.md)
+ [

# Renaming an Amazon RDS for Microsoft SQL Server database in a Multi-AZ deployment
](Appendix.SQLServer.CommonDBATasks.RenamingDB.md)
+ [

# Resetting the db_owner role membership for master user for Amazon RDS for SQL Server
](Appendix.SQLServer.CommonDBATasks.ResetPassword.md)
+ [

# Restoring license-terminated DB instances for Amazon RDS for SQL Server
](Appendix.SQLServer.CommonDBATasks.RestoreLTI.md)
+ [

# Transitioning an Amazon RDS for SQL Server database from OFFLINE to ONLINE
](Appendix.SQLServer.CommonDBATasks.TransitionOnline.md)
+ [

# Using change data capture for Amazon RDS for SQL Server
](Appendix.SQLServer.CommonDBATasks.CDC.md)
+ [

# Using SQL Server Agent for Amazon RDS
](Appendix.SQLServer.CommonDBATasks.Agent.md)
+ [

# Working with Amazon RDS for Microsoft SQL Server logs
](Appendix.SQLServer.CommonDBATasks.Logs.md)
+ [

# Working with trace and dump files for Amazon RDS for SQL Server
](Appendix.SQLServer.CommonDBATasks.TraceFiles.md)

# Accessing the tempdb database on Microsoft SQL Server DB instances on Amazon RDS
<a name="SQLServer.TempDB"></a>

You can access the `tempdb` database on your Microsoft SQL Server DB instances on Amazon RDS. You can run code on `tempdb` by using Transact-SQL through Microsoft SQL Server Management Studio (SSMS), or any other standard SQL client application. For more information about connecting to your DB instance, see [Connecting to your Microsoft SQL Server DB instance](USER_ConnectToMicrosoftSQLServerInstance.md). 

The master user for your DB instance is granted `CONTROL` access to `tempdb` so that this user can modify the `tempdb` database options. The master user isn't the database owner of the `tempdb` database. If necessary, the master user can grant `CONTROL` access to other users so that they can also modify the `tempdb` database options. 

**Note**  
You can't run Database Console Commands (DBCC) on the `tempdb` database. 

# Modifying tempdb database options
<a name="SQLServer.TempDB.Modifying"></a>

You can modify the database options on the `tempdb` database on your Amazon RDS DB instances. For more information about which options can be modified, see [tempdb database](https://msdn.microsoft.com/en-us/library/ms190768%28v=sql.120%29.aspx) in the Microsoft documentation.

Database options such as the maximum file size options are persistent after you restart your DB instance. You can modify the database options to optimize performance when importing data, and to prevent running out of storage.

## Optimizing performance when importing data
<a name="SQLServer.TempDB.Modifying.Import"></a>

To optimize performance when importing large amounts of data into your DB instance, set the `SIZE` and `FILEGROWTH` properties of the tempdb database to large numbers. For more information about how to optimize `tempdb`, see [Optimizing tempdb performance](https://technet.microsoft.com/en-us/library/ms175527%28v=sql.120%29.aspx) in the Microsoft documentation.

The following example demonstrates setting the size to 100 GB and file growth to 10 percent. 

```
alter database [tempdb] modify file (NAME = N'templog', SIZE = 100GB, FILEGROWTH = 10%)
```

## Preventing storage problems
<a name="SQLServer.TempDB.Modifying.Full"></a>

To prevent the `tempdb` database from using all available disk space, set the `MAXSIZE` property. The following example demonstrates setting the property to 2048 MB. 

```
alter database [tempdb] modify file (NAME = N'templog', MAXSIZE = 2048MB)
```

# Shrinking the tempdb database
<a name="SQLServer.TempDB.Shrinking"></a>

There are two ways to shrink the `tempdb` database on your Amazon RDS DB instance. You can use the `rds_shrink_tempdbfile` procedure, or you can set the `SIZE` property. 

## Using the rds_shrink_tempdbfile procedure
<a name="SQLServer.TempDB.Shrinking.Proc"></a>

You can use the Amazon RDS procedure `msdb.dbo.rds_shrink_tempdbfile` to shrink the `tempdb` database. You can only call `rds_shrink_tempdbfile` if you have `CONTROL` access to `tempdb`. When you call `rds_shrink_tempdbfile`, there is no downtime for your DB instance. 

The `rds_shrink_tempdbfile` procedure has the following parameters.



| Parameter name | Data type | Default | Required | Description | 
| --- | --- | --- | --- | --- | 
| `@temp_filename` | SYSNAME | — | required | The logical name of the file to shrink. | 
| `@target_size` | int | null | optional | The new size for the file, in megabytes. | 

The following example gets the names of the files for the `tempdb` database.

```
use tempdb;
GO

select name, * from sys.sysfiles;
GO
```

The following example shrinks a `tempdb` database file named `test_file`, and requests a new size of `10` megabytes: 

```
exec msdb.dbo.rds_shrink_tempdbfile @temp_filename = N'test_file', @target_size = 10;
```

## Setting the SIZE property
<a name="SQLServer.TempDB.Shrinking.Size"></a>

You can also shrink the `tempdb` database by setting the `SIZE` property and then restarting your DB instance. For more information about restarting your DB instance, see [Rebooting a DB instance](USER_RebootInstance.md).

The following example demonstrates setting the `SIZE` property to 1024 MB. 

```
alter database [tempdb] modify file (NAME = N'templog', SIZE = 1024MB)
```

# TempDB configuration for Multi-AZ deployments
<a name="SQLServer.TempDB.MAZ"></a>

If your RDS for SQL Server DB instance is in a Multi-AZ Deployment using Database Mirroring (DBM) or Always On Availability Groups (AGs), keep in mind the following considerations for using the `tempdb` database.

You can't replicate `tempdb` data from your primary DB instance to your secondary DB instance. When you fail over to a secondary DB instance, `tempdb` on that secondary DB instance will be empty.

You can synchronize the configuration of the `tempdb` database options, including its file sizing and autogrowth settings, from your primary DB instance to your secondary DB instance. Synchronizing the `tempdb` configuration is supported on all RDS for SQL Server versions. You can turn on automatic synchronization of the `tempdb` configuration by using the following stored procedure:

```
EXECUTE msdb.dbo.rds_set_system_database_sync_objects @object_types = 'TempDbFile';
```

**Important**  
Before using the `rds_set_system_database_sync_objects` stored procedure, ensure you've set your preferred `tempdb` configuration on your primary DB instance, rather than on your secondary DB instance. If you made the configuration change on your secondary DB instance, your preferred `tempdb` configuration could be deleted when you turn on automatic synchronization.

You can use the following function to confirm whether automatic synchronization of the `tempdb` configuration is turned on:

```
SELECT * from msdb.dbo.rds_fn_get_system_database_sync_objects();
```

When automatic synchronization of the `tempdb` configuration is turned on, there will be a return value for the `object_class` field. When it's turned off, no value is returned.

You can use the following function to find the last time objects were synchronized, in UTC time:

```
SELECT * from msdb.dbo.rds_fn_server_object_last_sync_time();
```

For example, if you modified the `tempdb` configuration at 01:00 and then run the `rds_fn_server_object_last_sync_time` function, the value returned for `last_sync_time` should be after 01:00, indicating that an automatic synchronization occurred.

If you are also using SQL Server Agent job replication, you can enable replication for both SQL Agent jobs and the `tempdb` configuration by providing them in the `@object_type` parameter:

```
EXECUTE msdb.dbo.rds_set_system_database_sync_objects @object_types = 'SQLAgentJob,TempDbFile';
```

For more information on SQL Server Agent job replication, see [Turning on SQL Server Agent job replication](Appendix.SQLServer.CommonDBATasks.Agent.md#SQLServerAgent.Replicate).

As an alternative to using the `rds_set_system_database_sync_objects` stored procedure to ensure that `tempdb` configuration changes are automatically synchronized, you can use one of the following manual methods:

**Note**  
We recommend turning on automatic synchronization of the `tempdb` configuration by using the `rds_set_system_database_sync_objects` stored procedure. Using automatic synchronization prevents the need to perform these manual tasks each time you change your `tempdb` configuration.
+ First modify your DB instance and turn Multi-AZ off, then modify tempdb, and finally turn Multi-AZ back on. This method doesn't involve any downtime.

  For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md). 
+ First modify `tempdb` in the original primary instance, then fail over manually, and finally modify `tempdb` in the new primary instance. This method involves downtime. 

  For more information, see [Rebooting a DB instance](USER_RebootInstance.md).

# Analyzing your database workload on an Amazon RDS for SQL Server DB instance with Database Engine Tuning Advisor
<a name="Appendix.SQLServer.CommonDBATasks.Workload"></a>

Database Engine Tuning Advisor is a client application provided by Microsoft that analyzes database workload and recommends an optimal set of indexes for your Microsoft SQL Server databases based on the kinds of queries you run. Like SQL Server Management Studio, you run Tuning Advisor from a client computer that connects to your Amazon RDS DB instance that is running SQL Server. The client computer can be a local computer that you run on premises within your own network or it can be an Amazon EC2 Windows instance that is running in the same region as your Amazon RDS DB instance.

This section shows how to capture a workload for Tuning Advisor to analyze. This is the preferred process for capturing a workload because Amazon RDS restricts host access to the SQL Server instance. For more information, see [Database Engine Tuning Advisor](https://docs.microsoft.com/en-us/sql/relational-databases/performance/database-engine-tuning-advisor) in the Microsoft documentation.

To use Tuning Advisor, you must provide what is called a workload to the advisor. A workload is a set of Transact-SQL statements that run against a database or databases that you want to tune. Database Engine Tuning Advisor uses trace files, trace tables, Transact-SQL scripts, or XML files as workload input when tuning databases. When working with Amazon RDS, a workload can be a file on a client computer or a database table on an Amazon RDS for SQL Server DB accessible to your client computer. The file or the table must contain queries against the databases you want to tune in a format suitable for replay.

For Tuning Advisor to be most effective, a workload should be as realistic as possible. You can generate a workload file or table by performing a trace against your DB instance. While a trace is running, you can either simulate a load on your DB instance or run your applications with a normal load.

There are two types of traces: client-side and server-side. A client-side trace is easier to set up, and you can watch trace events being captured in real time in SQL Server Profiler. A server-side trace is more complex to set up and requires some Transact-SQL scripting. In addition, because the trace is written to a file on the Amazon RDS DB instance, storage space is consumed by the trace. It is important to keep track of how much storage space a running server-side trace uses, because the DB instance could enter a storage-full state and would no longer be available if it runs out of storage space.

For a client-side trace, when a sufficient amount of trace data has been captured in the SQL Server Profiler, you can then generate the workload file by saving the trace to either a file on your local computer or in a database table on a DB instance that is available to your client computer. The main disadvantage of using a client-side trace is that the trace may not capture all queries when under heavy loads. This could weaken the effectiveness of the analysis performed by the Database Engine Tuning Advisor. If you need to run a trace under heavy loads and you want to ensure that it captures every query during a trace session, you should use a server-side trace.

For a server-side trace, you must get the trace files on the DB instance into a suitable workload file or you can save the trace to a table on the DB instance after the trace completes. You can use the SQL Server Profiler to save the trace to a file on your local computer or have the Tuning Advisor read from the trace table on the DB instance.

# Running a client-side trace on a SQL Server DB instance
<a name="Appendix.SQLServer.CommonDBATasks.TuningAdvisor.ClientSide"></a>

 **To run a client-side trace on a SQL Server DB instance** 

1. Start SQL Server Profiler. It is installed in the Performance Tools folder of your SQL Server instance folder. To start a client-side trace, you must load or define a trace definition template.

1. In the SQL Server Profiler File menu, choose **New Trace**. In the **Connect to Server** dialog box, enter the DB instance endpoint, port, master user name, and password of the database you would like to run a trace on.

1. In the **Trace Properties** dialog box, enter a trace name and choose a trace definition template. A default template, TSQL_Replay, ships with the application. You can edit this template to define your trace. Edit events and event information under the **Events Selection** tab of the **Trace Properties** dialog box.

   For more information about trace definition templates and using the SQL Server Profiler to specify a client-side trace, see [Database Engine Tuning Advisor](https://docs.microsoft.com/en-us/sql/relational-databases/performance/database-engine-tuning-advisor) in the Microsoft documentation.

1. Start the client-side trace and watch SQL queries in real-time as they run against your DB instance.

1. Select **Stop Trace** from the **File** menu when you have completed the trace. Save the results as a file or as a trace table on your DB instance.

# Running a server-side trace on a SQL Server DB instance
<a name="Appendix.SQLServer.CommonDBATasks.TuningAdvisor.ServerSide"></a>

Writing scripts to create a server-side trace can be complex and is beyond the scope of this document. This section contains sample scripts that you can use as examples. As with a client-side trace, the goal is to create a workload file or trace table that you can open using the Database Engine Tuning Advisor.

The following is an abridged example script that starts a server-side trace and captures details to a workload file. The trace initially saves to the file RDSTrace.trc in the D:\RDSDBDATA\Log directory and rolls over every 100 MB, so subsequent trace files are named RDSTrace_1.trc, RDSTrace_2.trc, and so on.

```
DECLARE @file_name NVARCHAR(245) = 'D:\RDSDBDATA\Log\RDSTrace';
DECLARE @max_file_size BIGINT = 100;
DECLARE @on BIT = 1
DECLARE @rc INT
DECLARE @traceid INT

EXEC @rc = sp_trace_create @traceid OUTPUT, 2, @file_name, @max_file_size
IF (@rc = 0) BEGIN
   EXEC sp_trace_setevent @traceid, 10, 1, @on
   EXEC sp_trace_setevent @traceid, 10, 2, @on
   EXEC sp_trace_setevent @traceid, 10, 3, @on
 . . .
   EXEC sp_trace_setfilter @traceid, 10, 0, 7, N'SQL Profiler'
   EXEC sp_trace_setstatus @traceid, 1
   END
```

The following example is a script that stops a trace. Note that a trace created by the previous script continues to run until you explicitly stop the trace or the process runs out of disk space.

```
DECLARE @traceid INT
SELECT @traceid = traceid FROM ::fn_trace_getinfo(default) 
WHERE property = 5 AND value = 1 AND traceid <> 1 

IF @traceid IS NOT NULL BEGIN
   EXEC sp_trace_setstatus @traceid, 0
   EXEC sp_trace_setstatus @traceid, 2
END
```

You can save server-side trace results to a database table and use the database table as the workload for the Tuning Advisor by using the fn_trace_gettable function. The following commands load the results of all files named RDSTrace.trc in the D:\rdsdbdata\Log directory, including all rollover files like RDSTrace_1.trc, into a table named RDSTrace in the current database.

```
SELECT * INTO RDSTrace
FROM fn_trace_gettable('D:\rdsdbdata\Log\RDSTrace.trc', default);
```

To save a specific rollover file to a table, for example the RDSTrace_1.trc file, specify the name of the rollover file and substitute 1 instead of default as the last parameter to fn_trace_gettable.

```
SELECT * INTO RDSTrace_1
FROM fn_trace_gettable('D:\rdsdbdata\Log\RDSTrace_1.trc', 1);
```

# Running Tuning Advisor with a trace
<a name="Appendix.SQLServer.CommonDBATasks.TuningAdvisor.Running"></a>

Once you create a trace, either as a local file or as a database table, you can then run Tuning Advisor against your DB instance. Using Tuning Advisor with Amazon RDS is the same process as when working with a standalone, remote SQL Server instance. You can either use the Tuning Advisor UI on your client machine or use the dta.exe utility from the command line. In both cases, you must connect to the Amazon RDS DB instance using the endpoint for the DB instance and provide your master user name and master user password when using Tuning Advisor. 

The following code example demonstrates using the dta.exe command-line utility against an Amazon RDS DB instance with an endpoint of **dta.cnazcmklsdei.us-east-1.rds.amazonaws.com**. The example includes the master user name **admin**, the master user password **test**, an example database to tune named **RDSDTA**, and an input workload file on the local machine named **C:\RDSTrace.trc**. The example command line code also specifies a trace session named **RDSTrace1** and output files on the local machine: **RDSTrace.sql** for the SQL output script, **RDSTrace.txt** for a result file, and **RDSTrace.xml** for an XML file of the analysis. There is also an error table specified on the RDSDTA database named **RDSTraceErrors**.

```
dta -S dta.cnazcmklsdei.us-east-1.rds.amazonaws.com -U admin -P test -D RDSDTA -if C:\RDSTrace.trc -s RDSTrace1 -of C:\RDSTrace.sql -or C:\RDSTrace.txt -ox C:\RDSTrace.xml -e RDSDTA.dbo.RDSTraceErrors
```

Here is the same example command line code, except that the input workload is a table named **RDSTrace** in the **RDSDTA** database on the remote Amazon RDS DB instance.

```
dta -S dta.cnazcmklsdei.us-east-1.rds.amazonaws.com -U admin -P test -D RDSDTA -it RDSDTA.dbo.RDSTrace -s RDSTrace1 -of C:\RDSTrace.sql -or C:\RDSTrace.txt -ox C:\RDSTrace.xml -e RDSDTA.dbo.RDSTraceErrors
```

For a full list of dta utility command-line parameters, see [dta Utility](https://docs.microsoft.com/en-us/sql/tools/dta/dta-utility) in the Microsoft documentation.

# Changing the `db_owner` to the `rdsa` account for your Amazon RDS for SQL Server database
<a name="Appendix.SQLServer.CommonDBATasks.ChangeDBowner"></a>

When you create or restore a database in an RDS for SQL Server DB instance, Amazon RDS sets the owner of the database to `rdsa`. If you have a Multi-AZ deployment using SQL Server Database Mirroring (DBM) or Always On Availability Groups (AGs), Amazon RDS sets the owner of the database on the secondary DB instance to `NT AUTHORITY\SYSTEM`. The owner of the secondary database can't be changed until the secondary DB instance is promoted to the primary role. In most cases, setting the owner of the database to `NT AUTHORITY\SYSTEM` isn't problematic when executing queries. However, it can throw errors when executing system stored procedures, such as `sys.sp_updatestats`, that require elevated permissions.

You can use the following query to identify the owner of the databases owned by `NT AUTHORITY\SYSTEM`:

```
SELECT name FROM sys.databases WHERE SUSER_SNAME(owner_sid) = 'NT AUTHORITY\SYSTEM';
```

You can use the Amazon RDS stored procedure `rds_changedbowner_to_rdsa` to change the owner of the database to `rdsa`. You can't use `rds_changedbowner_to_rdsa` with the following databases: `master`, `model`, `msdb`, `rdsadmin`, `rdsadmin_ReportServer`, `rdsadmin_ReportServerTempDB`, and `SSISDB`.

To change the owner of the database to `rdsa`, call the `rds_changedbowner_to_rdsa` stored procedure and provide the name of the database.

**Example usage:**  

```
exec msdb.dbo.rds_changedbowner_to_rdsa 'TestDB1';
```

The following parameter is required:
+ `@db_name` – The name of the database whose owner you want to change to `rdsa`.

**Important**  
You can't use `rds_changedbowner_to_rdsa` to change ownership of a database to a login other than `rdsa`. For example, you can't change the ownership to the login with which you created the database. To restore lost membership in the `db_owner` role for your master user when no other database user can be used to grant the membership, reset the master user password to obtain membership in the `db_owner` role. For more information, see [Resetting the db_owner role membership for master user for Amazon RDS for SQL Server](Appendix.SQLServer.CommonDBATasks.ResetPassword.md).

# Managing collations and character sets for Amazon RDS for Microsoft SQL Server
<a name="Appendix.SQLServer.CommonDBATasks.Collation"></a>

This topic provides guidance on managing collations and character sets for Microsoft SQL Server in Amazon RDS. It explains how to configure collations during database creation and how to modify them later, ensuring proper handling of text data based on language and locale requirements. It also covers best practices for maintaining compatibility and performance in SQL Server environments in Amazon RDS.

SQL Server supports collations at multiple levels. You set the default server collation when you create the DB instance. You can override the collation at the database, table, or column level.

**Topics**
+ [Server-level collation for Microsoft SQL Server](#Appendix.SQLServer.CommonDBATasks.Collation.Server)
+ [Database-level collation for Microsoft SQL Server](#Appendix.SQLServer.CommonDBATasks.Collation.Database-Table-Column)

## Server-level collation for Microsoft SQL Server
<a name="Appendix.SQLServer.CommonDBATasks.Collation.Server"></a>

When you create a Microsoft SQL Server DB instance, you can set the server collation that you want to use. If you don't choose a different collation, the server-level collation defaults to SQL_Latin1_General_CP1_CI_AS. The server collation is applied by default to all databases and database objects.

**Note**  
You can't change the collation when you restore from a DB snapshot.

Currently, Amazon RDS supports the following server collations:


| Collation | Description | 
| --- | --- | 
|  Arabic_CI_AS  |  Arabic, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  Chinese_PRC_BIN2  |  Chinese-PRC, binary code point sort order  | 
|  Chinese_PRC_CI_AS  |  Chinese-PRC, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  Chinese_Taiwan_Stroke_CI_AS  |  Chinese-Taiwan-Stroke, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  Danish_Norwegian_CI_AS  |  Danish-Norwegian, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  Danish_Norwegian_CI_AS_KS  |  Danish-Norwegian, case-insensitive, accent-sensitive, kanatype-sensitive, width-insensitive  | 
|  Danish_Norwegian_CI_AS_KS_WS  |  Danish-Norwegian, case-insensitive, accent-sensitive, kanatype-sensitive, width-sensitive  | 
|  Danish_Norwegian_CI_AS_WS  |  Danish-Norwegian, case-insensitive, accent-sensitive, kanatype-insensitive, width-sensitive  | 
|  Danish_Norwegian_CS_AI  |  Danish-Norwegian, case-sensitive, accent-insensitive, kanatype-insensitive, width-insensitive  | 
|  Danish_Norwegian_CS_AI_KS  |  Danish-Norwegian, case-sensitive, accent-insensitive, kanatype-sensitive, width-insensitive  | 
|  Finnish_Swedish_100_BIN  |  Finnish-Swedish-100, binary sort  | 
|  Finnish_Swedish_100_BIN2  |  Finnish-Swedish-100, binary code point comparison sort  | 
|  Finnish_Swedish_100_CI_AI  |  Finnish-Swedish-100, case-insensitive, accent-insensitive, kanatype-insensitive, width-insensitive  | 
|  Finnish_Swedish_100_CI_AS  |  Finnish-Swedish-100, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  Finnish_Swedish_CI_AS  |  Finnish, Swedish, and Swedish (Finland), case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  French_CI_AS  |  French, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  Greek_CI_AS  |  Greek, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  Greek_CS_AS  |  Greek, case-sensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  Hebrew_BIN  |  Hebrew, binary sort  | 
|  Hebrew_CI_AS  |  Hebrew, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  Japanese_BIN  | Japanese, binary sort | 
|  Japanese_CI_AS  |  Japanese, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  Japanese_CS_AS  |  Japanese, case-sensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  Japanese_XJIS_140_CI_AS  |  Japanese, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive, supplementary characters, variation selector insensitive  | 
|  Japanese_XJIS_140_CI_AS_KS_VSS  |  Japanese, case-insensitive, accent-sensitive, kanatype-sensitive, width-insensitive, supplementary characters, variation selector sensitive  | 
|  Japanese_XJIS_140_CI_AS_VSS  |  Japanese, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive, supplementary characters, variation selector sensitive  | 
|  Japanese_XJIS_140_CS_AS_KS_WS  |  Japanese, case-sensitive, accent-sensitive, kanatype-sensitive, width-sensitive, supplementary characters, variation selector insensitive  | 
|  Korean_Wansung_CI_AS  |  Korean-Wansung, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  Latin1_General_100_BIN  |  Latin1-General-100, binary sort  | 
|  Latin1_General_100_BIN2  |  Latin1-General-100, binary code point sort order  | 
|  Latin1_General_100_BIN2_UTF8  |  Latin1-General-100, binary code point sort order, UTF-8 encoded  | 
|  Latin1_General_100_CI_AS  |  Latin1-General-100, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  Latin1_General_100_CI_AS_SC_UTF8  |  Latin1-General-100, case-insensitive, accent-sensitive, supplementary characters, UTF-8 encoded  | 
|  Latin1_General_BIN  |  Latin1-General, binary sort  | 
|  Latin1_General_BIN2  |  Latin1-General, binary code point sort order  | 
|  Latin1_General_CI_AI  |  Latin1-General, case-insensitive, accent-insensitive, kanatype-insensitive, width-insensitive  | 
|  Latin1_General_CI_AS  |  Latin1-General, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  Latin1_General_CI_AS_KS  |  Latin1-General, case-insensitive, accent-sensitive, kanatype-sensitive, width-insensitive  | 
|  Latin1_General_CS_AS  |  Latin1-General, case-sensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  Modern_Spanish_CI_AS  |  Modern-Spanish, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  Polish_CI_AS  |  Polish, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  SQL_1xCompat_CP850_CI_AS  |  Latin1-General, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive for Unicode Data, SQL Server Sort Order 49 on Code Page 850 for non-Unicode Data  | 
|  SQL_Latin1_General_CP1_CI_AI  |  Latin1-General, case-insensitive, accent-insensitive, kanatype-insensitive, width-insensitive for Unicode Data, SQL Server Sort Order 54 on Code Page 1252 for non-Unicode Data  | 
|  **SQL_Latin1_General_CP1_CI_AS (default)**  |  Latin1-General, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive for Unicode Data, SQL Server Sort Order 52 on Code Page 1252 for non-Unicode Data  | 
|  SQL_Latin1_General_CP1_CS_AS  |  Latin1-General, case-sensitive, accent-sensitive, kanatype-insensitive, width-insensitive for Unicode Data, SQL Server Sort Order 51 on Code Page 1252 for non-Unicode Data  | 
|  SQL_Latin1_General_CP437_CI_AI  |  Latin1-General, case-insensitive, accent-insensitive, kanatype-insensitive, width-insensitive for Unicode Data, SQL Server Sort Order 34 on Code Page 437 for non-Unicode Data  | 
|  SQL_Latin1_General_CP850_BIN  |  Latin1-General, binary sort order for Unicode Data, SQL Server Sort Order 40 on Code Page 850 for non-Unicode Data  | 
|  SQL_Latin1_General_CP850_BIN2  |  Latin1-General, binary code point sort order for Unicode Data, SQL Server Sort Order 40 on Code Page 850 for non-Unicode Data  | 
|  SQL_Latin1_General_CP850_CI_AI  |  Latin1-General, case-insensitive, accent-insensitive, kanatype-insensitive, width-insensitive for Unicode Data, SQL Server Sort Order 44 on Code Page 850 for non-Unicode Data  | 
|  SQL_Latin1_General_CP850_CI_AS  |  Latin1-General, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive for Unicode Data, SQL Server Sort Order 42 on Code Page 850 for non-Unicode Data  | 
|  SQL_Latin1_General_Pref_CP850_CI_AS  |  Latin1-General-Pref, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive for Unicode Data, SQL Server Sort Order 183 on Code Page 850 for non-Unicode Data  | 
|  SQL_Latin1_General_CP1256_CI_AS  |  Latin1-General, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive for Unicode Data, SQL Server Sort Order 146 on Code Page 1256 for non-Unicode Data  | 
|  SQL_Latin1_General_CP1255_CS_AS  |  Latin1-General, case-sensitive, accent-sensitive, kanatype-insensitive, width-insensitive for Unicode Data, SQL Server Sort Order 137 on Code Page 1255 for non-Unicode Data  | 
|  Thai_CI_AS  |  Thai, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 
|  Turkish_CI_AS  |  Turkish, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive  | 

You can also retrieve the list of supported collations programmatically using the AWS CLI:

```
aws rds describe-db-engine-versions --engine sqlserver-ee --list-supported-character-sets --query 'DBEngineVersions[].SupportedCharacterSets[].CharacterSetName' | sort -u
```

To choose the collation:
+ If you're using the Amazon RDS console, when creating a new DB instance choose **Additional configuration**, then enter the collation in the **Collation** field. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md). 
+ If you're using the AWS CLI, use the `--character-set-name` option with the `create-db-instance` command. For more information, see [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html).
+ If you're using the Amazon RDS API, use the `CharacterSetName` parameter with the `CreateDBInstance` operation. For more information, see [CreateDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html).

## Database-level collation for Microsoft SQL Server
<a name="Appendix.SQLServer.CommonDBATasks.Collation.Database-Table-Column"></a>

You can change the default collation at the database, table, or column level by overriding the collation when creating a new database or database object. For example, if your default server collation is SQL_Latin1_General_CP1_CI_AS, you can change it to Mohawk_100_CI_AS for Mohawk collation support. Even arguments in a query can be type-cast to use a different collation if necessary.

For example, the following statement creates a table whose AccountName column uses the Mohawk_100_CI_AS collation:

```
CREATE TABLE [dbo].[Account]
	(
	    [AccountID] [nvarchar](10) NOT NULL,
	    [AccountName] [nvarchar](100) COLLATE Mohawk_100_CI_AS NOT NULL 
	) ON [PRIMARY];
```
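
As noted above, you can also apply a collation to an individual expression in a query with the COLLATE clause, without changing the column definition. The following sketch assumes the `Account` table above; the value `N'Contoso'` and the choice of `Latin1_General_CS_AS` are illustrative:

```
-- Compare AccountName case-sensitively for this query only,
-- overriding the column's collation for the comparison
SELECT [AccountID], [AccountName]
FROM [dbo].[Account]
WHERE [AccountName] COLLATE Latin1_General_CS_AS = N'Contoso';
```

Query-level collation casts like this are useful for one-off comparisons, but applying them to a column in a WHERE clause can prevent index use on that column.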

The Microsoft SQL Server DB engine supports Unicode through the built-in NCHAR, NVARCHAR, and NTEXT data types. For example, if you need CJK support, use these Unicode data types for character storage and override the default server collation when creating your databases and tables. Here are several links from Microsoft covering collation and Unicode support for SQL Server:
+ [Working with collations](http://msdn.microsoft.com/en-us/library/ms187582%28v=sql.105%29.aspx) 
+ [Collation and international terminology](http://msdn.microsoft.com/en-us/library/ms143726%28v=sql.105%29) 
+ [Using SQL Server collations](http://msdn.microsoft.com/en-us/library/ms144260%28v=sql.105%29.aspx) 
+ [International considerations for databases and database engine applications](http://msdn.microsoft.com/en-us/library/ms190245%28v=sql.105%29.aspx)

# Creating a database user for Amazon RDS for SQL Server
<a name="Appendix.SQLServer.CommonDBATasks.CreateUser"></a>

You can create a database user for your Amazon RDS for Microsoft SQL Server DB instance by running a T-SQL script like the following example. Use an application such as SQL Server Management Studio (SSMS). Log in to the DB instance as the master user that was created when you created the DB instance.

```
--Initially set context to master database
USE [master];
GO
--Create a server-level login named theirname with password theirpassword
CREATE LOGIN [theirname] WITH PASSWORD = 'theirpassword';
GO
--Set context to msdb database
USE [msdb];
GO
--Create a database user named theirname and link it to server-level login theirname
CREATE USER [theirname] FOR LOGIN [theirname];
GO
```
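
After creating the login and user, you typically grant the user access to a specific database and role. The following sketch uses a hypothetical database named `TestDB1` and the built-in `db_datareader` role; substitute your own database and role names:

```
--Set context to the database where you want to grant access
USE [TestDB1];
GO
--Create a database user in this database linked to the server-level login
CREATE USER [theirname] FOR LOGIN [theirname];
GO
--Add the user to the built-in db_datareader role for read-only access
ALTER ROLE [db_datareader] ADD MEMBER [theirname];
GO
```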

For an example of adding a database user to a role, see [Adding a user to the SQLAgentUser role](SQLServerAgent.AddUser.md).

**Note**  
If you get permission errors when adding a user, you can restore privileges by modifying the DB instance master user password. For more information, see [Resetting the db_owner role membership for master user for Amazon RDS for SQL Server](Appendix.SQLServer.CommonDBATasks.ResetPassword.md).  
It is not a best practice to clone master user permissions in your applications. For more information, see [How to clone master user permissions in Amazon RDS for SQL Server](https://aws.amazon.com/blogs/database/how-to-clone-master-user-permissions-in-amazon-rds-for-sql-server/).

# Determining a recovery model for your Amazon RDS for SQL Server database
<a name="Appendix.SQLServer.CommonDBATasks.DatabaseRecovery"></a>

In Amazon RDS, the recovery model, retention period, and database status are linked.

It's important to understand the consequences before making a change to one of these settings. Each setting can affect the others. For example:
+ If you change a database's recovery model to SIMPLE or BULK_LOGGED while backup retention is enabled, Amazon RDS resets the recovery model to FULL within five minutes. This also results in RDS taking a snapshot of the DB instance.
+ If you set backup retention to `0` days, RDS sets the recovery mode to SIMPLE.
+ If you change a database's recovery model from SIMPLE to any other option while backup retention is set to `0` days, RDS resets the recovery model to SIMPLE.
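
To check the current recovery model of the databases on your DB instance, you can query the `sys.databases` catalog view:

```
-- Show each database and its current recovery model
SELECT name, recovery_model_desc
FROM sys.databases;
```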

**Important**  
Never change the recovery model on Multi-AZ instances, even if it seems that you can do so (for example, by using ALTER DATABASE). Backup retention, and therefore FULL recovery mode, is required for Multi-AZ. If you alter the recovery model, RDS immediately changes it back to FULL.  
This automatic reset forces RDS to completely rebuild the mirror. During this rebuild, the availability of the database is degraded for about 30 to 90 minutes until the mirror is ready for failover. The DB instance also experiences performance degradation, in the same way that it does during a conversion from Single-AZ to Multi-AZ. How long performance is degraded depends on the database storage size: the larger the stored database, the longer the degradation.

For more information on SQL Server recovery models, see [Recovery models (SQL Server)](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/recovery-models-sql-server) in the Microsoft documentation.

# Determining the last failover time for Amazon RDS for SQL Server
<a name="Appendix.SQLServer.CommonDBATasks.LastFailover"></a>

To determine the last failover time, use the following stored procedure:

```
execute msdb.dbo.rds_failover_time;
```

This procedure returns the following information.



| Output parameter | Description | 
| --- | --- | 
|  errorlog_available_from  |  Shows the time from when error logs are available in the log directory.  | 
|  recent_failover_time  |  Shows the last failover time if it's available from the error logs. Otherwise it shows `null`.  | 

**Note**  
The stored procedure searches all of the available SQL Server error logs in the log directory to retrieve the most recent failover time. If the failover messages have been overwritten by SQL Server, then the procedure doesn't retrieve the failover time.

**Example of no recent failover**  
This example shows the output when there is no recent failover in the error logs. No failover has happened since 2020-04-29 23:59:00.01.  


| errorlog_available_from | recent_failover_time | 
| --- | --- | 
|  2020-04-29 23:59:00.0100000  |  null  | 

**Example of recent failover**  
This example shows the output when there is a failover in the error logs. The most recent failover was at 2020-05-05 18:57:51.89.  


| errorlog_available_from | recent_failover_time | 
| --- | --- | 
|  2020-04-29 23:59:00.0100000  |  2020-05-05 18:57:51.8900000  | 

# Troubleshooting point-in-time-recovery failures due to a log sequence number gap
<a name="Appendix.SQLServer.CommonDBATasks.PITR-LSN-Gaps"></a>

When attempting point-in-time-recovery (PITR) in RDS for SQL Server, you might encounter failures due to gaps in log sequence numbers (LSNs). These gaps prevent RDS from restoring your database to the requested time and RDS places your restoring instance in `incompatible-restore` state.

Common causes for this issue are:
+ Manual changes to the database recovery model.
+ Automatic recovery model changes by RDS due to insufficient resources for completing transaction log backups.

To identify LSN gaps in your database, run this query:

```
SELECT * FROM msdb.dbo.rds_fn_list_tlog_backup_metadata('your-database-name')
ORDER BY backup_file_time_utc DESC;
```
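
As a sketch of how you might spot a gap in the output, you can compare each backup's starting LSN with the previous backup's ending LSN using a window function. The column names `first_lsn` and `last_lsn` below are illustrative assumptions; verify them against the actual columns the function returns on your instance:

```
-- Column names first_lsn/last_lsn are illustrative assumptions;
-- check the actual output of rds_fn_list_tlog_backup_metadata
SELECT backup_file_time_utc,
       first_lsn,
       last_lsn,
       LAG(last_lsn) OVER (ORDER BY backup_file_time_utc) AS prev_last_lsn
FROM msdb.dbo.rds_fn_list_tlog_backup_metadata('your-database-name')
ORDER BY backup_file_time_utc;
```

A row whose starting LSN doesn't continue from the previous row's ending LSN marks the start of a gap.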

If you discover an LSN gap, you can:
+ Choose a restore point before the LSN gap.
+ Wait and restore to a point after the next instance backup completes.

To prevent this issue, we recommend that you don't manually change the recovery model of your RDS for SQL Server databases, because doing so interrupts the chain of transaction log backups. We also recommend that you choose an instance type with sufficient resources for your workload, so that regular transaction log backups can complete.

For more information about transaction log management, see [ SQL Server transaction log architecture and management guide](https://learn.microsoft.com/en-us/sql/relational-databases/sql-server-transaction-log-architecture-and-management-guide?view=sql-server-ver16) in the Microsoft SQL Server documentation.

# Deny or allow viewing database names for Amazon RDS for SQL Server
<a name="Appendix.SQLServer.CommonDBATasks.ManageView"></a>

The master user cannot set `DENY VIEW ANY DATABASE TO LOGIN` to hide databases from a user. To change this permission, use the following stored procedure instead:
+ Denying database view access to *LOGIN*:

  ```
  EXEC msdb.dbo.rds_manage_view_db_permission @permission='DENY', @server_principal='LOGIN'
  GO
  ```
+ Allowing database view access to *LOGIN*:

  ```
  EXEC msdb.dbo.rds_manage_view_db_permission @permission='GRANT', @server_principal='LOGIN'
  GO
  ```

Consider the following when using this stored procedure:
+ Database names are hidden from SSMS and the internal dynamic management views (DMVs). However, database names are still visible in audits, logs, and metadata tables. These behaviors are controlled by the `VIEW ANY DATABASE` server permission. For more information, see [DENY Server Permissions](https://learn.microsoft.com/en-us/sql/t-sql/statements/deny-server-permissions-transact-sql?view=sql-server-ver16#permissions).
+ Once the permission is reverted to `GRANT` (allowed), the *LOGIN* can view all databases.
+ If you delete and recreate the *LOGIN*, the view permission related to the *LOGIN* is reset to `GRANT` (allowed).
+ For Multi-AZ instances, set the `DENY` or `GRANT` permission only for the *LOGIN* on the primary host. The changes are propagated to the secondary host automatically.
+ This permission only changes whether a login can view database names. Access to the databases and the objects within them is managed separately.

# Disabling fast inserts during bulk loading for Amazon RDS for SQL Server
<a name="Appendix.SQLServer.CommonDBATasks.DisableFastInserts"></a>

Starting with SQL Server 2016, fast inserts are enabled by default. Fast inserts leverage the minimal logging that occurs while the database is in the simple or bulk logged recovery model to optimize insert performance. With fast inserts, each bulk load batch acquires new extents, bypassing the allocation lookup for existing extents with available free space to optimize insert performance.

However, with fast inserts, bulk loads with small batch sizes can lead to increased unused space consumed by objects. If increasing the batch size isn't feasible, enabling trace flag 692 can help reduce the unused reserved space, at the expense of performance. Enabling this trace flag disables fast inserts while bulk loading data into heap or clustered indexes.

You enable trace flag 692 as a startup parameter using DB parameter groups. For more information, see [Parameter groups for Amazon RDS](USER_WorkingWithParamGroups.md).

Trace flag 692 is supported for Amazon RDS on SQL Server 2016 and later. For more information on trace flags, see [DBCC TRACEON - trace flags](https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql) in the Microsoft documentation.
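
After the trace flag is set in the parameter group and the instance has restarted, you can confirm that it's active with `DBCC TRACESTATUS`:

```
-- Check whether trace flag 692 is enabled globally (-1)
DBCC TRACESTATUS(692, -1);
```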

# Dropping a database in an Amazon RDS for Microsoft SQL Server DB instance
<a name="Appendix.SQLServer.CommonDBATasks.DropMirrorDB"></a>

You can drop a database on an Amazon RDS DB instance running Microsoft SQL Server in a Single-AZ or Multi-AZ deployment. To drop the database, use the following command:

```
--replace your-database-name with the name of the database you want to drop
EXECUTE msdb.dbo.rds_drop_database  N'your-database-name'
```

**Note**  
Use straight single quotes in the command. Smart quotes will cause an error.

After you use this procedure to drop the database, Amazon RDS drops all existing connections to the database and removes the database's backup history.

To grant backup and restore permissions to other users, use a script like the following:

```
USE master
GO
CREATE LOGIN user1 WITH PASSWORD=N'changeThis', DEFAULT_DATABASE=master, CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
GO
USE msdb
GO
CREATE USER user1 FOR LOGIN user1
GO
GRANT EXECUTE ON msdb.dbo.rds_backup_database TO user1
GO
GRANT EXECUTE ON msdb.dbo.rds_restore_database TO user1
GO
```

# Renaming an Amazon RDS for Microsoft SQL Server database in a Multi-AZ deployment
<a name="Appendix.SQLServer.CommonDBATasks.RenamingDB"></a>

To rename a database on a Microsoft SQL Server DB instance that uses Multi-AZ, use the following procedure:

1. Turn off Multi-AZ for the DB instance.

1. Rename the database by running `rdsadmin.dbo.rds_modify_db_name`.

1. Turn on Multi-AZ Mirroring or Always On Availability Groups for the DB instance to return it to its original state.

For more information, see [Adding Multi-AZ to a Microsoft SQL Server DB instance](USER_SQLServerMultiAZ.md#USER_SQLServerMultiAZ.Adding). 

**Note**  
If your instance doesn't use Multi-AZ, you don't need to change any settings before or after running `rdsadmin.dbo.rds_modify_db_name`.  
You can't rename a database on a read replica source instance.

**Example:** In the following example, the `rdsadmin.dbo.rds_modify_db_name` stored procedure renames a database from **MOO** to **ZAR**. This is similar to running the DDL statement `ALTER DATABASE [MOO] MODIFY NAME = [ZAR]`. 

```
EXEC rdsadmin.dbo.rds_modify_db_name N'MOO', N'ZAR'
GO
```

# Resetting the db\_owner role membership for the master user for Amazon RDS for SQL Server
<a name="Appendix.SQLServer.CommonDBATasks.ResetPassword"></a>

If you lock your master user out of the `db_owner` role membership on your RDS for SQL Server database and no other database user can grant the membership, you can restore lost membership by modifying the DB instance master user password. 

By changing the DB instance master user password, RDS grants the `db_owner` membership to the databases in the DB instance that might have been accidentally revoked. You can change the DB instance password by using the Amazon RDS console, the AWS CLI command [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html), or by using the [ModifyDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) API operation. For more information about modifying a DB instance, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).
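
The password change can also be scripted. The following is a minimal Python (boto3) sketch; the helper function, instance identifier, and password are illustrative, and the actual API call is left commented out:

```python
# Hypothetical sketch of resetting the master user password with boto3.
# The instance identifier and password below are example values.

def build_password_reset(instance_id: str, new_password: str) -> dict:
    """Build ModifyDBInstance parameters for a master password reset."""
    return {
        "DBInstanceIdentifier": instance_id,
        "MasterUserPassword": new_password,
        "ApplyImmediately": True,  # apply now instead of waiting for the maintenance window
    }

params = build_password_reset("mydbinstance", "NewStrongPassword123")
print(sorted(params))

# With boto3 installed and AWS credentials configured, you would then run:
# import boto3
# boto3.client("rds").modify_db_instance(**params)
```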

# Restoring license-terminated DB instances for Amazon RDS for SQL Server
<a name="Appendix.SQLServer.CommonDBATasks.RestoreLTI"></a>

Microsoft has requested that some Amazon RDS customers who did not report their Microsoft License Mobility information terminate their DB instance. Amazon RDS takes snapshots of these DB instances, and you can restore from the snapshot to a new DB instance that has the License Included model. 

You can restore from a snapshot of Standard Edition to either Standard Edition or Enterprise Edition. 

You can restore from a snapshot of Enterprise Edition to either Standard Edition or Enterprise Edition. 

**To restore from a SQL Server snapshot after Amazon RDS has created a final snapshot of your instance**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Snapshots**.

1. Choose the snapshot of your SQL Server DB instance. Amazon RDS creates a final snapshot of your DB instance. The name of the terminated instance snapshot is in the format `instance_name-final-snapshot`. For example, if your DB instance name is **mytest.cdxgahslksma.us-east-1.rds.com**, the final snapshot is called **mytest-final-snapshot** and is located in the same AWS Region as the original DB instance. 

1. For **Actions**, choose **Restore Snapshot**.

   The **Restore DB Instance** window appears.

1. For **License Model**, choose **license-included**. 

1. Choose the SQL Server DB engine that you want to use. 

1. For **DB Instance Identifier**, enter the name for the restored DB instance. 

1. Choose **Restore DB Instance**.

For more information about restoring from a snapshot, see [Restoring to a DB instance](USER_RestoreFromSnapshot.md). 
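
The console steps above can also be performed programmatically. The following is a minimal Python (boto3) sketch with example identifiers; the helper function is illustrative, and the API call itself is commented out:

```python
# Hypothetical sketch: restoring the final snapshot to a License Included
# DB instance with boto3. All identifiers are example values.

def build_restore_request(snapshot_id: str, new_instance_id: str) -> dict:
    """Build RestoreDBInstanceFromDBSnapshot parameters for a license-included restore."""
    return {
        "DBInstanceIdentifier": new_instance_id,
        "DBSnapshotIdentifier": snapshot_id,
        "LicenseModel": "license-included",
    }

request = build_restore_request("mytest-final-snapshot", "mytest-restored")
print(sorted(request))

# With boto3 installed and AWS credentials configured, you would then run:
# import boto3
# boto3.client("rds").restore_db_instance_from_db_snapshot(**request)
```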

# Transitioning an Amazon RDS for SQL Server database from OFFLINE to ONLINE
<a name="Appendix.SQLServer.CommonDBATasks.TransitionOnline"></a>

You can transition your Microsoft SQL Server database on an Amazon RDS DB instance from `OFFLINE` to `ONLINE`. 


| SQL Server method | Amazon RDS method | 
| --- | --- | 
| ALTER DATABASE *db_name* SET ONLINE; | EXEC rdsadmin.dbo.rds_set_database_online *db_name* | 

# Using change data capture for Amazon RDS for SQL Server
<a name="Appendix.SQLServer.CommonDBATasks.CDC"></a>

Amazon RDS supports change data capture (CDC) for your DB instances running Microsoft SQL Server. CDC captures changes that are made to the data in your tables. It stores metadata about each change, which you can access later. For more information about how CDC works, see [Change data capture](https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/track-data-changes-sql-server#Capture) in the Microsoft documentation. Before you use CDC with your Amazon RDS DB instances, enable it in the database by running `msdb.dbo.rds_cdc_enable_db`. After CDC is enabled, any user who is `db_owner` of that database can enable or disable CDC on tables in that database.

**Important**  
During restores, CDC will be disabled. All of the related metadata is automatically removed from the database. This applies to snapshot restores and point-in-time restores. After performing one of these types of restores, you can re-enable CDC and re-specify tables to track.

To enable CDC for a DB instance, run the `msdb.dbo.rds_cdc_enable_db` stored procedure.

```
exec msdb.dbo.rds_cdc_enable_db 'database_name'
```

To disable CDC for a DB instance, run the `msdb.dbo.rds_cdc_disable_db` stored procedure.

```
exec msdb.dbo.rds_cdc_disable_db 'database_name'
```

To grant CDC permissions to a user, run the following commands:

```
USE msdb
GO
GRANT EXECUTE ON msdb.dbo.rds_cdc_enable_db TO User1
GRANT EXECUTE ON msdb.dbo.rds_cdc_disable_db TO User1
GO
```

**Topics**
+ [Tracking tables with change data capture](#Appendix.SQLServer.CommonDBATasks.CDC.tables)
+ [Change data capture jobs](#Appendix.SQLServer.CommonDBATasks.CDC.jobs)
+ [Change data capture for Multi-AZ instances](#Appendix.SQLServer.CommonDBATasks.CDC.Multi-AZ)

## Tracking tables with change data capture
<a name="Appendix.SQLServer.CommonDBATasks.CDC.tables"></a>

After CDC is enabled on the database, you can start tracking specific tables. You can choose the tables to track by running [sys.sp\_cdc\_enable\_table](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sys-sp-cdc-enable-table-transact-sql).

```
--Begin tracking a table
exec sys.sp_cdc_enable_table   
   @source_schema           = N'source_schema'
,  @source_name             = N'source_name'
,  @role_name               = N'role_name'

--The following parameters are optional:

--, @capture_instance       = 'capture_instance'
--, @supports_net_changes   = supports_net_changes
--, @index_name             = 'index_name'
--, @captured_column_list   = 'captured_column_list'
--, @filegroup_name         = 'filegroup_name'
--, @allow_partition_switch = 'allow_partition_switch'
;
```

To view the CDC configuration for your tables, run [sys.sp\_cdc\_help\_change\_data\_capture](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sys-sp-cdc-help-change-data-capture-transact-sql).

```
--View CDC configuration
exec sys.sp_cdc_help_change_data_capture 

--The following parameters are optional and must be used together.
--  'schema_name', 'table_name'
;
```

For more information on CDC tables, functions, and stored procedures in SQL Server documentation, see the following:
+ [Change data capture stored procedures (Transact-SQL)](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/change-data-capture-stored-procedures-transact-sql)
+ [Change data capture functions (Transact-SQL)](https://docs.microsoft.com/en-us/sql/relational-databases/system-functions/change-data-capture-functions-transact-sql)
+ [Change data capture tables (Transact-SQL)](https://docs.microsoft.com/en-us/sql/relational-databases/system-tables/change-data-capture-tables-transact-sql)

## Change data capture jobs
<a name="Appendix.SQLServer.CommonDBATasks.CDC.jobs"></a>

When you enable CDC, SQL Server creates the CDC jobs. Database owners (`db_owner`) can view, create, modify, and delete the CDC jobs. However, the RDS system account owns them. Therefore, the jobs aren't visible from native views, procedures, or in SQL Server Management Studio.

To control behavior of CDC in a database, use native SQL Server procedures such as [sp\_cdc\_enable\_table](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sys-sp-cdc-enable-table-transact-sql) and [sp\_cdc\_start\_job](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sys-sp-cdc-start-job-transact-sql). To change CDC job parameters, like `maxtrans` and `maxscans`, you can use [sp\_cdc\_change\_job](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sys-sp-cdc-change-job-transact-sql).

To get more information about the CDC jobs, you can query the following dynamic management views and tables: 
+ sys.dm\_cdc\_errors
+ sys.dm\_cdc\_log\_scan\_sessions
+ sysjobs
+ sysjobhistory

## Change data capture for Multi-AZ instances
<a name="Appendix.SQLServer.CommonDBATasks.CDC.Multi-AZ"></a>

If you use CDC on a Multi-AZ instance, make sure the mirror's CDC job configuration matches the one on the principal. CDC jobs are mapped to the `database_id`. If the database IDs on the secondary are different from the principal, then the jobs won't be associated with the correct database. To try to prevent errors after failover, RDS drops and recreates the jobs on the new principal. The recreated jobs use the parameters that the principal recorded before failover.

Although this process runs quickly, it's still possible that the CDC jobs might run before RDS can correct them. Here are three ways to force parameters to be consistent between primary and secondary replicas:
+ Use the same job parameters for all the databases that have CDC enabled. 
+ Before you change the CDC job configuration, convert the Multi-AZ instance to Single-AZ.
+ Manually transfer the parameters whenever you change them on the principal.

To view and define the CDC parameters that are used to recreate the CDC jobs after a failover, use `rds_show_configuration` and `rds_set_configuration`.

The following example returns the value set for `cdc_capture_maxtrans`. For any parameter that is set to `RDS_DEFAULT`, RDS automatically configures the value.

```
-- Show the configuration for a parameter on either the primary or secondary replica. 
exec rdsadmin.dbo.rds_show_configuration 'cdc_capture_maxtrans';
```

To set the configuration on the secondary, run `rdsadmin.dbo.rds_set_configuration`. This procedure sets the parameter values for all of the databases on the secondary server. These settings are used only after a failover. The following example sets the `maxtrans` for all CDC capture jobs to *1000*:

```
--To set values on secondary. These are used after failover.
exec rdsadmin.dbo.rds_set_configuration 'cdc_capture_maxtrans', 1000;
```

To set the CDC job parameters on the principal, use [sys.sp\_cdc\_change\_job](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sys-sp-cdc-change-job-transact-sql) instead.

# Using SQL Server Agent for Amazon RDS
<a name="Appendix.SQLServer.CommonDBATasks.Agent"></a>

With Amazon RDS, you can use SQL Server Agent on a DB instance running Microsoft SQL Server Enterprise Edition, Standard Edition, or Web Edition. SQL Server Agent is a Microsoft Windows service that runs scheduled administrative tasks that are called jobs. You can use SQL Server Agent to run T-SQL jobs to rebuild indexes, run corruption checks, and aggregate data in a SQL Server DB instance.

When you create a SQL Server DB instance, the master user is enrolled in the `SQLAgentUserRole` role.

SQL Server Agent can run a job on a schedule, in response to a specific event, or on demand. For more information, see [SQL Server Agent](http://msdn.microsoft.com/en-us/library/ms189237) in the Microsoft documentation.

**Note**  
Avoid scheduling jobs to run during the maintenance and backup windows for your DB instance. The maintenance and backup processes that are launched by AWS could interrupt a job or cause it to be canceled.  
In Multi-AZ deployments, SQL Server Agent jobs are replicated from the primary host to the secondary host when the job replication feature is turned on. For more information, see [Turning on SQL Server Agent job replication](#SQLServerAgent.Replicate).  
Multi-AZ deployments have a limit of 10,000 SQL Server Agent jobs. If you need a higher limit, request an increase by contacting AWS Support. Open the [AWS Support Center](https://console.aws.amazon.com/support/home#/) page, sign in if necessary, and choose **Create case**. Choose **Service limit increase**. Complete and submit the form.

To view the history of an individual SQL Server Agent job in SQL Server Management Studio (SSMS), open Object Explorer, right-click the job, and then choose **View History**.

Because SQL Server Agent is running on a managed host in a DB instance, some actions aren't supported:
+ Running replication jobs and running command-line scripts by using ActiveX, Windows command shell, or Windows PowerShell aren't supported.
+ You can't manually start, stop, or restart SQL Server Agent.
+ Email notifications through SQL Server Agent aren't available from a DB instance.
+ SQL Server Agent alerts and operators aren't supported.
+ Using SQL Server Agent to create backups isn't supported. Use Amazon RDS to back up your DB instance.
+ Currently, RDS for SQL Server doesn't support the use of SQL Server Agent tokens.

## Turning on SQL Server Agent job replication
<a name="SQLServerAgent.Replicate"></a>

You can turn on SQL Server Agent job replication by using the following stored procedure:

```
EXECUTE msdb.dbo.rds_set_system_database_sync_objects @object_types = 'SQLAgentJob';
```

You can run the stored procedure on all SQL Server versions supported by Amazon RDS for SQL Server. Jobs in the following categories are replicated:
+ [Uncategorized (Local)]
+ [Uncategorized (Multi-Server)]
+ [Uncategorized]
+ Data Collector
+ Database Engine Tuning Advisor
+ Database Maintenance
+ Full-Text

Only jobs that use T-SQL job steps are replicated. Jobs with step types such as SQL Server Integration Services (SSIS), SQL Server Reporting Services (SSRS), Replication, and PowerShell aren't replicated. Jobs that use Database Mail and server-level objects aren't replicated.

**Important**  
The primary host is the source of truth for replication. Before turning on job replication, make sure that your SQL Server Agent jobs are on the primary. If you don't, turning on the feature when newer jobs are on the secondary host can delete your SQL Server Agent jobs.

You can use the following function to confirm whether replication is turned on.

```
SELECT * from msdb.dbo.rds_fn_get_system_database_sync_objects();
```

 The T-SQL query returns the following if SQL Server Agent jobs are replicating. If they're not replicating, it returns nothing for `object_class`.

![\[SQL Server Agent jobs are replicating\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/SQLAgentJob.png)


You can use the following function to find the last time objects were synchronized in UTC time.

```
SELECT * from msdb.dbo.rds_fn_server_object_last_sync_time();
```

For example, suppose that you modify a SQL Server Agent job at 01:00. You expect the most recent synchronization time to be after 01:00, indicating that synchronization has taken place.

After synchronization, the values returned for `date_created` and `date_modified` on the secondary node are expected to match.

![\[Last time server objects were synchronized was 01:21:23\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/SQLAgentJob_last_sync_time.png)


If you are also using `tempdb` replication, you can enable replication for both SQL Server Agent jobs and the `tempdb` configuration by providing both in the `@object_types` parameter:

```
EXECUTE msdb.dbo.rds_set_system_database_sync_objects @object_types = 'SQLAgentJob,TempDbFile';
```

For more information on `tempdb` replication, see [TempDB configuration for Multi-AZ deployments](SQLServer.TempDB.MAZ.md).

# SQL Server Agent roles
<a name="SQLServerAgent.AgentRoles"></a>

RDS for SQL Server supports the following SQL Server Agent roles with different levels of permissions for managing jobs:
+ **SQLAgentUserRole**

  Permissions
  + Create and manage their own jobs, schedules, and operators
  + View the properties of their own jobs and schedules
  + Cannot view or manage jobs created by other users

  This role is suitable for users who need to create and manage their own jobs but do not require access to jobs created by other users.
+ **SQLAgentReaderRole**

  Permissions
  + All permissions of SQLAgentUserRole
  + View a list of all jobs and schedules, including those created by others
  + View the properties of all jobs
  + Review job history

  This role is suitable for users who need to monitor the status of all jobs but do not need to manage them.
+ **SQLAgentOperatorRole**

  Permissions
  + All permissions of SQLAgentUserRole and SQLAgentReaderRole
  + Execute, stop, or start jobs
  + Manage job history
  + Enable/disable jobs and schedules
  + View operators and proxies

  This role provides the most comprehensive permissions and is suitable for users who need to have full control over all jobs.

Use the following command to assign the roles to your SQL Server login:

```
USE msdb;
EXEC sp_addrolemember 'SQLAgentOperatorRole', 'username';
```

## Managing SQLAgentOperatorRole in RDS for SQL Server
<a name="SQLServerAgent.AgentRoles.ManageSQLAgentOperatorRole"></a>

To view the current jobs, add your SQL Server login to the SQLAgentOperatorRole, and remove it from the role before disconnecting from your database.

To view the SQL Server Agent tree in SQL Server Management Studio, follow these instructions:

**View SQL Server Agent on SQL Server Management Studio (SSMS)**

1. Using the RDS master credentials, log in to the RDS for SQL Server instance and grant the desired user the SQLAgentUserRole.

   ```
   USE msdb
   GO
   IF NOT EXISTS(SELECT name FROM sys.database_principals WHERE name = 'UserName')
   BEGIN
   CREATE USER UserName FROM LOGIN UserName
   END
   GO
   ALTER ROLE SQLAgentUserRole ADD MEMBER UserName
   GO
   GRANT ALTER ON ROLE::[SQLAgentOperatorRole] to UserName
   GO
   ```

   These commands create the user in the `msdb` database if it doesn't already exist, and add the user to the SQLAgentUserRole so that the SQL Server Agent tree is visible in SSMS. Finally, they grant the user ALTER permission on the SQLAgentOperatorRole, which allows the user to add and remove itself from that role. 

1. To add yourself to the SQLAgentOperatorRole, connect to the RDS for SQL Server instance as the user that needs to see the jobs, and run the following script.

   ```
   use msdb
   go
   ALTER ROLE SQLAgentOperatorRole ADD MEMBER UserName
   GO
   ```

   Then right-click the **Jobs** folder and choose **Refresh**.

1. When you perform this action, the **Jobs** folder displays a **+** (plus) button. Click it to expand the list of SQL Server Agent jobs.

1. 
**Important**  
Before you disconnect from the RDS SQL Server instance, you need to remove yourself from the SQLAgentOperatorRole.

   To remove your login from the SQLAgentOperatorRole, run the following query before disconnecting or closing the Management Studio:

   ```
   USE msdb
   GO
   ALTER ROLE SQLAgentOperatorRole DROP MEMBER UserName
   GO
   ```

For more information, see [Leveraging SQLAgentOperatorRole in RDS SQL Server](https://aws.amazon.com/blogs/database/leveraging-sqlagentoperatorrole-in-rds-sql-server/).

# Adding a user to the SQLAgentUser role
<a name="SQLServerAgent.AddUser"></a>

To allow an additional login or user to use SQL Server Agent, log in as the master user and do the following:

1. Create another server-level login by using the `CREATE LOGIN` command.

1. Create a user in `msdb` using `CREATE USER` command, and then link this user to the login that you created in the previous step.

1. Add the user to the `SQLAgentUserRole` using the `sp_addrolemember` system stored procedure.

For example, suppose that your master user name is **admin** and you want to give SQL Server Agent access to a user named **theirname** with the password **theirpassword**. In that case, you can use the following procedure.

**To add a user to the SQLAgentUser role**

1. Log in as the master user.

1. Run the following commands:

   ```
   --Initially set context to master database
   USE [master];
   GO
   --Create a server-level login named theirname with password theirpassword
   CREATE LOGIN [theirname] WITH PASSWORD = 'theirpassword';
   GO
   --Set context to msdb database
   USE [msdb];
   GO
   --Create a database user named theirname and link it to server-level login theirname
   CREATE USER [theirname] FOR LOGIN [theirname];
   GO
   --Add the database user theirname to the SQLAgentUserRole in msdb
   EXEC sp_addrolemember [SQLAgentUserRole], [theirname];
   ```

# Deleting a SQL Server Agent job
<a name="SQLServerAgent.DeleteJob"></a>

You use the `sp_delete_job` stored procedure to delete SQL Server Agent jobs on Amazon RDS for Microsoft SQL Server.

You can't use SSMS to delete SQL Server Agent jobs. If you try to do so, you get an error message similar to the following:

```
The EXECUTE permission was denied on the object 'xp_regread', database 'mssqlsystemresource', schema 'sys'.
```

As a managed service, RDS is restricted from running procedures that access the Windows registry. When you use SSMS, it tries to run a process (`xp_regread`) for which RDS isn't authorized.

**Note**  
On RDS for SQL Server, only members of the sysadmin role are allowed to update or delete jobs owned by a different login. For more information, see [ Leveraging SQLAgentOperatorRole in RDS SQL Server](https://aws.amazon.com/blogs/database/leveraging-sqlagentoperatorrole-in-rds-sql-server/).

**To delete a SQL Server Agent job**
+ Run the following T-SQL statement:

  ```
  EXEC msdb..sp_delete_job @job_name = 'job_name';
  ```

# Working with Amazon RDS for Microsoft SQL Server logs
<a name="Appendix.SQLServer.CommonDBATasks.Logs"></a>

You can use the Amazon RDS console to view, watch, and download SQL Server Agent logs, Microsoft SQL Server error logs, and SQL Server Reporting Services (SSRS) logs.

## Watching log files
<a name="Appendix.SQLServer.CommonDBATasks.Logs.Watch"></a>

If you view a log in the Amazon RDS console, you can see its contents as they exist at that moment. Watching a log in the console opens it in a dynamic state so that you can see updates to it in near-real time.

Only the latest log is active for watching. For example, suppose that the following logs are shown:

![\[An image of the Logs section from the Amazon RDS console with an error log selected.\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/logs_sqlserver.png)


Only log/ERROR, as the most recent log, is being actively updated. You can choose to watch others, but they are static and will not update.

## Archiving log files
<a name="Appendix.SQLServer.CommonDBATasks.Logs.Archive"></a>

The Amazon RDS console shows logs for the past week through the current day. You can download and archive logs to keep them for reference past that time. One way to archive logs is to load them into an Amazon S3 bucket. For instructions on how to set up an Amazon S3 bucket and upload a file, see [Amazon S3 basics](https://docs.aws.amazon.com/AmazonS3/latest/userguide/AmazonS3Basics.html) in the *Amazon Simple Storage Service Getting Started Guide*. 
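
As an illustration of that workflow, the following Python sketch builds a date-stamped S3 key for an archived log; the surrounding boto3 calls are shown commented out, and all names are example values:

```python
from datetime import date

def archive_key(instance_id: str, log_name: str, day: date) -> str:
    """Build a date-stamped S3 object key for an archived log file."""
    # Slashes in the RDS log name (for example "log/ERROR") are flattened
    # so the key has a predictable instance/date/file layout.
    return f"{instance_id}/{day.isoformat()}/{log_name.replace('/', '_')}"

print(archive_key("mydbinstance", "log/ERROR", date(2024, 1, 2)))
# mydbinstance/2024-01-02/log_ERROR

# With boto3 installed and AWS credentials configured, the archive step
# would look something like this (bucket and identifiers are examples):
# import boto3
# rds = boto3.client("rds")
# s3 = boto3.client("s3")
# chunk = rds.download_db_log_file_portion(
#     DBInstanceIdentifier="mydbinstance", LogFileName="log/ERROR")
# s3.put_object(Bucket="my-log-archive",
#               Key=archive_key("mydbinstance", "log/ERROR", date.today()),
#               Body=chunk["LogFileData"])
```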

## Viewing error and agent logs
<a name="Appendix.SQLServer.CommonDBATasks.Logs.SP"></a>

To view Microsoft SQL Server error and agent logs, use the Amazon RDS stored procedure `rds_read_error_log` with the following parameters: 
+ **`@index`** – the version of the log to retrieve. The default value is 0, which retrieves the current error log. Specify 1 to retrieve the previous log, specify 2 to retrieve the one before that, and so on. 
+ **`@type`** – the type of log to retrieve. Specify 1 to retrieve an error log. Specify 2 to retrieve an agent log. 

**Example**  
The following example requests the current error log.  

```
EXEC rdsadmin.dbo.rds_read_error_log @index = 0, @type = 1;
```

For more information on SQL Server errors, see [Database engine errors](https://docs.microsoft.com/en-us/sql/relational-databases/errors-events/database-engine-events-and-errors) in the Microsoft documentation.

# Working with trace and dump files for Amazon RDS for SQL Server
<a name="Appendix.SQLServer.CommonDBATasks.TraceFiles"></a>

This section describes working with trace files and dump files for your Amazon RDS DB instances running Microsoft SQL Server. 

## Generating a trace SQL query
<a name="Appendix.SQLServer.CommonDBATasks.TraceFiles.TraceSQLQuery"></a>

```
declare @rc int 
declare @TraceID int 
declare @maxfilesize bigint 

set @maxfilesize = 5

exec @rc = sp_trace_create @TraceID output, 0, N'D:\rdsdbdata\log\rdstest', @maxfilesize, NULL
```

## Viewing an open trace
<a name="Appendix.SQLServer.CommonDBATasks.TraceFiles.ViewOpenTrace"></a>

```
select * from ::fn_trace_getinfo(default)
```

## Viewing trace contents
<a name="Appendix.SQLServer.CommonDBATasks.TraceFiles.ViewTraceContents"></a>

```
select * from ::fn_trace_gettable('D:\rdsdbdata\log\rdstest.trc', default)
```

## Setting the retention period for trace and dump files
<a name="Appendix.SQLServer.CommonDBATasks.TraceFiles.PurgeTraceFiles"></a>

Trace and dump files can accumulate and consume disk space. By default, Amazon RDS purges trace and dump files that are older than seven days. 

To view the current trace and dump file retention period, use the `rds_show_configuration` procedure, as shown in the following example. 

```
exec rdsadmin..rds_show_configuration;
```

To modify the retention period for trace files, use the `rds_set_configuration` procedure and set the `tracefile retention` in minutes. The following example sets the trace file retention period to 24 hours. 

```
exec rdsadmin..rds_set_configuration 'tracefile retention', 1440; 
```

To modify the retention period for dump files, use the `rds_set_configuration` procedure and set the `dumpfile retention` in minutes. The following example sets the dump file retention period to 3 days. 

```
exec rdsadmin..rds_set_configuration 'dumpfile retention', 4320; 
```
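
Both settings are expressed in minutes. The following small Python helper (hypothetical, for illustration only) reproduces the conversions used above:

```python
def retention_minutes(days: int = 0, hours: int = 0) -> int:
    """Convert a retention window in days/hours to the minutes RDS expects."""
    return (days * 24 + hours) * 60

print(retention_minutes(hours=24))  # tracefile retention above: 1440
print(retention_minutes(days=3))    # dumpfile retention above: 4320
```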

For security reasons, you cannot delete a specific trace or dump file on a SQL Server DB instance. To delete all unused trace or dump files, set the retention period for the files to 0. 