

# Security in Amazon RDS
<a name="UsingWithRDS"></a>

Cloud security at AWS is the highest priority. As an AWS customer, you benefit from a data center and network architecture that are built to meet the requirements of the most security-sensitive organizations.

Security is a shared responsibility between AWS and you. The [shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/) describes this as security *of* the cloud and security *in* the cloud:
+  **Security of the cloud** – AWS is responsible for protecting the infrastructure that runs AWS services in the AWS Cloud. AWS also provides you with services that you can use securely. Third-party auditors regularly test and verify the effectiveness of our security as part of the [AWS compliance programs](https://aws.amazon.com/compliance/programs/). To learn about the compliance programs that apply to Amazon RDS, see [AWS services in scope by compliance program](https://aws.amazon.com/compliance/services-in-scope/).
+  **Security in the cloud** – Your responsibility is determined by the AWS service that you use. You are also responsible for other factors including the sensitivity of your data, your organization's requirements, and applicable laws and regulations.

This documentation helps you understand how to apply the shared responsibility model when using Amazon RDS. The following topics show you how to configure Amazon RDS to meet your security and compliance objectives. You also learn how to use other AWS services that help you monitor and secure your Amazon RDS resources.

You can manage access to your Amazon RDS resources and your databases on a DB instance. The method you use to manage access depends on what type of task the user needs to perform with Amazon RDS:
+ Run your DB instance in a virtual private cloud (VPC) based on the Amazon VPC service for the greatest possible network access control. For more information about creating a DB instance in a VPC, see [Amazon VPC and Amazon RDS](USER_VPC.md).
+ Use AWS Identity and Access Management (IAM) policies to assign permissions that determine who is allowed to manage Amazon RDS resources. For example, you can use IAM to determine who is allowed to create, describe, modify, and delete DB instances, tag resources, or modify security groups.
+ Use security groups to control what IP addresses or Amazon EC2 instances can connect to your databases on a DB instance. When you first create a DB instance, its firewall prevents any database access except through rules specified by an associated security group.
+ Use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) connections with DB instances running the Db2, MySQL, MariaDB, PostgreSQL, Oracle, or Microsoft SQL Server database engines. For more information on using SSL/TLS with a DB instance, see [Using SSL/TLS to encrypt a connection to a DB instance or cluster](UsingWithRDS.SSL.md).
+ Use Amazon RDS encryption to secure your DB instances and snapshots at rest. Amazon RDS encryption uses the industry-standard AES-256 encryption algorithm to encrypt your data on the server that hosts your DB instance. For more information, see [Encrypting Amazon RDS resources](Overview.Encryption.md).
+ Use network encryption and transparent data encryption with Oracle DB instances. For more information, see [Oracle native network encryption](Appendix.Oracle.Options.NetworkEncryption.md) and [Oracle Transparent Data Encryption](Appendix.Oracle.Options.AdvSecurity.md).
+ Use the security features of your DB engine to control who can log in to the databases on a DB instance. These features work just as if the database were on your local network.

**Note**  
You have to configure security only for your use cases. You don't have to configure security access for processes that Amazon RDS manages. These include creating backups, replicating data between a primary DB instance and a read replica, and other processes.

For more information on managing access to Amazon RDS resources and your databases on a DB instance, see the following topics.

**Topics**
+ [Database authentication with Amazon RDS](database-authentication.md)
+ [Password management with Amazon RDS and AWS Secrets Manager](rds-secrets-manager.md)
+ [Data protection in Amazon RDS](DataDurability.md)
+ [Identity and access management for Amazon RDS](UsingWithRDS.IAM.md)
+ [Logging and monitoring in Amazon RDS](Overview.LoggingAndMonitoring.md)
+ [Compliance validation for Amazon RDS](RDS-compliance.md)
+ [Resilience in Amazon RDS](disaster-recovery-resiliency.md)
+ [Infrastructure security in Amazon RDS](infrastructure-security.md)
+ [Amazon RDS API and interface VPC endpoints (AWS PrivateLink)](vpc-interface-endpoints.md)
+ [Security best practices for Amazon RDS](CHAP_BestPractices.Security.md)
+ [Controlling access with security groups](Overview.RDSSecurityGroups.md)
+ [Master user account privileges](UsingWithRDS.MasterAccounts.md)
+ [Using service-linked roles for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md)
+ [Amazon VPC and Amazon RDS](USER_VPC.md)

# Database authentication with Amazon RDS
<a name="database-authentication"></a>

Amazon RDS supports several ways to authenticate database users.

Password, Kerberos, and IAM database authentication use different methods of authenticating to the database. Therefore, a specific user can log in to a database using only one authentication method. 

For PostgreSQL, use only one of the following role settings for a user of a specific database: 
+ To use IAM database authentication, assign the `rds_iam` role to the user.
+ To use Kerberos authentication, assign the `rds_ad` role to the user.
+ To use password authentication, don't assign either the `rds_iam` or `rds_ad` roles to the user.

Don't assign both the `rds_iam` and `rds_ad` roles to a user of a PostgreSQL database either directly or indirectly by nested grant access. If the `rds_iam` role is added to the master user, IAM authentication takes precedence over password authentication so the master user has to log in as an IAM user.
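
For example, the role assignments described above can be made with standard PostgreSQL `GRANT` statements (the user names here are placeholders):

```
GRANT rds_iam TO app_user;        -- app_user now authenticates with IAM
GRANT rds_ad TO directory_user;   -- directory_user now authenticates with Kerberos
```

Remember that a given user should hold at most one of these roles, directly or through nested grants.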

**Important**  
We strongly recommend that you do not use the master user directly in your applications. Instead, adhere to the best practice of using a database user created with the minimal privileges required for your application.

**Topics**
+ [Password authentication](#password-authentication)
+ [IAM database authentication](#iam-database-authentication)
+ [Kerberos authentication](#kerberos-authentication)

## Password authentication
<a name="password-authentication"></a>

With *password authentication*, your database performs all administration of user accounts. You create users with SQL statements such as `CREATE USER`, with the appropriate clause required by the DB engine for specifying passwords. For example, in MySQL the statement is `CREATE USER name IDENTIFIED BY 'password'`, while in PostgreSQL it is `CREATE USER name WITH PASSWORD 'password'`.
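
As a sketch, the engine-specific statements mentioned above look like the following (the user name and password are placeholders; choose a strong password in practice):

```
-- MySQL
CREATE USER 'app_user'@'%' IDENTIFIED BY 'choose-a-strong-password';

-- PostgreSQL
CREATE USER app_user WITH PASSWORD 'choose-a-strong-password';
```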

With password authentication, your database controls and authenticates user accounts. If a DB engine has strong password management features, they can enhance security. Password authentication might be easier to administer when you have a small user community. Because clear text passwords are generated in this case, integrating with AWS Secrets Manager can enhance security.

For information about using Secrets Manager with Amazon RDS, see [Creating a basic secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_create-basic-secret.html) and [Rotating secrets for supported Amazon RDS databases](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets-rds.html) in the *AWS Secrets Manager User Guide*. For information about programmatically retrieving your secrets in your custom applications, see [Retrieving the secret value](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_retrieve-secret.html) in the *AWS Secrets Manager User Guide*.

## IAM database authentication
<a name="iam-database-authentication"></a>

You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token.

For more information about IAM database authentication, including information about availability for specific DB engines, see [IAM database authentication for MariaDB, MySQL, and PostgreSQL](UsingWithRDS.IAMDBAuth.md).

## Kerberos authentication
<a name="kerberos-authentication"></a>

Amazon RDS supports external authentication of database users using Kerberos and Microsoft Active Directory. Kerberos is a network authentication protocol that uses tickets and symmetric-key cryptography to eliminate the need to transmit passwords over the network. Kerberos is built into Active Directory and is designed to authenticate users to network resources, such as databases.

Amazon RDS support for Kerberos and Active Directory provides the benefits of single sign-on and centralized authentication of database users. You can keep your user credentials in Active Directory, which provides a centralized place for storing and managing credentials for multiple DB instances.

To use credentials from your self-managed Active Directory, you need to set up a trust relationship with the AWS Directory Service for Microsoft Active Directory that the DB instance is joined to.

RDS for PostgreSQL and RDS for MySQL support one-way and two-way forest trust relationships with forest-wide authentication or selective authentication.

In some scenarios, you can configure Kerberos authentication over an external trust relationship. This requires additional settings in your self-managed Active Directory, including but not limited to [Kerberos Forest Search Order](https://learn.microsoft.com/en-us/troubleshoot/windows-server/active-directory/kfso-not-work-in-external-trust-event-is-17).

Microsoft SQL Server and PostgreSQL DB instances support one-way and two-way forest trust relationships. Oracle DB instances support one-way and two-way external and forest trust relationships. For more information, see [When to create a trust relationship](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/setup_trust.html) in the *AWS Directory Service Administration Guide*.

For information about Kerberos authentication with a specific DB engine, see the following:
+ [Working with AWS Managed Active Directory with RDS for SQL Server](USER_SQLServerWinAuth.md)
+ [Using Kerberos authentication for Amazon RDS for MySQL](mysql-kerberos.md)
+ [Configuring Kerberos authentication for Amazon RDS for Oracle](oracle-kerberos.md)
+ [Using Kerberos authentication with Amazon RDS for PostgreSQL](postgresql-kerberos.md)
+ [Using Kerberos authentication for Amazon RDS for Db2](db2-kerberos.md)

**Note**  
Currently, Kerberos authentication isn't supported for MariaDB DB instances.

# Password management with Amazon RDS and AWS Secrets Manager
<a name="rds-secrets-manager"></a>

Amazon RDS integrates with Secrets Manager to manage master user passwords for your DB instances and Multi-AZ DB clusters.

**Topics**
+ [Limitations for Secrets Manager integration with Amazon RDS](#rds-secrets-manager-limitations)
+ [Overview of managing master user passwords with AWS Secrets Manager](#rds-secrets-manager-overview)
+ [Benefits of managing master user passwords with Secrets Manager](#rds-secrets-manager-benefits)
+ [Permissions required for Secrets Manager integration](#rds-secrets-manager-permissions)
+ [Enforcing RDS management of the master user password in AWS Secrets Manager](#rds-secrets-manager-auth)
+ [Managing the master user password for a DB instance with Secrets Manager](#rds-secrets-manager-db-instance)
+ [Managing the master user password for an RDS for Oracle tenant database with Secrets Manager](#rds-secrets-manager-tenant)
+ [Managing the master user password for a Multi-AZ DB cluster with Secrets Manager](#rds-secrets-manager-db-cluster)
+ [Rotating the master user password secret for a DB instance](#rds-secrets-manager-rotate-db-instance)
+ [Rotating the master user password secret for a Multi-AZ DB cluster](#rds-secrets-manager-rotate-db-cluster)
+ [Viewing the details about a secret for a DB instance](#rds-secrets-manager-view-db-instance)
+ [Viewing the details about a secret for a Multi-AZ DB cluster](#rds-secrets-manager-view-db-cluster)
+ [Viewing the details about a secret for a tenant database](#rds-secrets-manager-view-tenant)
+ [Region and version availability](#rds-secrets-manager-availability)

## Limitations for Secrets Manager integration with Amazon RDS
<a name="rds-secrets-manager-limitations"></a>

Managing master user passwords with Secrets Manager isn't supported for the following features:
+ Creating a read replica when the source DB or DB cluster manages credentials with Secrets Manager. This applies to all DB engines except RDS for SQL Server.
+ Amazon RDS Blue/Green Deployments
+ Amazon RDS Custom
+ Oracle Data Guard switchover

## Overview of managing master user passwords with AWS Secrets Manager
<a name="rds-secrets-manager-overview"></a>

With AWS Secrets Manager, you can replace hard-coded credentials in your code, including database passwords, with an API call to Secrets Manager to retrieve the secret programmatically. For more information about Secrets Manager, see the [AWS Secrets Manager User Guide](https://docs.aws.amazon.com/secretsmanager/latest/userguide/).

When you store database secrets in Secrets Manager, your AWS account incurs charges. For information about pricing, see [AWS Secrets Manager Pricing](https://aws.amazon.com/secrets-manager/pricing).

You can specify that RDS manages the master user password in Secrets Manager for an Amazon RDS DB instance or Multi-AZ DB cluster when you perform one of the following operations:
+ Create a DB instance
+ Create a Multi-AZ DB cluster
+ Create a tenant database in an RDS for Oracle CDB
+ Modify a DB instance
+ Modify a Multi-AZ DB cluster
+ Modify a tenant database (RDS for Oracle only)
+ Restore a DB instance from Amazon S3
+ Restore a DB instance from a snapshot or to a point in time (RDS for Oracle only)

When you specify that RDS manages the master user password in Secrets Manager, RDS generates the password and stores it in Secrets Manager. You can interact directly with the secret to retrieve the credentials for the master user. You can also specify a customer managed key to encrypt the secret, or use the KMS key that is provided by Secrets Manager.

RDS manages the settings for the secret and rotates the secret every seven days by default. You can modify some of the settings, such as the rotation schedule. If you delete a DB instance that manages a secret in Secrets Manager, the secret and its associated metadata are also deleted.

To connect to a DB instance or Multi-AZ DB cluster with the credentials in a secret, you can retrieve the secret from Secrets Manager. For more information, see [Retrieve secrets from AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/retrieving-secrets.html) and [Connect to a SQL database with credentials in an AWS Secrets Manager secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/retrieving-secrets_jdbc.html) in the *AWS Secrets Manager User Guide*.
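
The secret that RDS manages stores the master user credentials as a JSON document with `username` and `password` keys in its `SecretString`. The following minimal sketch shows how an application might unpack that document after retrieving it; the function name and sample values are illustrative, and in a real application you would obtain the string from the Secrets Manager `GetSecretValue` API rather than a literal.

```python
import json

def parse_rds_master_secret(secret_string):
    """Extract (username, password) from an RDS-managed master user secret.

    The SecretString of an RDS-managed secret is a JSON document
    containing "username" and "password" keys.
    """
    secret = json.loads(secret_string)
    return secret["username"], secret["password"]

# Stand-in for the SecretString returned by GetSecretValue.
example = '{"username": "admin", "password": "example-password"}'
user, password = parse_rds_master_secret(example)
print(user)  # admin
```

You would then pass the returned user name and password to your database driver's connect call.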

## Benefits of managing master user passwords with Secrets Manager
<a name="rds-secrets-manager-benefits"></a>

Managing RDS master user passwords with Secrets Manager provides the following benefits:
+ RDS automatically generates database credentials.
+ RDS automatically stores and manages database credentials in AWS Secrets Manager.
+ RDS rotates database credentials regularly, without requiring application changes.
+ Secrets Manager secures database credentials from human access and plain text view.
+ Secrets Manager allows retrieval of database credentials in secrets for database connections.
+ Secrets Manager allows fine-grained control of access to database credentials in secrets using IAM.
+ You can optionally separate database encryption from credentials encryption with different KMS keys.
+ You can eliminate manual management and rotation of database credentials.
+ You can monitor database credentials easily with AWS CloudTrail and Amazon CloudWatch.

For more information about the benefits of Secrets Manager, see the [AWS Secrets Manager User Guide](https://docs.aws.amazon.com/secretsmanager/latest/userguide/).

## Permissions required for Secrets Manager integration
<a name="rds-secrets-manager-permissions"></a>

Users must have the required permissions to perform operations related to Secrets Manager integration. You can create IAM policies that grant permissions to perform specific API operations on the specified resources they need. You can then attach those policies to the IAM permission sets or roles that require those permissions. For more information, see [Identity and access management for Amazon RDS](UsingWithRDS.IAM.md).

For create, modify, or restore operations, the user who specifies that Amazon RDS manages the master user password in Secrets Manager must have permissions to perform the following operations:
+ `kms:DescribeKey`
+ `secretsmanager:CreateSecret`
+ `secretsmanager:TagResource`

The `kms:DescribeKey` permission is required to access the customer managed key specified by `MasterUserSecretKmsKeyId` and to describe the default `aws/secretsmanager` key.

For create, modify, or restore operations, the user who specifies the customer managed key to encrypt the secret in Secrets Manager must have permissions to perform the following operations:
+ `kms:Decrypt`
+ `kms:GenerateDataKey`
+ `kms:CreateGrant`

For modify operations, the user who rotates the master user password in Secrets Manager must have permissions to perform the following operation:
+ `secretsmanager:RotateSecret`
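
Taken together, the permissions listed above might be granted with an identity-based policy like the following sketch. This is an assumption-laden example, not a prescribed policy: in practice, scope `Resource` to the specific KMS keys and secrets involved rather than `*`, and grant only the subset each user actually needs.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:DescribeKey",
                "kms:Decrypt",
                "kms:GenerateDataKey",
                "kms:CreateGrant",
                "secretsmanager:CreateSecret",
                "secretsmanager:TagResource",
                "secretsmanager:RotateSecret"
            ],
            "Resource": "*"
        }
    ]
}
```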

## Enforcing RDS management of the master user password in AWS Secrets Manager
<a name="rds-secrets-manager-auth"></a>

You can use IAM condition keys to enforce RDS management of the master user password in AWS Secrets Manager. The following policy doesn't allow users to create or restore DB instances or DB clusters or create or modify tenant databases unless the master user password is managed by RDS in Secrets Manager.


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["rds:CreateDBInstance", "rds:CreateDBCluster", "rds:RestoreDBInstanceFromS3", "rds:RestoreDBClusterFromS3",
                       "rds:RestoreDBInstanceFromDBSnapshot", "rds:RestoreDBInstanceToPointInTime", "rds:CreateTenantDatabase",
                       "rds:ModifyTenantDatabase"],
            "Resource": "*",
            "Condition": {
                "Bool": {
                    "rds:ManageMasterUserPassword": false
                }
            }
        }
    ]
}
```


**Note**  
This policy enforces password management in AWS Secrets Manager at creation time. However, a user can still turn off Secrets Manager integration and manually set a master password by modifying the instance.  
To prevent this, include `rds:ModifyDBInstance` and `rds:ModifyDBCluster` in the `Action` block of the policy. Be aware that this also prevents the user from applying any further modifications to existing instances that don't have Secrets Manager integration enabled.
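
As a sketch, the stricter variant described in this note extends the `Action` list of the same Deny statement (mirroring the condition used above; whether this fits your workflows depends on how your users modify existing instances):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "rds:CreateDBInstance",
                "rds:CreateDBCluster",
                "rds:ModifyDBInstance",
                "rds:ModifyDBCluster",
                "rds:RestoreDBInstanceFromS3",
                "rds:RestoreDBClusterFromS3",
                "rds:RestoreDBInstanceFromDBSnapshot",
                "rds:RestoreDBInstanceToPointInTime",
                "rds:CreateTenantDatabase",
                "rds:ModifyTenantDatabase"
            ],
            "Resource": "*",
            "Condition": {
                "Bool": {
                    "rds:ManageMasterUserPassword": false
                }
            }
        }
    ]
}
```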

For more information about using condition keys in IAM policies, see [Policy condition keys for Amazon RDS](security_iam_service-with-iam.md#UsingWithRDS.IAM.Conditions) and [Example policies: Using condition keys](UsingWithRDS.IAM.Conditions.Examples.md).

## Managing the master user password for a DB instance with Secrets Manager
<a name="rds-secrets-manager-db-instance"></a>

You can configure RDS management of the master user password in Secrets Manager when you perform the following actions:
+ [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md)
+ [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md)
+ [Restoring a backup into an Amazon RDS for MySQL DB instance](MySQL.Procedural.Importing.md)
+ [Restoring to a DB instance](USER_RestoreFromSnapshot.md) (RDS for Oracle only)
+ [Restoring a DB instance to a specified time for Amazon RDS](USER_PIT.md) (RDS for Oracle only)

You can perform the preceding operations using the RDS console, the AWS CLI, or the RDS API.

### Console
<a name="rds-secrets-manager-db-instance-console"></a>

Follow the instructions for creating or modifying a DB instance with the RDS console:
+ [Creating a DB instance](USER_CreateDBInstance.md#USER_CreateDBInstance.Creating)
+ [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md)
+ [Importing data from Amazon S3 to a new MySQL DB instance](MySQL.Procedural.Importing.md#MySQL.Procedural.Importing.PerformingImport)

When you use the RDS console to perform one of these operations, you can specify that the master user password is managed by RDS in Secrets Manager. When you're creating or restoring a DB instance, select **Manage master credentials in AWS Secrets Manager** in **Credential settings**. When you're modifying a DB instance, select **Manage master credentials in AWS Secrets Manager** in **Settings**.

The following image is an example of the **Manage master credentials in AWS Secrets Manager** setting when you are creating or restoring a DB instance.

![\[Manage master credentials in AWS Secrets Manager\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/secrets-manager-credential-settings-db-instance.png)


When you select this option, RDS generates the master user password and manages it throughout its lifecycle in Secrets Manager.

![\[Manage master credentials in AWS Secrets Manager selected\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/secrets-manager-integration-create-db-instance.png)


You can choose to encrypt the secret with a KMS key that Secrets Manager provides or with a customer managed key that you create. After RDS is managing the database credentials for a DB instance, you can't change the KMS key used to encrypt the secret.

You can choose other settings to meet your requirements. For more information about the available settings when you're creating a DB instance, see [Settings for DB instances](USER_CreateDBInstance.Settings.md). For more information about the available settings when you're modifying a DB instance, see [Settings for DB instances](USER_ModifyInstance.Settings.md).

### AWS CLI
<a name="rds-secrets-manager-db-instance-cli"></a>

To manage the master user password with RDS in Secrets Manager, specify the `--manage-master-user-password` option in one of the following AWS CLI commands:
+ [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html)
+ [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html)
+ [restore-db-instance-from-s3](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-s3.html)
+ [restore-db-instance-from-db-snapshot](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-db-snapshot.html) (RDS for Oracle only)
+ [restore-db-instance-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-to-point-in-time.html) (RDS for Oracle only)

When you specify the `--manage-master-user-password` option in these commands, RDS generates the master user password and manages it throughout its lifecycle in Secrets Manager.

To encrypt the secret, you can specify a customer managed key or use the default KMS key that is provided by Secrets Manager. Use the `--master-user-secret-kms-key-id` option to specify a customer managed key. The AWS KMS key identifier is the key ARN, key ID, alias ARN, or alias name for the KMS key. To use a KMS key in a different AWS account, specify the key ARN or alias ARN. After RDS is managing the database credentials for a DB instance, you can't change the KMS key that is used to encrypt the secret.

You can choose other settings to meet your requirements. For more information about the available settings when you are creating a DB instance, see [Settings for DB instances](USER_CreateDBInstance.Settings.md). For more information about the available settings when you are modifying a DB instance, see [Settings for DB instances](USER_ModifyInstance.Settings.md).

The following example creates a DB instance and specifies that RDS manages the master user password in Secrets Manager. The secret is encrypted using the KMS key that is provided by Secrets Manager.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --engine mysql \
    --engine-version 8.0.39 \
    --db-instance-class db.r5b.large \
    --allocated-storage 200 \
    --master-username testUser \
    --manage-master-user-password
```
For Windows:  

```
aws rds create-db-instance ^
    --db-instance-identifier mydbinstance ^
    --engine mysql ^
    --engine-version 8.0.39 ^
    --db-instance-class db.r5b.large ^
    --allocated-storage 200 ^
    --master-username testUser ^
    --manage-master-user-password
```

### RDS API
<a name="rds-secrets-manager-db-instance-api"></a>

To specify that RDS manages the master user password in Secrets Manager, set the `ManageMasterUserPassword` parameter to `true` in one of the following RDS API operations:
+ [CreateDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html)
+ [ModifyDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html)
+ [RestoreDBInstanceFromS3](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromS3.html)
+ [RestoreDBInstanceFromSnapshot](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromSnapshot.html) (RDS for Oracle only)
+ [RestoreDBInstanceToPointInTime](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceToPointInTime.html) (RDS for Oracle only)

When you set the `ManageMasterUserPassword` parameter to `true` in one of these operations, RDS generates the master user password and manages it throughout its lifecycle in Secrets Manager.

To encrypt the secret, you can specify a customer managed key or use the default KMS key that is provided by Secrets Manager. Use the `MasterUserSecretKmsKeyId` parameter to specify a customer managed key. The AWS KMS key identifier is the key ARN, key ID, alias ARN, or alias name for the KMS key. To use a KMS key in a different AWS account, specify the key ARN or alias ARN. After RDS is managing the database credentials for a DB instance, you can't change the KMS key that is used to encrypt the secret.

## Managing the master user password for an RDS for Oracle tenant database with Secrets Manager
<a name="rds-secrets-manager-tenant"></a>

You can configure RDS management of the master user password in Secrets Manager when you perform the following actions:
+ [Adding an RDS for Oracle tenant database to your CDB instance](oracle-cdb-configuring.adding.pdb.md)
+ [Modifying an RDS for Oracle tenant database](oracle-cdb-configuring.modifying.pdb.md)

You can use the RDS console, the AWS CLI, or the RDS API to perform the preceding actions.

### Console
<a name="rds-secrets-manager-tenant-console"></a>

Follow the instructions for creating or modifying an RDS for Oracle tenant database with the RDS console:
+ [Adding an RDS for Oracle tenant database to your CDB instance](oracle-cdb-configuring.adding.pdb.md)
+ [Modifying an RDS for Oracle tenant database](oracle-cdb-configuring.modifying.pdb.md)

When you use the RDS console to perform one of the preceding operations, you can specify that RDS manage the master password in Secrets Manager. When you create a tenant database, select **Manage master credentials in AWS Secrets Manager** in **Credential settings**. When you modify a tenant database, select **Manage master credentials in AWS Secrets Manager** in **Settings**.

The following image is an example of the **Manage master credentials in AWS Secrets Manager** setting when you are creating a tenant database.

![\[Manage master credentials in AWS Secrets Manager\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/secrets-manager-credential-settings-db-instance.png)


When you select this option, RDS generates the master user password and manages it throughout its lifecycle in Secrets Manager.

![\[Manage master credentials in AWS Secrets Manager selected\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/secrets-manager-integration-create-db-instance.png)


You can choose to encrypt the secret with a KMS key that Secrets Manager provides or with a customer managed key that you create. After RDS is managing the database credentials for a tenant database, you can't change the KMS key that is used to encrypt the secret.

You can choose other settings to meet your requirements. For more information about the available settings when you are creating a tenant database, see [Settings for DB instances](USER_CreateDBInstance.Settings.md). For more information about the available settings when you are modifying a tenant database, see [Settings for DB instances](USER_ModifyInstance.Settings.md).

### AWS CLI
<a name="rds-secrets-manager-tenant-cli"></a>

To manage the master user password with RDS in Secrets Manager, specify the `--manage-master-user-password` option in one of the following AWS CLI commands:
+ [create-tenant-database](https://docs.aws.amazon.com/cli/latest/reference/rds/create-tenant-database.html)
+ [modify-tenant-database](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-tenant-database.html)

When you specify the `--manage-master-user-password` option in the preceding commands, RDS generates the master user password and manages it throughout its lifecycle in Secrets Manager.

To encrypt the secret, you can specify a customer managed key or use the default KMS key that is provided by Secrets Manager. Use the `--master-user-secret-kms-key-id` option to specify a customer managed key. The AWS KMS key identifier is the key ARN, key ID, alias ARN, or alias name for the KMS key. To use a KMS key in a different AWS account, specify the key ARN or alias ARN. After RDS is managing the database credentials for a tenant database, you can't change the KMS key that is used to encrypt the secret.

You can choose other settings to meet your requirements. For more information about the available settings when you are creating a tenant database, see [create-tenant-database](https://docs.aws.amazon.com/cli/latest/reference/rds/create-tenant-database.html). For more information about the available settings when you are modifying a tenant database, see [modify-tenant-database](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-tenant-database.html).

The following example creates an RDS for Oracle tenant database and specifies that RDS manages the master user password in Secrets Manager. The secret is encrypted using the KMS key that is provided by Secrets Manager.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds create-tenant-database --region us-east-1 \
    --db-instance-identifier my-cdb-inst \
    --tenant-db-name mypdb2 \
    --master-username mypdb2-admin \
    --character-set-name UTF-16 \
    --manage-master-user-password
```
For Windows:  

```
aws rds create-tenant-database --region us-east-1 ^
    --db-instance-identifier my-cdb-inst ^
    --tenant-db-name mypdb2 ^
    --master-username mypdb2-admin ^
    --character-set-name UTF-16 ^
    --manage-master-user-password
```

### RDS API
<a name="rds-secrets-manager-db-instance-api"></a>

To specify that RDS manages the master user password in Secrets Manager, set the `ManageMasterUserPassword` parameter to `true` in one of the following RDS API operations:
+ [CreateTenantDatabase](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateTenantDatabase.html)
+ [ModifyTenantDatabase](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyTenantDatabase.html)

When you set the `ManageMasterUserPassword` parameter to `true` in one of these operations, RDS generates the master user password and manages it throughout its lifecycle in Secrets Manager.

To encrypt the secret, you can specify a customer managed key or use the default KMS key that is provided by Secrets Manager. Use the `MasterUserSecretKmsKeyId` parameter to specify a customer managed key. The AWS KMS key identifier is the key ARN, key ID, alias ARN, or alias name for the KMS key. To use a KMS key in a different AWS account, specify the key ARN or alias ARN. After RDS is managing the database credentials for a tenant database, you can't change the KMS key that is used to encrypt the secret.

## Managing the master user password for a Multi-AZ DB cluster with Secrets Manager
<a name="rds-secrets-manager-db-cluster"></a>

You can configure RDS management of the master user password in Secrets Manager when you perform the following actions:
+ [Creating a Multi-AZ DB cluster for Amazon RDS](create-multi-az-db-cluster.md)
+ [Modifying a Multi-AZ DB cluster for Amazon RDS](modify-multi-az-db-cluster.md)

You can use the RDS console, the AWS CLI, or the RDS API to perform these actions.

### Console
<a name="rds-secrets-manager-db-cluster-console"></a>

Follow the instructions for creating or modifying a Multi-AZ DB cluster with the RDS console:
+ [Creating a DB cluster](create-multi-az-db-cluster.md#create-multi-az-db-cluster-creating)
+ [Modifying a Multi-AZ DB cluster for Amazon RDS](modify-multi-az-db-cluster.md)

When you use the RDS console to perform one of these operations, you can specify that the master user password is managed by RDS in Secrets Manager. To do so when you are creating a DB cluster, select **Manage master credentials in AWS Secrets Manager** in **Credential settings**. When you are modifying a DB cluster, select **Manage master credentials in AWS Secrets Manager** in **Settings**.

The following image is an example of the **Manage master credentials in AWS Secrets Manager** setting when you are creating a DB cluster.

![\[Manage master credentials in AWS Secrets Manager\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/secrets-manager-credential-settings.png)


When you select this option, RDS generates the master user password and manages it throughout its lifecycle in Secrets Manager.

![\[Manage master credentials in AWS Secrets Manager selected\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/secrets-manager-integration-create.png)


You can choose to encrypt the secret with a KMS key that Secrets Manager provides or with a customer managed key that you create. After RDS is managing the database credentials for a DB cluster, you can't change the KMS key that is used to encrypt the secret.

You can choose other settings to meet your requirements.

For more information about the available settings when you are creating a Multi-AZ DB cluster, see [Settings for creating Multi-AZ DB clusters](create-multi-az-db-cluster.md#create-multi-az-db-cluster-settings). For more information about the available settings when you are modifying a Multi-AZ DB cluster, see [Settings for modifying Multi-AZ DB clusters](modify-multi-az-db-cluster.md#modify-multi-az-db-cluster-settings).

### AWS CLI
<a name="rds-secrets-manager-db-cluster-cli"></a>

To specify that RDS manages the master user password in Secrets Manager, specify the `--manage-master-user-password` option in one of the following commands:
+ [create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html)
+ [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html)

When you specify the `--manage-master-user-password` option in these commands, RDS generates the master user password and manages it throughout its lifecycle in Secrets Manager.

To encrypt the secret, you can specify a customer managed key or use the default KMS key that is provided by Secrets Manager. Use the `--master-user-secret-kms-key-id` option to specify a customer managed key. The AWS KMS key identifier is the key ARN, key ID, alias ARN, or alias name for the KMS key. To use a KMS key in a different AWS account, specify the key ARN or alias ARN. After RDS is managing the database credentials for a DB cluster, you can't change the KMS key that is used to encrypt the secret.

You can choose other settings to meet your requirements.

For more information about the available settings when you are creating a Multi-AZ DB cluster, see [Settings for creating Multi-AZ DB clusters](create-multi-az-db-cluster.md#create-multi-az-db-cluster-settings). For more information about the available settings when you are modifying a Multi-AZ DB cluster, see [Settings for modifying Multi-AZ DB clusters](modify-multi-az-db-cluster.md#modify-multi-az-db-cluster-settings).

This example creates a Multi-AZ DB cluster and specifies that RDS manages the password in Secrets Manager. The secret is encrypted using the KMS key that is provided by Secrets Manager.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds create-db-cluster \
    --db-cluster-identifier mysql-multi-az-db-cluster \
    --engine mysql \
    --engine-version 8.0.39 \
    --backup-retention-period 1 \
    --allocated-storage 4000 \
    --storage-type io1 \
    --iops 10000 \
    --db-cluster-instance-class db.r6gd.xlarge \
    --master-username testUser \
    --manage-master-user-password
```
For Windows:  

```
aws rds create-db-cluster ^
    --db-cluster-identifier mysql-multi-az-db-cluster ^
    --engine mysql ^
    --engine-version 8.0.39 ^
    --backup-retention-period 1 ^
    --allocated-storage 4000 ^
    --storage-type io1 ^
    --iops 10000 ^
    --db-cluster-instance-class db.r6gd.xlarge ^
    --master-username testUser ^
    --manage-master-user-password
```

### RDS API
<a name="rds-secrets-manager-db-cluster-api"></a>

To specify that RDS manages the master user password in Secrets Manager, set the `ManageMasterUserPassword` parameter to `true` in one of the following operations:
+ [CreateDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html)
+ [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html)

When you set the `ManageMasterUserPassword` parameter to `true` in one of these operations, RDS generates the master user password and manages it throughout its lifecycle in Secrets Manager.

To encrypt the secret, you can specify a customer managed key or use the default KMS key that is provided by Secrets Manager. Use the `MasterUserSecretKmsKeyId` parameter to specify a customer managed key. The AWS KMS key identifier is the key ARN, key ID, alias ARN, or alias name for the KMS key. To use a KMS key in a different AWS account, specify the key ARN or alias ARN. After RDS is managing the database credentials for a DB cluster, you can't change the KMS key that is used to encrypt the secret.

## Rotating the master user password secret for a DB instance
<a name="rds-secrets-manager-rotate-db-instance"></a>

When RDS rotates a master user password secret, Secrets Manager generates a new secret version for the existing secret. The new version of the secret contains the new master user password. Amazon RDS changes the master user password for the DB instance to match the password for the new secret version.

You can rotate a secret immediately instead of waiting for a scheduled rotation. To rotate a master user password secret in Secrets Manager, modify the DB instance. For information about modifying a DB instance, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

You can rotate a master user password secret immediately with the RDS console, the AWS CLI, or the RDS API. The new password is always 28 characters long and contains at least one uppercase character, one lowercase character, one number, and one punctuation character.
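If downstream tooling needs to confirm that a retrieved credential matches this generated-password policy, a check like the following Python sketch can help. The policy details are taken from the text above; the helper name and sample values are our own:

```python
import string

def matches_rds_rotation_policy(password: str) -> bool:
    """Check a password against the policy described above: exactly
    28 characters, with at least one uppercase letter, one lowercase
    letter, one digit, and one punctuation character."""
    return (
        len(password) == 28
        and any(c.isupper() for c in password)
        and any(c.islower() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

# Hypothetical passwords for illustration, not real secrets:
print(matches_rds_rotation_policy("Ab3!" + "x" * 24))  # True
print(matches_rds_rotation_policy("short"))            # False
```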

### Console
<a name="rds-secrets-manager-rotate-db-instance-console"></a>

To rotate a master user password secret using the RDS console, modify the DB instance and select **Rotate secret immediately** in **Settings**.

![\[Rotate a master user password secret immediately\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/secrets-manager-integration-rotate.png)


Follow the instructions for modifying a DB instance with the RDS console in [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md). You must choose **Apply immediately** on the confirmation page.

### AWS CLI
<a name="rds-secrets-manager-rotate-db-instance-cli"></a>

To rotate a master user password secret using the AWS CLI, use the [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) command and specify the `--rotate-master-user-password` option. You must specify the `--apply-immediately` option when you rotate the master password.

This example rotates a master user password secret.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --rotate-master-user-password \
    --apply-immediately
```
For Windows:  

```
aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --rotate-master-user-password ^
    --apply-immediately
```

### RDS API
<a name="rds-secrets-manager-rotate-db-instance-api"></a>

You can rotate a master user password secret by using the [ModifyDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html) operation and setting the `RotateMasterUserPassword` parameter to `true`. You must set the `ApplyImmediately` parameter to `true` when you rotate the master password.

## Rotating the master user password secret for a Multi-AZ DB cluster
<a name="rds-secrets-manager-rotate-db-cluster"></a>

When RDS rotates a master user password secret, Secrets Manager generates a new secret version for the existing secret. The new version of the secret contains the new master user password. Amazon RDS changes the master user password for the Multi-AZ DB cluster to match the password for the new secret version.

You can rotate a secret immediately instead of waiting for a scheduled rotation. To rotate a master user password secret in Secrets Manager, modify the Multi-AZ DB cluster. For information about modifying a Multi-AZ DB cluster, see [Modifying a Multi-AZ DB cluster for Amazon RDS](modify-multi-az-db-cluster.md). 

You can rotate a master user password secret immediately with the RDS console, the AWS CLI, or the RDS API. The new password is always 28 characters long and contains at least one uppercase character, one lowercase character, one number, and one punctuation character.

### Console
<a name="rds-secrets-manager-rotate-db-cluster-console"></a>

To rotate a master user password secret using the RDS console, modify the Multi-AZ DB cluster and select **Rotate secret immediately** in **Settings**.

![\[Rotate a master user password secret immediately\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/secrets-manager-integration-rotate-taz-cluster.png)


Follow the instructions for modifying a Multi-AZ DB cluster with the RDS console in [Modifying a Multi-AZ DB cluster for Amazon RDS](modify-multi-az-db-cluster.md). You must choose **Apply immediately** on the confirmation page.

### AWS CLI
<a name="rds-secrets-manager-rotate-db-cluster-cli"></a>

To rotate a master user password secret using the AWS CLI, use the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) command and specify the `--rotate-master-user-password` option. You must specify the `--apply-immediately` option when you rotate the master password.

This example rotates a master user password secret.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds modify-db-cluster \
    --db-cluster-identifier mydbcluster \
    --rotate-master-user-password \
    --apply-immediately
```
For Windows:  

```
aws rds modify-db-cluster ^
    --db-cluster-identifier mydbcluster ^
    --rotate-master-user-password ^
    --apply-immediately
```

### RDS API
<a name="rds-secrets-manager-rotate-db-cluster-api"></a>

You can rotate a master user password secret by using the [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) operation and setting the `RotateMasterUserPassword` parameter to `true`. You must set the `ApplyImmediately` parameter to `true` when you rotate the master password.

## Viewing the details about a secret for a DB instance
<a name="rds-secrets-manager-view-db-instance"></a>

You can retrieve your secrets using the console ([https://console.aws.amazon.com/secretsmanager/](https://console.aws.amazon.com/secretsmanager/)) or the AWS CLI ([get-secret-value](https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/get-secret-value.html) Secrets Manager command).

You can find the Amazon Resource Name (ARN) of a secret managed by RDS in Secrets Manager with the RDS console, the AWS CLI, or the RDS API.

### Console
<a name="rds-secrets-manager-view-db-instance-console"></a>

**To view the details about a secret managed by RDS in Secrets Manager**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the name of the DB instance to show its details.

1. Choose the **Configuration** tab.

   In **Master Credentials ARN**, you can view the secret ARN.  
![\[View the details about a secret managed by RDS in Secrets Manager\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/secrets-manager-integration-view-instance.png)

   You can follow the **Manage in Secrets Manager** link to view and manage the secret in the Secrets Manager console.

### AWS CLI
<a name="rds-secrets-manager-view-db-instance-cli"></a>

You can use the [describe-db-instances](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html) RDS CLI command to find the following information about a secret managed by RDS in Secrets Manager:
+ `SecretArn` – The ARN of the secret
+ `SecretStatus` – The status of the secret

  The possible status values include the following:
  + `creating` – The secret is being created.
  + `active` – The secret is available for normal use and rotation.
  + `rotating` – The secret is being rotated.
  + `impaired` – The secret can be used to access database credentials, but it can't be rotated. A secret might have this status if, for example, permissions are changed so that RDS can no longer access the secret or the KMS key for the secret.

    When a secret has this status, you can correct the condition that caused it. If you do, the status remains `impaired` until the next rotation. Alternatively, you can modify the DB instance to turn off automatic management of database credentials, and then modify the DB instance again to turn on automatic management of database credentials. To modify the DB instance, use the `--manage-master-user-password` option in the [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) command.
+ `KmsKeyId` – The ARN of the KMS key that is used to encrypt the secret

Specify the `--db-instance-identifier` option to show output for a specific DB instance. This example shows the output for a secret that is used by a DB instance.

**Example**  

```
aws rds describe-db-instances --db-instance-identifier mydbinstance
```
Following is sample output for a secret:  

```
"MasterUserSecret": {
    "SecretArn": "arn:aws:secretsmanager:eu-west-1:123456789012:secret:rds!db-033d7456-2c96-450d-9d48-f5de3025e51c-xmJRDx",
    "SecretStatus": "active",
    "KmsKeyId": "arn:aws:kms:eu-west-1:123456789012:key/0987dcba-09fe-87dc-65ba-ab0987654321"
}
```
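If you script against this output, you can pull the ARN out of the `MasterUserSecret` structure and confirm that the secret is usable before calling Secrets Manager. A minimal Python sketch, using values that mirror the sample output above:

```python
import json

# A describe-db-instances response, trimmed to the fields used here.
response = json.loads("""
{
  "MasterUserSecret": {
    "SecretArn": "arn:aws:secretsmanager:eu-west-1:123456789012:secret:rds!db-033d7456-2c96-450d-9d48-f5de3025e51c-xmJRDx",
    "SecretStatus": "active",
    "KmsKeyId": "arn:aws:kms:eu-west-1:123456789012:key/0987dcba-09fe-87dc-65ba-ab0987654321"
  }
}
""")

secret = response["MasterUserSecret"]
# Only an active secret is available for normal use and rotation.
if secret["SecretStatus"] == "active":
    print(secret["SecretArn"])
```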

When you have the secret ARN, you can view details about the secret using the [get-secret-value](https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/get-secret-value.html) Secrets Manager CLI command.

This example shows the details for the secret in the previous sample output.

**Example**  
For Linux, macOS, or Unix:  

```
aws secretsmanager get-secret-value \
    --secret-id 'arn:aws:secretsmanager:eu-west-1:123456789012:secret:rds!db-033d7456-2c96-450d-9d48-f5de3025e51c-xmJRDx'
```
For Windows:  

```
aws secretsmanager get-secret-value ^
    --secret-id 'arn:aws:secretsmanager:eu-west-1:123456789012:secret:rds!db-033d7456-2c96-450d-9d48-f5de3025e51c-xmJRDx'
```
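The `get-secret-value` response carries the credentials in its `SecretString` field as JSON; for secrets that RDS manages, this JSON contains `username` and `password` keys. A sketch of extracting them in Python, where the `SecretString` value is a hypothetical stand-in rather than real output:

```python
import json

# A stand-in for the SecretString field of a get-secret-value response.
secret_string = '{"username": "admin", "password": "examplepassword"}'

credentials = json.loads(secret_string)
username = credentials["username"]
password = credentials["password"]
# Pass these to your database driver; never log the password.
print(username)
```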

### RDS API
<a name="rds-secrets-manager-view-db-instance-api"></a>

You can view the ARN, status, and KMS key for a secret managed by RDS in Secrets Manager by using the [DescribeDBInstances](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBInstances.html) operation and setting the `DBInstanceIdentifier` parameter to a DB instance identifier. Details about the secret are included in the output.

When you have the secret ARN, you can view details about the secret using the [GetSecretValue](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html) Secrets Manager operation.

## Viewing the details about a secret for a Multi-AZ DB cluster
<a name="rds-secrets-manager-view-db-cluster"></a>

You can retrieve your secrets using the console ([https://console.aws.amazon.com/secretsmanager/](https://console.aws.amazon.com/secretsmanager/)) or the AWS CLI ([get-secret-value](https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/get-secret-value.html) Secrets Manager command).

You can find the Amazon Resource Name (ARN) of a secret managed by RDS in Secrets Manager with the RDS console, the AWS CLI, or the RDS API.

### Console
<a name="rds-secrets-manager-view-db-cluster-console"></a>

**To view the details about a secret managed by RDS in Secrets Manager**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the name of the Multi-AZ DB cluster to show its details.

1. Choose the **Configuration** tab.

   In **Master Credentials ARN**, you can view the secret ARN.  
![\[View the details about a secret managed by RDS in Secrets Manager\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/secrets-manager-integration-view-taz-cluster.png)

   You can follow the **Manage in Secrets Manager** link to view and manage the secret in the Secrets Manager console.

### AWS CLI
<a name="rds-secrets-manager-view-db-cluster-cli"></a>

You can use the RDS AWS CLI [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) command to find the following information about a secret managed by RDS in Secrets Manager:
+ `SecretArn` – The ARN of the secret
+ `SecretStatus` – The status of the secret

  The possible status values include the following:
  + `creating` – The secret is being created.
  + `active` – The secret is available for normal use and rotation.
  + `rotating` – The secret is being rotated.
  + `impaired` – The secret can be used to access database credentials, but it can't be rotated. A secret might have this status if, for example, permissions are changed so that RDS can no longer access the secret or the KMS key for the secret.

    When a secret has this status, you can correct the condition that caused it. If you do, the status remains `impaired` until the next rotation. Alternatively, you can modify the DB cluster to turn off automatic management of database credentials, and then modify the DB cluster again to turn on automatic management of database credentials. To modify the DB cluster, use the `--manage-master-user-password` option in the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) command.
+ `KmsKeyId` – The ARN of the KMS key that is used to encrypt the secret

Specify the `--db-cluster-identifier` option to show output for a specific DB cluster. This example shows the output for a secret that is used by a DB cluster.

**Example**  

```
aws rds describe-db-clusters --db-cluster-identifier mydbcluster
```
The following sample shows the output for a secret:  

```
"MasterUserSecret": {
    "SecretArn": "arn:aws:secretsmanager:eu-west-1:123456789012:secret:rds!cluster-033d7456-2c96-450d-9d48-f5de3025e51c-xmJRDx",
    "SecretStatus": "active",
    "KmsKeyId": "arn:aws:kms:eu-west-1:123456789012:key/0987dcba-09fe-87dc-65ba-ab0987654321"
}
```

When you have the secret ARN, you can view details about the secret using the [get-secret-value](https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/get-secret-value.html) Secrets Manager CLI command.

This example shows the details for the secret in the previous sample output.

**Example**  
For Linux, macOS, or Unix:  

```
aws secretsmanager get-secret-value \
    --secret-id 'arn:aws:secretsmanager:eu-west-1:123456789012:secret:rds!cluster-033d7456-2c96-450d-9d48-f5de3025e51c-xmJRDx'
```
For Windows:  

```
aws secretsmanager get-secret-value ^
    --secret-id 'arn:aws:secretsmanager:eu-west-1:123456789012:secret:rds!cluster-033d7456-2c96-450d-9d48-f5de3025e51c-xmJRDx'
```

### RDS API
<a name="rds-secrets-manager-view-db-cluster-api"></a>

You can view the ARN, status, and KMS key for a secret managed by RDS in Secrets Manager by using the [DescribeDBClusters](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusters.html) RDS operation and setting the `DBClusterIdentifier` parameter to a DB cluster identifier. Details about the secret are included in the output.

When you have the secret ARN, you can view details about the secret using the [GetSecretValue](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html) Secrets Manager operation.

## Viewing the details about a secret for a tenant database
<a name="rds-secrets-manager-view-tenant"></a>

You can retrieve your secrets using the console ([https://console.aws.amazon.com/secretsmanager/](https://console.aws.amazon.com/secretsmanager/)) or the AWS CLI ([get-secret-value](https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/get-secret-value.html) Secrets Manager command).

You can find the Amazon Resource Name (ARN) of a secret managed by Amazon RDS in AWS Secrets Manager with the Amazon RDS console, the AWS CLI, or the Amazon RDS API.

### Console
<a name="rds-secrets-manager-view-tenant-console"></a>

**To view the details about a secret managed by Amazon RDS in AWS Secrets Manager for a tenant database**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the name of the DB instance that contains the tenant database to show its details.

1. Choose the **Configuration** tab.

   In the **Tenant databases** section, find the tenant database and view its **Master Credentials ARN**.

   You can follow the **Manage in Secrets Manager** link to view and manage the secret in the Secrets Manager console.

### AWS CLI
<a name="rds-secrets-manager-view-tenant-cli"></a>

You can use the [describe-tenant-databases](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-tenant-databases.html) Amazon RDS AWS CLI command to find the following information about a secret managed by Amazon RDS in AWS Secrets Manager for a tenant database:
+ `SecretArn` – The ARN of the secret
+ `SecretStatus` – The status of the secret

  The possible status values include the following:
  + `creating` – The secret is being created.
  + `active` – The secret is available for normal use and rotation.
  + `rotating` – The secret is being rotated.
  + `impaired` – The secret can be used to access database credentials, but it can't be rotated. A secret might have this status if, for example, permissions are changed so that Amazon RDS can no longer access the secret or the KMS key for the secret.

    When a secret has this status, you can correct the condition that caused it. If you do, the status remains `impaired` until the next rotation. Alternatively, you can modify the tenant database to turn off automatic management of database credentials, and then modify the tenant database again to turn on automatic management of database credentials. To modify the tenant database, use the `--manage-master-user-password` option in the [modify-tenant-database](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-tenant-database.html) command.
+ `KmsKeyId` – The ARN of the KMS key that is used to encrypt the secret

Specify the `--db-instance-identifier` option to show output for tenant databases in a specific DB instance. You can also specify the `--tenant-db-name` option to show output for a specific tenant database. This example shows the output for a secret that is used by a tenant database.

**Example**  

```
aws rds describe-tenant-databases \
    --db-instance-identifier database-3 \
    --query "TenantDatabases[0].MasterUserSecret"
```
Following is sample output for a secret:  

```
{
    "SecretArn": "arn:aws:secretsmanager:us-east-2:123456789012:secret:rds!db-ABC123",
    "SecretStatus": "active",
    "KmsKeyId": "arn:aws:kms:us-east-2:123456789012:key/aa11bb22-####-####-####-fedcba123456"
}
```

When you have the secret ARN, you can view details about the secret using the [get-secret-value](https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/get-secret-value.html) Secrets Manager AWS CLI command.

This example shows the details for the secret in the previous sample output.

**Example**  
For Linux, macOS, or Unix:  

```
aws secretsmanager get-secret-value \
    --secret-id 'arn:aws:secretsmanager:us-east-2:123456789012:secret:rds!db-ABC123'
```
For Windows:  

```
aws secretsmanager get-secret-value ^
    --secret-id 'arn:aws:secretsmanager:us-east-2:123456789012:secret:rds!db-ABC123'
```

### Amazon RDS API
<a name="rds-secrets-manager-view-tenant-api"></a>

You can view the ARN, status, and KMS key for a secret managed by Amazon RDS in AWS Secrets Manager by using the [DescribeTenantDatabases](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeTenantDatabases.html) operation and setting the `DBInstanceIdentifier` parameter to a DB instance identifier. You can also set the `TenantDBName` parameter to a specific tenant database name. Details about the secret are included in the output.

When you have the secret ARN, you can view details about the secret using the [GetSecretValue](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html) Secrets Manager operation.

## Region and version availability
<a name="rds-secrets-manager-availability"></a>

Feature availability and support vary across specific versions of each database engine and across AWS Regions. For more information about version and Region availability of Secrets Manager integration with Amazon RDS, see [Supported Regions and DB engines for the Secrets Manager integration with Amazon RDS](Concepts.RDS_Fea_Regions_DB-eng.Feature.SecretsManager.md).

# Data protection in Amazon RDS
<a name="DataDurability"></a>

The AWS [shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/) applies to data protection in Amazon Relational Database Service. As described in this model, AWS is responsible for protecting the global infrastructure that runs all of the AWS Cloud. You are responsible for maintaining control over your content that is hosted on this infrastructure. You are also responsible for the security configuration and management tasks for the AWS services that you use. For more information about data privacy, see the [Data Privacy FAQ](https://aws.amazon.com/compliance/data-privacy-faq/). For information about data protection in Europe, see the [AWS Shared Responsibility Model and GDPR](https://aws.amazon.com/blogs/security/the-aws-shared-responsibility-model-and-gdpr/) blog post on the *AWS Security Blog*.

For data protection purposes, we recommend that you protect AWS account credentials and set up individual users with AWS IAM Identity Center or AWS Identity and Access Management (IAM). That way, each user is given only the permissions necessary to fulfill their job duties. We also recommend that you secure your data in the following ways:
+ Use multi-factor authentication (MFA) with each account.
+ Use SSL/TLS to communicate with AWS resources. We require TLS 1.2 and recommend TLS 1.3.
+ Set up API and user activity logging with AWS CloudTrail. For information about using CloudTrail trails to capture AWS activities, see [Working with CloudTrail trails](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-trails.html) in the *AWS CloudTrail User Guide*.
+ Use AWS encryption solutions, along with all default security controls within AWS services.
+ Use advanced managed security services such as Amazon Macie, which assists in discovering and securing sensitive data that is stored in Amazon S3.
+ If you require FIPS 140-3 validated cryptographic modules when accessing AWS through a command line interface or an API, use a FIPS endpoint. For more information about the available FIPS endpoints, see [Federal Information Processing Standard (FIPS) 140-3](https://aws.amazon.com/compliance/fips/).

We strongly recommend that you never put confidential or sensitive information, such as your customers' email addresses, into tags or free-form text fields such as a **Name** field. This includes when you work with Amazon RDS or other AWS services using the console, API, AWS CLI, or AWS SDKs. Any data that you enter into tags or free-form text fields used for names may be used for billing or diagnostic logs. If you provide a URL to an external server, we strongly recommend that you do not include credentials information in the URL to validate your request to that server.

**Topics**
+ [Protecting data using encryption](Encryption.md)
+ [Internetwork traffic privacy](inter-network-traffic-privacy.md)

# Protecting data using encryption
<a name="Encryption"></a>

You can enable encryption for database resources. You can also encrypt connections to DB instances.

**Topics**
+ [Encrypting Amazon RDS resources](Overview.Encryption.md)
+ [AWS KMS key management](Overview.Encryption.Keys.md)
+ [Using SSL/TLS to encrypt a connection to a DB instance or cluster](UsingWithRDS.SSL.md)
+ [Rotating your SSL/TLS certificate](UsingWithRDS.SSL-certificate-rotation.md)

# Encrypting Amazon RDS resources
<a name="Overview.Encryption"></a>

Amazon RDS can encrypt your Amazon RDS DB instances. Data that is encrypted at rest includes the underlying storage for a DB instance, its logs, automated backups, read replicas, and snapshots.

Amazon RDS encrypted DB instances use the industry standard AES-256 encryption algorithm to encrypt your data on the server that hosts your Amazon RDS DB instances.

After your data is encrypted, Amazon RDS handles authentication of access and decryption of your data transparently with a minimal impact on performance. You don't need to modify your database client applications to use encryption.

**Note**  
For encrypted and unencrypted DB instances, data that is in transit between the source and the read replicas is encrypted, even when replicating across AWS Regions.

**Topics**
+ [Overview of encrypting Amazon RDS resources](#Overview.Encryption.Overview)
+ [Encrypting a DB instance](#Overview.Encryption.Enabling)
+ [Determining whether encryption is turned on for a DB instance](#Overview.Encryption.Determining)
+ [Availability of Amazon RDS encryption](#Overview.Encryption.Availability)
+ [Encryption in transit](#Overview.Encryption.InTransit)
+ [Limitations of Amazon RDS encrypted DB instances](#Overview.Encryption.Limitations)

## Overview of encrypting Amazon RDS resources
<a name="Overview.Encryption.Overview"></a>

Amazon RDS encrypted DB instances provide an additional layer of data protection by securing your data from unauthorized access to the underlying storage. You can use Amazon RDS encryption to increase data protection of your applications deployed in the cloud, and to fulfill compliance requirements for encryption at rest. For an Amazon RDS encrypted DB instance, all logs, backups, and snapshots are encrypted. For more information about the availability and limitations of encryption, see [Availability of Amazon RDS encryption](#Overview.Encryption.Availability) and [Limitations of Amazon RDS encrypted DB instances](#Overview.Encryption.Limitations).

Amazon RDS uses an AWS Key Management Service key to encrypt these resources. AWS KMS combines secure, highly available hardware and software to provide a key management system scaled for the cloud. You can use an AWS managed key, or you can create customer managed keys. 

When you create an encrypted DB instance, you can choose a customer managed key or the AWS managed key for Amazon RDS to encrypt your DB instance. If you don't specify the key identifier for a customer managed key, Amazon RDS uses the AWS managed key for your new DB instance. Amazon RDS creates an AWS managed key for Amazon RDS for your AWS account. Your AWS account has a different AWS managed key for Amazon RDS for each AWS Region.

To manage the customer managed keys used for encrypting and decrypting your Amazon RDS resources, you use the [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/). 

Using AWS KMS, you can create customer managed keys and define the policies that control the use of these customer managed keys. AWS KMS supports CloudTrail, so you can audit KMS key usage to verify that customer managed keys are being used appropriately. You can use your customer managed keys with Amazon RDS and supported AWS services such as Amazon S3, Amazon EBS, and Amazon Redshift. For a list of services that are integrated with AWS KMS, see [AWS Service Integration](https://aws.amazon.com/kms/features/#AWS_Service_Integration). Here are some considerations for using KMS keys: 
+ Once you have created an encrypted DB instance, you can't change the KMS key used by that DB instance. Therefore, be sure to determine your KMS key requirements before you create your encrypted DB instance.

  If you must change the encryption key for your DB instance, create a manual snapshot of your instance and enable encryption while copying the snapshot. For more information, see the [AWS re:Post knowledge article](https://repost.aws/knowledge-center/update-encryption-key-rds).
+ If you copy an encrypted snapshot, you can use a different KMS key to encrypt the target snapshot than the one that was used to encrypt the source snapshot. 
+ A read replica of an Amazon RDS encrypted instance must be encrypted using the same KMS key as the primary DB instance when both are in the same AWS Region. 
+ If the primary DB instance and read replica are in different AWS Regions, you encrypt the read replica using the KMS key for that AWS Region.
+ You can't share a snapshot that has been encrypted using the AWS managed key of the AWS account that shared the snapshot.
+ Amazon RDS also supports encrypting an Oracle or SQL Server DB instance with Transparent Data Encryption (TDE). TDE can be used with RDS encryption at rest, although using TDE and RDS encryption at rest simultaneously might slightly affect the performance of your database. You must manage different keys for each encryption method. For more information on TDE, see [Oracle Transparent Data Encryption](Appendix.Oracle.Options.AdvSecurity.md) or [Support for Transparent Data Encryption in SQL Server](Appendix.SQLServer.Options.TDE.md).

**Important**  
Amazon RDS loses access to the KMS key for a DB instance when you disable the KMS key. If you lose access to a KMS key, the encrypted DB instance goes into the `inaccessible-encryption-credentials-recoverable` state 2 hours after detection, if backups are enabled. The DB instance remains in this state for seven days, during which the instance is stopped. API calls made to the DB instance during this time might not succeed. To recover the DB instance, enable the KMS key and restart the DB instance. Enable the KMS key from the AWS Management Console, AWS CLI, or RDS API. Restart the DB instance using the AWS CLI command [start-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/start-db-instance.html) or the AWS Management Console.  
The `inaccessible-encryption-credentials-recoverable` state only applies to DB instances that can stop. This recoverable state is not applicable to instances that can't stop, such as read replicas and instances with read replicas. For more information, see [Limitations of stopping your DB instance](USER_StopInstance.md#USER_StopInstance.Limitations).  
If the DB instance isn't recovered within seven days, it goes into the terminal `inaccessible-encryption-credentials` state. In this state, the DB instance is not usable anymore and you can only restore the DB instance from a backup. We strongly recommend that you always turn on backups for encrypted DB instances to guard against the loss of encrypted data in your databases.  
During the creation of a DB instance, Amazon RDS checks if the calling principal has access to the KMS key and generates a grant from the KMS key that it uses for the entire lifetime of the DB instance. Revoking the calling principal's access to the KMS key does not affect a running database. When using KMS keys in cross-account scenarios, such as copying a snapshot to another account, the KMS key needs to be shared with the other account. If you create a DB instance from the snapshot without specifying a different KMS key, the new instance uses the KMS key from the source account. Revoking access to the key after you create the DB instance does not affect the instance. However, disabling the key impacts all DB instances encrypted with that key. To prevent this, specify a different key during the snapshot copy operation.  
DB instances with disabled backups remain available until the volumes are detached from the host during an instance modification or a recovery. RDS moves the instances into `inaccessible-encryption-credentials-recoverable` state or `inaccessible-encryption-credentials` state as applicable.

For more information about KMS keys, see [AWS KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#kms_keys) in the *AWS Key Management Service Developer Guide* and [AWS KMS key management](Overview.Encryption.Keys.md). 

## Encrypting a DB instance
<a name="Overview.Encryption.Enabling"></a>

To encrypt a new DB instance, choose **Enable encryption** on the Amazon RDS console. For information on creating a DB instance, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md). 

If you use the [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) AWS CLI command to create an encrypted DB instance, set the `--storage-encrypted` parameter. If you use the [CreateDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html) API operation, set the `StorageEncrypted` parameter to true.



If you use the AWS CLI `create-db-instance` command to create an encrypted DB instance with a customer managed key, set the `--kms-key-id` parameter to any key identifier for the KMS key. If you use the Amazon RDS API `CreateDBInstance` operation, set the `KmsKeyId` parameter to any key identifier for the KMS key. To use a customer managed key in a different AWS account, specify the key ARN or alias ARN.

## Determining whether encryption is turned on for a DB instance
<a name="Overview.Encryption.Determining"></a>

You can use the AWS Management Console, AWS CLI, or RDS API to determine whether encryption at rest is turned on for a DB instance.

### Console
<a name="Overview.Encryption.Determining.CON"></a>

**To determine whether encryption at rest is turned on for a DB instance**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the name of the DB instance that you want to check to view its details.

1. Choose the **Configuration** tab, and check the **Encryption** value under **Storage**.

   It shows either **Enabled** or **Not enabled**.  
![\[Checking encryption at rest for a DB instance\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/encryption-check-db-instance.png)

### AWS CLI
<a name="Overview.Encryption.Determining.CLI"></a>

To determine whether encryption at rest is turned on for a DB instance by using the AWS CLI, call the [describe-db-instances](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html) command with the following option: 
+ `--db-instance-identifier` – The name of the DB instance.

The following example uses a query to return either `TRUE` or `FALSE` regarding encryption at rest for the `mydb` DB instance.

**Example**  

```
aws rds describe-db-instances --db-instance-identifier mydb --query "*[].{StorageEncrypted:StorageEncrypted}" --output text
```

### RDS API
<a name="Overview.Encryption.Determining.API"></a>

To determine whether encryption at rest is turned on for a DB instance by using the Amazon RDS API, call the [DescribeDBInstances](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBInstances.html) operation with the following parameter: 
+ `DBInstanceIdentifier` – The name of the DB instance.
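
Either interface returns a `StorageEncrypted` field per instance. The following is a minimal sketch of reading that field from a `DescribeDBInstances`-style JSON response; the response body is an abbreviated, hypothetical example, not real API output:

```python
import json

# Abbreviated, hypothetical describe-db-instances / DescribeDBInstances
# response containing only the fields this sketch needs.
response = json.loads("""
{
  "DBInstances": [
    {"DBInstanceIdentifier": "mydb", "StorageEncrypted": true}
  ]
}
""")

# Map each instance identifier to its encryption-at-rest status.
encrypted = {
    db["DBInstanceIdentifier"]: db["StorageEncrypted"]
    for db in response["DBInstances"]
}
print(encrypted["mydb"])  # True
```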

## Availability of Amazon RDS encryption
<a name="Overview.Encryption.Availability"></a>

Amazon RDS encryption is currently available for all database engines and storage types.

Amazon RDS encryption is available for most DB instance classes. The following table lists DB instance classes that *don't support* Amazon RDS encryption:


| Instance type | Instance class | 
| --- | --- | 
| General purpose (M1) | db.m1.small, db.m1.medium, db.m1.large, db.m1.xlarge | 
| Memory optimized (M2) | db.m2.xlarge, db.m2.2xlarge, db.m2.4xlarge | 
| Burstable (T2) | db.t2.micro | 

## Encryption in transit
<a name="Overview.Encryption.InTransit"></a>

**Encryption at the physical layer**  
All data flowing across AWS Regions over the AWS global network is automatically encrypted at the physical layer before it leaves AWS secured facilities. All traffic between Availability Zones is encrypted. Additional layers of encryption, including those listed in this section, can provide further protection.

**Encryption provided by Amazon VPC peering and Transit Gateway cross-Region peering**  
All cross-Region traffic that uses Amazon VPC and Transit Gateway peering is automatically bulk-encrypted when it exits a Region. An additional layer of encryption is automatically provided at the physical layer for all traffic before it leaves AWS secured facilities.

**Encryption between instances**  
AWS provides secure and private connectivity between DB instances of all types. In addition, some instance types use the offload capabilities of the underlying Nitro System hardware to automatically encrypt in-transit traffic between instances. This encryption uses Authenticated Encryption with Associated Data (AEAD) algorithms, with 256-bit encryption. There is no impact on network performance. To support this additional in-transit traffic encryption between instances, the following requirements must be met:  
+ The instances use the following instance types:
  + **General purpose**: M6i, M6id, M6in, M6idn, M7g
  + **Memory optimized**: R6i, R6id, R6in, R6idn, R7g, X2idn, X2iedn, X2iezn
+ The instances are in the same AWS Region.
+ The instances are in the same VPC or peered VPCs, and the traffic does not pass through a virtual network device or service, such as a load balancer or a transit gateway.

## Limitations of Amazon RDS encrypted DB instances
<a name="Overview.Encryption.Limitations"></a>

The following limitations exist for Amazon RDS encrypted DB instances:
+ You can only encrypt an Amazon RDS DB instance when you create it, not after the DB instance is created.

  However, because you can encrypt a copy of an unencrypted snapshot, you can effectively add encryption to an unencrypted DB instance. That is, you can create a snapshot of your DB instance, and then create an encrypted copy of that snapshot. You can then restore a DB instance from the encrypted snapshot, and thus you have an encrypted copy of your original DB instance. For more information, see [Copying a DB snapshot for Amazon RDS](USER_CopySnapshot.md).
+ You can't turn off encryption on an encrypted DB instance.
+ You can't create an encrypted snapshot of an unencrypted DB instance.
+ A snapshot of an encrypted DB instance must be encrypted using the same KMS key as the DB instance.
+ You can't have an encrypted read replica of an unencrypted DB instance or an unencrypted read replica of an encrypted DB instance.
+ Encrypted read replicas must be encrypted with the same KMS key as the source DB instance when both are in the same AWS Region.
+ You can't restore an unencrypted backup or snapshot to an encrypted DB instance.
+ To copy an encrypted snapshot from one AWS Region to another, you must specify the KMS key in the destination AWS Region. This is because KMS keys are specific to the AWS Region that they are created in.

  The source snapshot remains encrypted throughout the copy process. Amazon RDS uses envelope encryption to protect data during the copy process. For more information about envelope encryption, see [ Envelope encryption](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#enveloping) in the *AWS Key Management Service Developer Guide*.
+ You can't unencrypt an encrypted DB instance. However, you can export data from an encrypted DB instance and import the data into an unencrypted DB instance.

# AWS KMS key management
<a name="Overview.Encryption.Keys"></a>

 Amazon RDS automatically integrates with [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/) for key management. Amazon RDS uses envelope encryption. For more information about envelope encryption, see [ Envelope encryption](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#enveloping) in the *AWS Key Management Service Developer Guide*. 
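
In envelope encryption, the data is encrypted with a data key, and the data key itself is encrypted (wrapped) under the KMS key. The following toy sketch shows only that structure; the XOR "cipher" is a stand-in for illustration, not real cryptography, and no real KMS calls are made:

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Stand-in cipher for illustration only; not real encryption.
    return bytes(b ^ k for b, k in zip(data, key))

# The KMS key never leaves the key service in plaintext form.
kms_key = os.urandom(32)

# 1. Generate a data key and encrypt the data with it.
plaintext = b"db page contents".ljust(32)
data_key = os.urandom(32)
ciphertext = xor(plaintext, data_key)

# 2. Wrap the data key under the KMS key, store the wrapped key
#    alongside the ciphertext, and discard the plaintext data key.
wrapped_key = xor(data_key, kms_key)

# 3. To decrypt: unwrap the data key with the KMS key, then decrypt.
recovered = xor(ciphertext, xor(wrapped_key, kms_key))
print(recovered == plaintext)  # True
```

This is why disabling a KMS key makes every resource encrypted under it inaccessible: without the KMS key, the wrapped data keys cannot be unwrapped.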

You can use two types of AWS KMS keys to encrypt your DB instances.
+ If you want full control over a KMS key, you must create a *customer managed key*. For more information about customer managed keys, see [Customer managed keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) in the *AWS Key Management Service Developer Guide*. 
+  *AWS managed keys* are KMS keys in your account that are created, managed, and used on your behalf by an AWS service that is integrated with AWS KMS. By default, the RDS AWS managed key ( `aws/rds`) is used for encryption. You can't manage, rotate, or delete the RDS AWS managed key. For more information about AWS managed keys, see [AWS managed keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) in the *AWS Key Management Service Developer Guide*. 

To manage KMS keys used for Amazon RDS encrypted DB instances, use [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/) in the [AWS KMS console](https://console.aws.amazon.com/kms), the AWS CLI, or the AWS KMS API. To view audit logs of every action taken with an AWS managed or customer managed key, use [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/). For more information about key rotation, see [Rotating AWS KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html). 

## Authorizing use of a customer managed key
<a name="Overview.Encryption.Keys.Authorizing"></a>

When RDS uses a customer managed key in cryptographic operations, it acts on behalf of the user who is creating or changing the RDS resource.

To create an RDS resource using a customer managed key, a user must have permissions to call the following operations on the customer managed key:
+  `kms:CreateGrant` 
+  `kms:DescribeKey` 

You can specify these required permissions in a key policy, or in an IAM policy if the key policy allows it.

**Important**  
When you use explicit deny statements for all resources (`*`) in AWS KMS key policies with managed services like Amazon RDS, you must specify a condition to allow the account that owns the resource. Without this condition, operations might fail even if the deny rule includes exceptions for your IAM user.

**Tip**  
To follow the principle of least privilege, do not allow full access to `kms:CreateGrant`. Instead, use the [kms:ViaService condition key](https://docs.aws.amazon.com/kms/latest/developerguide/policy-conditions.html#conditions-kms-via-service) to allow the user to create grants on the KMS key only when the grant is created on the user's behalf by an AWS service.

You can make the IAM policy stricter in various ways. For example, if you want to allow the customer managed key to be used only for requests that originate in RDS, use the [kms:ViaService condition key](https://docs.aws.amazon.com/kms/latest/developerguide/policy-conditions.html#conditions-kms-via-service) with the `rds.<region>.amazonaws.com` value. You can also use the keys or values in the [Amazon RDS encryption context](#Overview.Encryption.Keys.encryptioncontext) as a condition for using the customer managed key for encryption.
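
A key policy statement shaped like the following illustrates the `kms:ViaService` restriction. The account ID, user name, and Region are placeholders, and this is a sketch rather than a complete policy; consult the AWS KMS key policy documentation for the exact statements your setup needs:

```json
{
  "Sid": "AllowKeyUseOnlyThroughRDS",
  "Effect": "Allow",
  "Principal": {"AWS": "arn:aws:iam::111122223333:user/db-admin"},
  "Action": ["kms:CreateGrant", "kms:DescribeKey"],
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "kms:ViaService": "rds.us-east-1.amazonaws.com"
    }
  }
}
```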

For more information, see [Allowing users in other accounts to use a KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html) in the *AWS Key Management Service Developer Guide* and [Key policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies). 

## Amazon RDS encryption context
<a name="Overview.Encryption.Keys.encryptioncontext"></a>

When RDS uses your KMS key, or when Amazon EBS uses the KMS key on behalf of RDS , the service specifies an [encryption context](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context). The encryption context is [additional authenticated data](https://docs.aws.amazon.com/crypto/latest/userguide/cryptography-concepts.html#term-aad) (AAD) that AWS KMS uses to ensure data integrity. When an encryption context is specified for an encryption operation, the service must specify the same encryption context for the decryption operation. Otherwise, decryption fails. The encryption context is also written to your [AWS CloudTrail](https://aws.amazon.com/cloudtrail/) logs to help you understand why a given KMS key was used. Your CloudTrail logs might contain many entries describing the use of a KMS key, but the encryption context in each log entry can help you determine the reason for that particular use.

At minimum, Amazon RDS always uses the DB instance ID for the encryption context, as in the following JSON-formatted example:

```
{ "aws:rds:db-id": "db-CQYSMDPBRZ7BPMH7Y3RTDG5QY" }
```

This encryption context can help you identify the DB instance for which your KMS key was used.

When your KMS key is used for a specific DB instance and a specific Amazon EBS volume, both the DB instance ID and the Amazon EBS volume ID are used for the encryption context, as in the following JSON-formatted example:

```
{
  "aws:rds:db-id": "db-BRG7VYS3SVIFQW7234EJQOM5RQ",
  "aws:ebs:id": "vol-ad8c6542"
}
```

# Using SSL/TLS to encrypt a connection to a DB instance or cluster
<a name="UsingWithRDS.SSL"></a>

You can use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) from your application to encrypt a connection to a database running Db2, MariaDB, Microsoft SQL Server, MySQL, Oracle, or PostgreSQL.

SSL/TLS connections provide a layer of security by encrypting data that moves between your client and your DB instance or cluster. Optionally, your SSL/TLS connection can perform server identity verification by validating the server certificate installed on your database. To require server identity verification, follow this general process:

1. Choose the **certificate authority (CA)** that signs the **DB server certificate** for your database. For more information about certificate authorities, see [Certificate authorities](#UsingWithRDS.SSL.RegionCertificateAuthorities). 

1. Download a certificate bundle to use when you are connecting to the database. To download a certificate bundle, see [Certificate bundles by AWS Region](#UsingWithRDS.SSL.CertificatesAllRegions). 
**Note**  
All certificates are only available for download using SSL/TLS connections.

1. Connect to the database using your DB engine's process for implementing SSL/TLS connections. Each DB engine has its own process for implementing SSL/TLS. To learn how to implement SSL/TLS for your database, follow the link that corresponds to your DB engine:
   +  [Using SSL/TLS with an Amazon RDS for Db2 DB instance](Db2.Concepts.SSL.md) 
   +  [SSL/TLS support for MariaDB DB instances on Amazon RDS](MariaDB.Concepts.SSLSupport.md) 
   +  [Using SSL with a Microsoft SQL Server DB instance](SQLServer.Concepts.General.SSL.Using.md) 
   +  [SSL/TLS support for MySQL DB instances on Amazon RDS](MySQL.Concepts.SSLSupport.md) 
   +  [Using SSL with an RDS for Oracle DB instance](Oracle.Concepts.SSL.md) 
   +  [Using SSL with a PostgreSQL DB instance](PostgreSQL.Concepts.General.SSL.md) 
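
The details of step 3 vary by engine, but the client-side trust setup is similar everywhere: load the downloaded certificate bundle into your TLS context and keep hostname checking on. The following is a minimal, engine-agnostic sketch using Python's standard `ssl` module; the bundle file name is a placeholder, and the file-existence guard is only so the sketch runs without the download:

```python
import os
import ssl

bundle_path = "global-bundle.pem"  # placeholder path to the downloaded bundle

# Require the server certificate to validate against the RDS CA bundle
# and to match the endpoint hostname (server identity verification).
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED
if os.path.exists(bundle_path):
    context.load_verify_locations(cafile=bundle_path)

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
```

Your database driver then takes this context (or equivalent `ssl-ca`/`sslrootcert`-style options) when opening the connection.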

## Certificate authorities
<a name="UsingWithRDS.SSL.RegionCertificateAuthorities"></a>

The **certificate authority (CA)** is the certificate that identifies the root CA at the top of the certificate chain. The CA signs the **DB server certificate**, which is installed on each DB instance. The DB server certificate identifies the DB instance as a trusted server.

![\[Certificate authority overview\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/certificate-authority-overview.png)


Amazon RDS provides the following CAs to sign the DB server certificate for a database.


****  

| Certificate authority (CA) | Description | Common name (CN) | 
| --- | --- | --- | 
|  rds-ca-rsa2048-g1  |  Uses a certificate authority with RSA 2048 private key algorithm and SHA256 signing algorithm in most AWS Regions. In the AWS GovCloud (US) Regions, this CA uses a certificate authority with RSA 2048 private key algorithm and SHA384 signing algorithm. This CA supports automatic server certificate rotation.  | Amazon RDS region-identifier Root CA RSA2048 G1 | 
|  rds-ca-rsa4096-g1  |  Uses a certificate authority with RSA 4096 private key algorithm and SHA384 signing algorithm. This CA supports automatic server certificate rotation.   | Amazon RDS region-identifier Root CA RSA4096 G1 | 
|  rds-ca-ecc384-g1  |  Uses a certificate authority with ECC 384 private key algorithm and SHA384 signing algorithm. This CA supports automatic server certificate rotation.   | Amazon RDS region-identifier Root CA ECC384 G1 | 

**Note**  
If you are using the AWS CLI, you can see the validities of the certificate authorities listed above by using [describe-certificates](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-certificates.html). 

These CA certificates are included in the regional and global certificate bundle. When you use the rds-ca-rsa2048-g1, rds-ca-rsa4096-g1, or rds-ca-ecc384-g1 CA with a database, RDS manages the DB server certificate on the database. RDS rotates the DB server certificate automatically before it expires. 

### Setting the CA for your database
<a name="UsingWithRDS.SSL.RegionCertificateAuthorities.Selection"></a>

You can set the CA for a database when you perform the following tasks:
+ Create a DB instance or Multi-AZ DB cluster – You can set the CA when you create a DB instance or cluster. For instructions, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md) or [Creating a Multi-AZ DB cluster for Amazon RDS](create-multi-az-db-cluster.md).
+ Modify a DB instance or Multi-AZ DB cluster – You can set the CA for a DB instance or cluster by modifying it. For instructions, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md) or [Modifying a Multi-AZ DB cluster for Amazon RDS](modify-multi-az-db-cluster.md).

**Note**  
The default CA is rds-ca-rsa2048-g1. You can override the default CA for your AWS account by using the [modify-certificates](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-certificates.html) command.

The available CAs depend on the DB engine and DB engine version. When you use the AWS Management Console, you can choose the CA using the **Certificate authority** setting, as shown in the following image.

![\[Certificate authority option\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/certificate-authority.png)


The console only shows the CAs that are available for the DB engine and DB engine version. If you're using the AWS CLI, you can set the CA for a DB instance using the [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) or [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) command. You can set the CA for a Multi-AZ DB cluster using the [create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html) or [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) command. 

If you're using the AWS CLI, you can see the available CAs for your account by using the [describe-certificates](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-certificates.html) command. This command also shows the expiration date for each CA in `ValidTill` in the output. You can find the CAs that are available for a specific DB engine and DB engine version using the [describe-db-engine-versions](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-engine-versions.html) command.

The following example shows the CAs available for the default RDS for PostgreSQL DB engine version.

```
aws rds describe-db-engine-versions --default-only --engine postgres
```

Your output is similar to the following. The available CAs are listed in `SupportedCACertificateIdentifiers`. The output also shows whether the DB engine version supports rotating the certificate without restart in `SupportsCertificateRotationWithoutRestart`. 

```
{
    "DBEngineVersions": [
        {
            "Engine": "postgres",
            "MajorEngineVersion": "13",
            "EngineVersion": "13.4",
            "DBParameterGroupFamily": "postgres13",
            "DBEngineDescription": "PostgreSQL",
            "DBEngineVersionDescription": "PostgreSQL 13.4-R1",
            "ValidUpgradeTarget": [],
            "SupportsLogExportsToCloudwatchLogs": false,
            "SupportsReadReplica": true,
            "SupportedFeatureNames": [
                "Lambda"
            ],
            "Status": "available",
            "SupportsParallelQuery": false,
            "SupportsGlobalDatabases": false,
            "SupportsBabelfish": false,
            "SupportsCertificateRotationWithoutRestart": true,
            "SupportedCACertificateIdentifiers": [
                "rds-ca-rsa2048-g1",
                "rds-ca-ecc384-g1",
                "rds-ca-rsa4096-g1"
            ]
        }
    ]
}
```

### DB server certificate validities
<a name="UsingWithRDS.SSL.RegionCertificateAuthorities.DBServerCert"></a>

The validity of the DB server certificate depends on the DB engine and DB engine version. If the DB engine version supports rotating the certificate without restart, the validity of the DB server certificate is 1 year. Otherwise, the validity is 3 years.

For more information about DB server certificate rotation, see [Automatic server certificate rotation](UsingWithRDS.SSL-certificate-rotation.md#UsingWithRDS.SSL-certificate-rotation-server-cert-rotation) . 

### Viewing the CA for your DB instance
<a name="UsingWithRDS.SSL.RegionCertificateAuthorities.Viewing"></a>

You can view the details about the CA for a database by viewing the **Connectivity & security** tab in the console, as in the following image.

![\[Certificate authority details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/certificate-authority-details.png)


If you're using the AWS CLI, you can view the details about the CA for a DB instance by using the [describe-db-instances](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html) command. You can view the details about the CA for a Multi-AZ DB cluster by using the [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) command. 

## Download certificate bundles for Amazon RDS
<a name="UsingWithRDS.SSL.CertificatesDownload"></a>

When you connect to your database with SSL or TLS, your client requires a trusted certificate from Amazon RDS. Select the appropriate link in the following table to download the bundle that corresponds to the AWS Region where you host your database.

### Certificate bundles by AWS Region
<a name="UsingWithRDS.SSL.CertificatesAllRegions"></a>

The certificate bundles for all AWS Regions and GovCloud (US) Regions contain the following root CA certificates:
+  `rds-ca-rsa2048-g1` 
+  `rds-ca-rsa4096-g1` 
+  `rds-ca-ecc384-g1` 

The `rds-ca-rsa4096-g1` and `rds-ca-ecc384-g1` certificates are not available in the following Regions:
+ Asia Pacific (Mumbai)
+ Asia Pacific (Melbourne)
+ Canada West (Calgary)
+ Europe (Zurich)
+ Europe (Spain)
+ Israel (Tel Aviv)

Your application trust store only needs to register the root CA certificate. Do not register intermediate CA certificates in your trust store, because doing so might cause connection issues when RDS automatically rotates your DB server certificate.

**Note**  
Amazon RDS Proxy uses certificates from the AWS Certificate Manager (ACM). If you're using RDS Proxy, you don't need to download Amazon RDS certificates or update applications that use RDS Proxy connections. For more information, see [Using TLS/SSL with RDS Proxy](rds-proxy.howitworks.md#rds-proxy-security.tls) .

To download a certificate bundle for an AWS Region, select the link for the AWS Region that hosts your database in the following table.


|  **AWS Region**  |  **Certificate bundle (PEM)**  |  **Certificate bundle (PKCS7)**  | 
| --- | --- | --- | 
| Any commercial AWS Region |  [global-bundle.pem](https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem)  |  [global-bundle.p7b](https://truststore.pki.rds.amazonaws.com/global/global-bundle.p7b)  | 
| US East (N. Virginia) |  [us-east-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/us-east-1/us-east-1-bundle.pem)  |  [us-east-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/us-east-1/us-east-1-bundle.p7b)  | 
| US East (Ohio) |  [us-east-2-bundle.pem](https://truststore.pki.rds.amazonaws.com/us-east-2/us-east-2-bundle.pem)  |  [us-east-2-bundle.p7b](https://truststore.pki.rds.amazonaws.com/us-east-2/us-east-2-bundle.p7b)  | 
| US West (N. California) |  [us-west-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/us-west-1/us-west-1-bundle.pem)  |  [us-west-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/us-west-1/us-west-1-bundle.p7b)  | 
| US West (Oregon) |  [us-west-2-bundle.pem](https://truststore.pki.rds.amazonaws.com/us-west-2/us-west-2-bundle.pem)  |  [us-west-2-bundle.p7b](https://truststore.pki.rds.amazonaws.com/us-west-2/us-west-2-bundle.p7b)  | 
| Africa (Cape Town) |  [af-south-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/af-south-1/af-south-1-bundle.pem)  |  [af-south-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/af-south-1/af-south-1-bundle.p7b)  | 
| Asia Pacific (Hong Kong) |  [ap-east-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-east-1/ap-east-1-bundle.pem)  |  [ap-east-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-east-1/ap-east-1-bundle.p7b)  | 
| Asia Pacific (Hyderabad) |  [ap-south-2-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-south-2/ap-south-2-bundle.pem)  |  [ap-south-2-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-south-2/ap-south-2-bundle.p7b)  | 
| Asia Pacific (Jakarta) |  [ap-southeast-3-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-southeast-3/ap-southeast-3-bundle.pem)  |  [ap-southeast-3-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-southeast-3/ap-southeast-3-bundle.p7b)  | 
| Asia Pacific (Malaysia) |  [ap-southeast-5-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-southeast-5/ap-southeast-5-bundle.pem)  |  [ap-southeast-5-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-southeast-5/ap-southeast-5-bundle.p7b)  | 
| Asia Pacific (Melbourne) |  [ap-southeast-4-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-southeast-4/ap-southeast-4-bundle.pem)  |  [ap-southeast-4-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-southeast-4/ap-southeast-4-bundle.p7b)  | 
| Asia Pacific (Mumbai) |  [ap-south-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-south-1/ap-south-1-bundle.pem)  |  [ap-south-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-south-1/ap-south-1-bundle.p7b)  | 
| Asia Pacific (Osaka) |  [ap-northeast-3-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-northeast-3/ap-northeast-3-bundle.pem)  |  [ap-northeast-3-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-northeast-3/ap-northeast-3-bundle.p7b)  | 
| Asia Pacific (Thailand) |  [ap-southeast-7-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-southeast-7/ap-southeast-7-bundle.pem)  |  [ap-southeast-7-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-southeast-7/ap-southeast-7-bundle.p7b)  | 
| Asia Pacific (Tokyo) |  [ap-northeast-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-northeast-1/ap-northeast-1-bundle.pem)  |  [ap-northeast-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-northeast-1/ap-northeast-1-bundle.p7b)  | 
| Asia Pacific (Seoul) |  [ap-northeast-2-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-northeast-2/ap-northeast-2-bundle.pem)  |  [ap-northeast-2-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-northeast-2/ap-northeast-2-bundle.p7b)  | 
| Asia Pacific (Singapore) |  [ap-southeast-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-southeast-1/ap-southeast-1-bundle.pem)  |  [ap-southeast-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-southeast-1/ap-southeast-1-bundle.p7b)  | 
| Asia Pacific (Sydney) |  [ap-southeast-2-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-southeast-2/ap-southeast-2-bundle.pem)  |  [ap-southeast-2-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-southeast-2/ap-southeast-2-bundle.p7b)  | 
| Canada (Central) |  [ca-central-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/ca-central-1/ca-central-1-bundle.pem)  |  [ca-central-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ca-central-1/ca-central-1-bundle.p7b)  | 
| Canada West (Calgary) |  [ca-west-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/ca-west-1/ca-west-1-bundle.pem)  |  [ca-west-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ca-west-1/ca-west-1-bundle.p7b)  | 
| Europe (Frankfurt) |  [eu-central-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/eu-central-1/eu-central-1-bundle.pem)  |  [eu-central-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/eu-central-1/eu-central-1-bundle.p7b)  | 
| Europe (Ireland) |  [eu-west-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/eu-west-1/eu-west-1-bundle.pem)  |  [eu-west-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/eu-west-1/eu-west-1-bundle.p7b)  | 
| Europe (London) |  [eu-west-2-bundle.pem](https://truststore.pki.rds.amazonaws.com/eu-west-2/eu-west-2-bundle.pem)  |  [eu-west-2-bundle.p7b](https://truststore.pki.rds.amazonaws.com/eu-west-2/eu-west-2-bundle.p7b)  | 
| Europe (Milan) |  [eu-south-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/eu-south-1/eu-south-1-bundle.pem)  |  [eu-south-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/eu-south-1/eu-south-1-bundle.p7b)  | 
| Europe (Paris) |  [eu-west-3-bundle.pem](https://truststore.pki.rds.amazonaws.com/eu-west-3/eu-west-3-bundle.pem)  |  [eu-west-3-bundle.p7b](https://truststore.pki.rds.amazonaws.com/eu-west-3/eu-west-3-bundle.p7b)  | 
| Europe (Spain) |  [eu-south-2-bundle.pem](https://truststore.pki.rds.amazonaws.com/eu-south-2/eu-south-2-bundle.pem)  |  [eu-south-2-bundle.p7b](https://truststore.pki.rds.amazonaws.com/eu-south-2/eu-south-2-bundle.p7b)  | 
| Europe (Stockholm) |  [eu-north-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/eu-north-1/eu-north-1-bundle.pem)  |  [eu-north-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/eu-north-1/eu-north-1-bundle.p7b)  | 
| Europe (Zurich) |  [eu-central-2-bundle.pem](https://truststore.pki.rds.amazonaws.com/eu-central-2/eu-central-2-bundle.pem)  |  [eu-central-2-bundle.p7b](https://truststore.pki.rds.amazonaws.com/eu-central-2/eu-central-2-bundle.p7b)  | 
| Israel (Tel Aviv) |  [il-central-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/il-central-1/il-central-1-bundle.pem)  |  [il-central-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/il-central-1/il-central-1-bundle.p7b)  | 
| Mexico (Central) |  [mx-central-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/mx-central-1/mx-central-1-bundle.pem)  |  [mx-central-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/mx-central-1/mx-central-1-bundle.p7b)  | 
| Middle East (Bahrain) |  [me-south-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/me-south-1/me-south-1-bundle.pem)  |  [me-south-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/me-south-1/me-south-1-bundle.p7b)  | 
| Middle East (UAE) |  [me-central-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/me-central-1/me-central-1-bundle.pem)  |  [me-central-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/me-central-1/me-central-1-bundle.p7b)  | 
| South America (São Paulo) |  [sa-east-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/sa-east-1/sa-east-1-bundle.pem)  |  [sa-east-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/sa-east-1/sa-east-1-bundle.p7b)  | 
| Any AWS GovCloud (US) Regions |  [global-bundle.pem](https://truststore.pki.us-gov-west-1.rds.amazonaws.com/global/global-bundle.pem)  |  [global-bundle.p7b](https://truststore.pki.us-gov-west-1.rds.amazonaws.com/global/global-bundle.p7b)  | 
| AWS GovCloud (US-East) |  [us-gov-east-1-bundle.pem](https://truststore.pki.us-gov-west-1.rds.amazonaws.com/us-gov-east-1/us-gov-east-1-bundle.pem)  |  [us-gov-east-1-bundle.p7b](https://truststore.pki.us-gov-west-1.rds.amazonaws.com/us-gov-east-1/us-gov-east-1-bundle.p7b)  | 
| AWS GovCloud (US-West) |  [us-gov-west-1-bundle.pem](https://truststore.pki.us-gov-west-1.rds.amazonaws.com/us-gov-west-1/us-gov-west-1-bundle.pem)  |  [us-gov-west-1-bundle.p7b](https://truststore.pki.us-gov-west-1.rds.amazonaws.com/us-gov-west-1/us-gov-west-1-bundle.p7b)  | 
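After downloading a bundle, you can do a quick sanity check on its contents. The following commands are a sketch (assuming the global bundle; substitute the file name for a Regional bundle) that downloads the bundle and counts the certificates it contains:

```
# Download the global certificate bundle
curl -sS "https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem" -o global-bundle.pem

# Count the certificates in the bundle
grep -c "BEGIN CERTIFICATE" global-bundle.pem
```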

### Viewing the contents of your CA certificate
<a name="UsingWithRDS.SSL.CertificatesDownload.viewing"></a>

To check the contents of your CA certificate bundle, use the following command: 

```
keytool -printcert -v -file global-bundle.pem
```
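If you don't have the JDK installed, you can get a similar view with `openssl`. The following pipeline is a sketch using standard OpenSSL subcommands; it prints the subject and issuer of every certificate in the bundle:

```
# Wrap the bundle in a PKCS7 structure, then print each certificate's subject and issuer
openssl crl2pkcs7 -nocrl -certfile global-bundle.pem | \
    openssl pkcs7 -print_certs -noout
```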

# Rotating your SSL/TLS certificate
<a name="UsingWithRDS.SSL-certificate-rotation"></a>

The Amazon RDS Certificate Authority certificate rds-ca-2019 expired in August 2024. If you use or plan to use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) with certificate verification to connect to your RDS DB instances or Multi-AZ DB clusters, consider using one of the new CA certificates: rds-ca-rsa2048-g1, rds-ca-rsa4096-g1, or rds-ca-ecc384-g1. Even if you don't currently use SSL/TLS with certificate verification, your database might still have an expired CA certificate, and you must update it to a new CA certificate if you plan to use SSL/TLS with certificate verification to connect to your RDS databases.

Amazon RDS provides new CA certificates as an AWS security best practice. For information about the new certificates and the supported AWS Regions, see [Using SSL/TLS to encrypt a connection to a DB instance or cluster ](UsingWithRDS.SSL.md) .

To update the CA certificate for your database, use the following methods: 
+  [Updating your CA certificate by modifying your DB instance or cluster](#UsingWithRDS.SSL-certificate-rotation-updating) 
+  [Updating your CA certificate by applying maintenance](#UsingWithRDS.SSL-certificate-rotation-maintenance-update) 

Before you update your DB instances or Multi-AZ DB clusters to use the new CA certificate, make sure that you update your clients or applications connecting to your RDS databases.

## Considerations for rotating certificates
<a name="UsingWithRDS.SSL-certificate-rotation-considerations"></a>

Consider the following situations before rotating your certificate:
+ Amazon RDS Proxy uses certificates from the AWS Certificate Manager (ACM). If you're using RDS Proxy, when you rotate your SSL/TLS certificate, you don't need to update applications that use RDS Proxy connections. For more information, see [Using TLS/SSL with RDS Proxy](rds-proxy.howitworks.md#rds-proxy-security.tls) .
+ If you're using a Go version 1.15 application with a DB instance or Multi-AZ DB cluster that was created or updated to the rds-ca-2019 certificate prior to July 28, 2020, you must update the certificate again. Update the certificate to rds-ca-rsa2048-g1, rds-ca-rsa4096-g1, or rds-ca-ecc384-g1, depending on your engine.

  Use the `modify-db-instance` command for a DB instance, or the `modify-db-cluster` command for a Multi-AZ DB cluster, specifying the new CA certificate identifier. You can find the CAs that are available for a specific DB engine and DB engine version using the `describe-db-engine-versions` command.

  If you created your database or updated its certificate after July 28, 2020, no action is required. For more information, see [Go GitHub issue #39568](https://github.com/golang/go/issues/39568).
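For example, the following AWS CLI command is a sketch of how to list the CA certificate identifiers available for a DB engine version (the engine name and version shown are placeholders; substitute your own):

```
aws rds describe-db-engine-versions \
    --engine mysql \
    --engine-version 8.0.36 \
    --query 'DBEngineVersions[*].SupportedCACertificateIdentifiers'
```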

## Updating your CA certificate by modifying your DB instance or cluster
<a name="UsingWithRDS.SSL-certificate-rotation-updating"></a>

The following example updates your CA certificate from *rds-ca-2019* to *rds-ca-rsa2048-g1*. You can choose a different certificate. For more information, see [Certificate authorities](UsingWithRDS.SSL.md#UsingWithRDS.SSL.RegionCertificateAuthorities).

Update your application trust store to reduce any downtime associated with updating your CA certificate. For more information about restarts associated with CA certificate rotation, see [Automatic server certificate rotation](#UsingWithRDS.SSL-certificate-rotation-server-cert-rotation).

**To update your CA certificate by modifying your DB instance or cluster**

1. Download the new SSL/TLS certificate as described in [Using SSL/TLS to encrypt a connection to a DB instance or cluster ](UsingWithRDS.SSL.md) .

1. Update your applications to use the new SSL/TLS certificate.

   The methods for updating applications for new SSL/TLS certificates depend on your specific applications. Work with your application developers to update the SSL/TLS certificates for your applications.

   For information about checking for SSL/TLS connections and updating applications for each DB engine, see the following topics:
   +  [Updating applications to connect to MariaDB instances using new SSL/TLS certificates](ssl-certificate-rotation-mariadb.md) 
   +  [Updating applications to connect to Microsoft SQL Server DB instances using new SSL/TLS certificates](ssl-certificate-rotation-sqlserver.md) 
   +  [Updating applications to connect to MySQL DB instances using new SSL/TLS certificates](ssl-certificate-rotation-mysql.md) 
   +  [Updating applications to connect to Oracle DB instances using new SSL/TLS certificates](ssl-certificate-rotation-oracle.md) 
   +  [Updating applications to connect to PostgreSQL DB instances using new SSL/TLS certificates](ssl-certificate-rotation-postgresql.md) 

   For a sample script that updates a trust store for a Linux operating system, see [Sample script for importing certificates into your trust store](#UsingWithRDS.SSL-certificate-rotation-sample-script) .
**Note**  
The certificate bundle contains certificates for both the old and new CA, so you can upgrade your application safely and maintain connectivity during the transition period. If you are using the AWS Database Migration Service to migrate a database to a DB instance or cluster , we recommend using the certificate bundle to ensure connectivity during the migration.

1. Modify the DB instance or Multi-AZ DB cluster to change the CA from **rds-ca-2019** to **rds-ca-rsa2048-g1**. To check if your database requires a restart to update the CA certificates, use the [describe-db-engine-versions](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-engine-versions.html) command and check the `SupportsCertificateRotationWithoutRestart` flag. 
**Important**  
If you are experiencing connectivity issues after certificate expiry, apply the change immediately by choosing **Apply immediately** in the console or by specifying the `--apply-immediately` option with the AWS CLI. By default, this operation is scheduled to run during your next maintenance window.  
For RDS for Oracle DB instances, we recommend that you restart your DB instance to prevent connection errors.  
For RDS for SQL Server Multi-AZ instances with the Always On or Mirroring option enabled, expect a failover when the instance is rebooted after the certificate rotation.  
To set an override for your instance CA that's different from the default RDS CA, use the [modify-certificates](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-certificates.html) CLI command.

You can use the AWS Management Console or the AWS CLI to change the CA certificate from **rds-ca-2019** to **rds-ca-rsa2048-g1** for a DB instance or Multi-AZ DB cluster. 

------
#### [ Console ]

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then choose the DB instance or Multi-AZ DB cluster that you want to modify. 

1. Choose **Modify**.   
![\[Modify DB instance or Multi-AZ DB cluster\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/ssl-rotate-cert-modify.png)

1. In the **Connectivity** section, choose **rds-ca-rsa2048-g1**.   
![\[Choose CA certificate\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/ssl-rotate-cert-ca-rsa2048-g1.png)

1. Choose **Continue** and check the summary of modifications. 

1. To apply the changes immediately, choose **Apply immediately**. 

1. On the confirmation page, review your changes. If they are correct, choose **Modify DB Instance** or **Modify cluster**  to save your changes. 
**Important**  
When you schedule this operation, make sure that you have updated your client-side trust store beforehand.

   Or choose **Back** to edit your changes or **Cancel** to cancel your changes. 

------
#### [ AWS CLI ]

To use the AWS CLI to change the CA from **rds-ca-2019** to **rds-ca-rsa2048-g1** for a DB instance or Multi-AZ DB cluster, call the [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) or [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) command. Specify the DB instance or cluster identifier and the `--ca-certificate-identifier` option.

Use the `--apply-immediately` parameter to apply the update immediately. By default, this operation is scheduled to run during your next maintenance window.

**Important**  
When you schedule this operation, make sure that you have updated your client-side trust store beforehand.

**Example**  
 **DB instance**   
The following example modifies `mydbinstance` by setting the CA certificate to `rds-ca-rsa2048-g1`.   
For Linux, macOS, or Unix:  

```
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --ca-certificate-identifier rds-ca-rsa2048-g1
```
For Windows:  

```
aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --ca-certificate-identifier rds-ca-rsa2048-g1
```
If your instance requires a reboot, you can use the [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) CLI command and specify the `--no-certificate-rotation-restart` option.

**Example**  
 **Multi-AZ DB cluster**   
The following example modifies `mydbcluster` by setting the CA certificate to `rds-ca-rsa2048-g1`.   
For Linux, macOS, or Unix:  

```
aws rds modify-db-cluster \
    --db-cluster-identifier mydbcluster \
    --ca-certificate-identifier rds-ca-rsa2048-g1
```
For Windows:  

```
aws rds modify-db-cluster ^
    --db-cluster-identifier mydbcluster ^
    --ca-certificate-identifier rds-ca-rsa2048-g1
```

------

## Updating your CA certificate by applying maintenance
<a name="UsingWithRDS.SSL-certificate-rotation-maintenance-update"></a>

Perform the following steps to update your CA certificate by applying maintenance.

------
#### [ Console ]

**To update your CA certificate by applying maintenance**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Certificate update**.   
![\[Certificate rotation navigation pane option\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/ssl-rotate-cert-certupdate.png)

   The **Databases requiring certificate update** page appears.  
![\[Update CA certificate for database\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/ssl-rotate-cert-update-multiple.png)
**Note**  
This page only shows the DB instances and clusters for the current AWS Region. If you have databases in more than one AWS Region, check this page in each AWS Region to see all DB instances with old SSL/TLS certificates.

1. Choose the DB instance or Multi-AZ DB cluster that you want to update.

   You can schedule the certificate rotation for your next maintenance window by choosing **Schedule**. Apply the rotation immediately by choosing **Apply now**. 
**Important**  
If you experience connectivity issues after certificate expiry, use the **Apply now** option.

1. 

   1. If you choose **Schedule**, you are prompted to confirm the CA certificate rotation. This prompt also states the scheduled window for your update.   
![\[Confirm certificate rotation\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/ssl-rotate-cert-confirm-schedule.png)

   1. If you choose **Apply now**, you are prompted to confirm the CA certificate rotation.  
![\[Confirm certificate rotation\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/ssl-rotate-cert-confirm-now.png)
**Important**  
Before scheduling the CA certificate rotation on your database, update any client applications that use SSL/TLS and the server certificate to connect. These updates are specific to your DB engine. After you have updated these client applications, you can confirm the CA certificate rotation. 

   To continue, choose the check box, and then choose **Confirm**. 

1. Repeat steps 3 and 4 for each DB instance and cluster that you want to update.

------

## Automatic server certificate rotation
<a name="UsingWithRDS.SSL-certificate-rotation-server-cert-rotation"></a>

If your root CA supports automatic server certificate rotation, RDS automatically handles the rotation of the DB server certificate. RDS uses the same root CA for this automatic rotation, so you don't need to download a new CA bundle. For more information, see [Certificate authorities](UsingWithRDS.SSL.md#UsingWithRDS.SSL.RegionCertificateAuthorities).

The rotation and validity of your DB server certificate depend on your DB engine:
+ If your DB engine supports rotation without restart, RDS automatically rotates the DB server certificate without requiring any action from you. RDS attempts to rotate your DB server certificate in your preferred maintenance window at the half-life of the DB server certificate. The new DB server certificate is valid for 12 months.
+ If your DB engine doesn't support rotation without restart, Amazon RDS makes a `server-certificate-rotation` pending maintenance action visible through the `describe-pending-maintenance-actions` API at the half-life of the certificate, or at least 3 months before expiry. You can apply the rotation by using the `apply-pending-maintenance-action` API. The new DB server certificate is valid for 36 months.
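The pending maintenance workflow described above can be sketched with the AWS CLI as follows (the ARN is a placeholder for your own DB instance; the `server-certificate-rotation` action name is the one this guide describes):

```
# List pending maintenance actions, including any server certificate rotation
aws rds describe-pending-maintenance-actions

# Apply the rotation immediately instead of waiting for the maintenance window
aws rds apply-pending-maintenance-action \
    --resource-identifier arn:aws:rds:us-east-1:123456789012:db:mydbinstance \
    --apply-action server-certificate-rotation \
    --opt-in-type immediate
```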

Use the [describe-db-engine-versions](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-engine-versions.html) command and inspect the `SupportsCertificateRotationWithoutRestart` flag to identify whether the DB engine version supports rotating the certificate without restart. For more information, see [Setting the CA for your database](UsingWithRDS.SSL.md#UsingWithRDS.SSL.RegionCertificateAuthorities.Selection).

**Important**  
For Amazon RDS for Oracle DB instances, the `SupportsCertificateRotationWithoutRestart` flag of the DB engine versions is marked as `FALSE`. However, Amazon RDS for Oracle DB instances don't require a restart; instead, the database listener is restarted during the server certificate rotation. Existing database connections are unaffected, but new connections encounter errors for a brief period while the listener restarts. To rotate the server certificate manually, use the [apply-pending-maintenance-action](https://docs.aws.amazon.com/cli/latest/reference/rds/apply-pending-maintenance-action.html) AWS CLI command.

## Sample script for importing certificates into your trust store
<a name="UsingWithRDS.SSL-certificate-rotation-sample-script"></a>

The following are sample shell scripts that import the certificate bundle into a trust store.

Each sample shell script uses keytool, which is part of the Java Development Kit (JDK). For information about installing the JDK, see [ JDK Installation Guide](https://docs.oracle.com/en/java/javase/17/install/overview-jdk-installation.html). 

------
#### [ Linux ]

The following is a sample shell script that imports the certificate bundle into a trust store on a Linux operating system.

```
mydir=tmp/certs
if [ ! -e "${mydir}" ]
then
  mkdir -p "${mydir}"
fi

truststore=${mydir}/rds-truststore.jks
storepassword=changeit

curl -sS "https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem" > ${mydir}/global-bundle.pem
awk 'split_after == 1 {n++;split_after=0} /-----END CERTIFICATE-----/ {split_after=1}{print > "rds-ca-" n+1 ".pem"}' < ${mydir}/global-bundle.pem

for CERT in rds-ca-*; do
  alias=$(openssl x509 -noout -text -in $CERT | perl -ne 'next unless /Subject:/; s/.*(CN=|CN = )//; print')
  echo "Importing $alias"
  keytool -import -file ${CERT} -alias "${alias}" -storepass ${storepassword} -keystore ${truststore} -noprompt
  rm $CERT
done

rm ${mydir}/global-bundle.pem

echo "Trust store content is: "

keytool -list -v -keystore "$truststore" -storepass ${storepassword} | grep Alias | cut -d " " -f3- | while read alias
do
  expiry=$(keytool -list -v -keystore "$truststore" -storepass ${storepassword} -alias "${alias}" | grep Valid | perl -ne 'if(/until: (.*?)\n/) { print "$1\n"; }')
  echo " Certificate ${alias} expires in '$expiry'"
done
```

------
#### [ macOS ]

The following is a sample shell script that imports the certificate bundle into a trust store on macOS.

```
mydir=tmp/certs
if [ ! -e "${mydir}" ]
then
  mkdir -p "${mydir}"
fi

truststore=${mydir}/rds-truststore.jks
storepassword=changeit

curl -sS "https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem" > ${mydir}/global-bundle.pem
split -p "-----BEGIN CERTIFICATE-----" ${mydir}/global-bundle.pem rds-ca-

for CERT in rds-ca-*; do
  alias=$(openssl x509 -noout -text -in $CERT | perl -ne 'next unless /Subject:/; s/.*(CN=|CN = )//; print')
  echo "Importing $alias"
  keytool -import -file ${CERT} -alias "${alias}" -storepass ${storepassword} -keystore ${truststore} -noprompt
  rm $CERT
done

rm ${mydir}/global-bundle.pem

echo "Trust store content is: "

keytool -list -v -keystore "$truststore" -storepass ${storepassword} | grep Alias | cut -d " " -f3- | while read alias
do
  expiry=$(keytool -list -v -keystore "$truststore" -storepass ${storepassword} -alias "${alias}" | grep Valid | perl -ne 'if(/until: (.*?)\n/) { print "$1\n"; }')
  echo " Certificate ${alias} expires in '$expiry'"
done
```

------

# Internetwork traffic privacy
<a name="inter-network-traffic-privacy"></a>

Connections are protected both between Amazon RDS and on-premises applications and between Amazon RDS and other AWS resources within the same AWS Region.

## Traffic between service and on-premises clients and applications
<a name="inter-network-traffic-privacy-on-prem"></a>

You have two connectivity options between your private network and AWS: 
+ An AWS Site-to-Site VPN connection. For more information, see [What is AWS Site-to-Site VPN?](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html) 
+ An AWS Direct Connect connection. For more information, see [What is AWS Direct Connect?](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html) 

You get access to Amazon RDS through the network by using AWS-published API operations. Clients must support the following:
+ Transport Layer Security (TLS). We require TLS 1.2 and recommend TLS 1.3.
+ Cipher suites with perfect forward secrecy (PFS) such as DHE (Ephemeral Diffie-Hellman) or ECDHE (Elliptic Curve Ephemeral Diffie-Hellman). Most modern systems such as Java 7 and later support these modes.

Additionally, requests must be signed by using an access key ID and a secret access key that is associated with an IAM principal. Or you can use the [AWS Security Token Service](https://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html) (AWS STS) to generate temporary security credentials to sign requests.
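For example, you can obtain temporary security credentials from AWS STS and export them as environment variables; the AWS CLI and SDKs then sign subsequent requests with them. The following is a sketch; the key values shown are AWS's documented example placeholders, not real credentials:

```
# Request temporary credentials valid for one hour
aws sts get-session-token --duration-seconds 3600

# Export the returned values so that subsequent CLI calls are signed with them
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_SESSION_TOKEN=your-session-token-value
```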

# Identity and access management for Amazon RDS
<a name="UsingWithRDS.IAM"></a>





AWS Identity and Access Management (IAM) is an AWS service that helps an administrator securely control access to AWS resources. IAM administrators control who can be *authenticated* (signed in) and *authorized* (have permissions) to use Amazon RDS resources. IAM is an AWS service that you can use with no additional charge.

**Topics**
+ [Audience](#security_iam_audience)
+ [Authenticating with identities](#security_iam_authentication)
+ [Managing access using policies](#security_iam_access-manage)
+ [How Amazon RDS works with IAM](security_iam_service-with-iam.md)
+ [Identity-based policy examples for Amazon RDS](security_iam_id-based-policy-examples.md)
+ [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md)
+ [Amazon RDS updates to AWS managed policies](rds-manpol-updates.md)
+ [Preventing cross-service confused deputy problems](cross-service-confused-deputy-prevention.md)
+ [IAM database authentication for MariaDB, MySQL, and PostgreSQL](UsingWithRDS.IAMDBAuth.md)
+ [Troubleshooting Amazon RDS identity and access](security_iam_troubleshoot.md)

## Audience
<a name="security_iam_audience"></a>

How you use AWS Identity and Access Management (IAM) differs, depending on the work you do in Amazon RDS.

**Service user** – If you use the Amazon RDS service to do your job, then your administrator provides you with the credentials and permissions that you need. As you use more Amazon RDS features to do your work, you might need additional permissions. Understanding how access is managed can help you request the right permissions from your administrator. If you cannot access a feature in Amazon RDS, see [Troubleshooting Amazon RDS identity and access](security_iam_troubleshoot.md).

**Service administrator** – If you're in charge of Amazon RDS resources at your company, you probably have full access to Amazon RDS. It's your job to determine which Amazon RDS features and resources your service users should access. You must then submit requests to your IAM administrator to change the permissions of your service users. Review the information on this page to understand the basic concepts of IAM. To learn more about how your company can use IAM with Amazon RDS, see [How Amazon RDS works with IAM](security_iam_service-with-iam.md).

**IAM administrator** – If you're an IAM administrator, you might want to learn details about how you can write policies to manage access to Amazon RDS. To view example Amazon RDS identity-based policies that you can use in IAM, see [Identity-based policy examples for Amazon RDS](security_iam_id-based-policy-examples.md).

## Authenticating with identities
<a name="security_iam_authentication"></a>

Authentication is how you sign in to AWS using your identity credentials. You must be authenticated as the AWS account root user, an IAM user, or by assuming an IAM role.

You can sign in as a federated identity by using credentials from an identity source such as AWS IAM Identity Center (IAM Identity Center), your company's single sign-on authentication, or your Google or Facebook credentials. For more information about signing in, see [How to sign in to your AWS account](https://docs.aws.amazon.com/signin/latest/userguide/how-to-sign-in.html) in the *AWS Sign-In User Guide*.

For programmatic access, AWS provides SDKs and a command line interface (CLI) to cryptographically sign your requests. For more information, see [AWS Signature Version 4 for API requests](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_sigv.html) in the *IAM User Guide*.

### AWS account root user
<a name="security_iam_authentication-rootuser"></a>

 When you create an AWS account, you begin with one sign-in identity called the AWS account *root user* that has complete access to all AWS services and resources. We strongly recommend that you don't use the root user for everyday tasks. For tasks that require root user credentials, see [Tasks that require root user credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html#root-user-tasks) in the *IAM User Guide*. 

### Federated identity
<a name="security_iam_authentication-federatedidentity"></a>

As a best practice, require human users to use federation with an identity provider to access AWS services using temporary credentials.

A *federated identity* is a user from your enterprise directory, web identity provider, or Directory Service that accesses AWS services using credentials from an identity source. Federated identities assume roles that provide temporary credentials.

For centralized access management, we recommend AWS IAM Identity Center. For more information, see [What is IAM Identity Center?](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) in the *AWS IAM Identity Center User Guide*.

### IAM users and groups
<a name="security_iam_authentication-iamuser"></a>

An *[IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html)* is an identity with specific permissions for a single person or application. We recommend using temporary credentials instead of IAM users with long-term credentials. For more information, see [Require human users to use federation with an identity provider to access AWS using temporary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-users-federation-idp) in the *IAM User Guide*.

An [IAM group](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html) specifies a collection of IAM users and makes permissions easier to manage for large sets of users. For more information, see [Use cases for IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/gs-identities-iam-users.html) in the *IAM User Guide*.

You can authenticate to your DB instance using IAM database authentication.

IAM database authentication works with the following DB engines:
+ RDS for MariaDB
+ RDS for MySQL
+ RDS for PostgreSQL

For more information about authenticating to your DB instance using IAM, see [IAM database authentication for MariaDB, MySQL, and PostgreSQL](UsingWithRDS.IAMDBAuth.md).

### IAM roles
<a name="security_iam_authentication-iamrole"></a>

An *[IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html)* is an identity within your AWS account that has specific permissions. It is similar to a user, but is not associated with a specific person. You can temporarily assume an IAM role in the AWS Management Console by [switching roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-console.html). You can assume a role by calling an AWS CLI or AWS API operation or by using a custom URL. For more information about methods for using roles, see [Using IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html) in the *IAM User Guide*.

IAM roles with temporary credentials are useful in the following situations:
+ **Temporary user permissions** – A user can assume an IAM role to temporarily take on different permissions for a specific task. 
+ **Federated user access** – To assign permissions to a federated identity, you create a role and define permissions for the role. When a federated identity authenticates, the identity is associated with the role and is granted the permissions that are defined by the role. For information about roles for federation, see [ Create a role for a third-party identity provider (federation)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp.html) in the *IAM User Guide*. If you use IAM Identity Center, you configure a permission set. To control what your identities can access after they authenticate, IAM Identity Center correlates the permission set to a role in IAM. For information about permissions sets, see [ Permission sets](https://docs.aws.amazon.com/singlesignon/latest/userguide/permissionsetsconcept.html) in the *AWS IAM Identity Center User Guide*. 
+ **Cross-account access** – You can use an IAM role to allow someone (a trusted principal) in a different account to access resources in your account. Roles are the primary way to grant cross-account access. However, with some AWS services, you can attach a policy directly to a resource (instead of using a role as a proxy). To learn the difference between roles and resource-based policies for cross-account access, see [How IAM roles differ from resource-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_compare-resource-policies.html) in the *IAM User Guide*.
+ **Cross-service access** – Some AWS services use features in other AWS services. For example, when you make a call in a service, it's common for that service to run applications in Amazon EC2 or store objects in Amazon S3. A service might do this using the calling principal's permissions, using a service role, or using a service-linked role. 
  + **Forward access sessions** – Forward access sessions (FAS) use the permissions of the principal calling an AWS service, combined with the requesting AWS service to make requests to downstream services. For policy details when making FAS requests, see [Forward access sessions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_forward_access_sessions.html). 
  + **Service role** – A service role is an [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see [Create a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the *IAM User Guide*. 
  + **Service-linked role** – A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit the permissions for service-linked roles. 
+ **Applications running on Amazon EC2** – You can use an IAM role to manage temporary credentials for applications that are running on an EC2 instance and making AWS CLI or AWS API requests. This is preferable to storing access keys within the EC2 instance. To assign an AWS role to an EC2 instance and make it available to all of its applications, you create an instance profile that is attached to the instance. An instance profile contains the role and enables programs that are running on the EC2 instance to get temporary credentials. For more information, see [Use an IAM role to grant permissions to applications running on Amazon EC2 instances](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html) in the *IAM User Guide*. 

To learn whether to use IAM roles, see [When to create an IAM role (instead of a user)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id.html#id_which-to-choose_role) in the *IAM User Guide*.

## Managing access using policies
<a name="security_iam_access-manage"></a>

You control access in AWS by creating policies and attaching them to IAM identities or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an entity (root user, user, or IAM role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. For more information about the structure and contents of JSON policy documents, see [Overview of JSON policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#access_policies-json) in the *IAM User Guide*.

An administrator can use policies to specify who has access to AWS resources, and what actions they can perform on those resources. Every IAM entity (permission set or role) starts with no permissions. In other words, by default, users can do nothing, not even change their own password. To give a user permission to do something, an administrator must attach a permissions policy to a user. Or the administrator can add the user to a group that has the intended permissions. When an administrator gives permissions to a group, all users in that group are granted those permissions.

IAM policies define permissions for an action regardless of the method that you use to perform the operation. For example, suppose that you have a policy that allows the `iam:GetRole` action. A user with that policy can get role information from the AWS Management Console, the AWS CLI, or the AWS API.
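For example, the `iam:GetRole` permission described above could be granted with a minimal identity-based policy like the following. This is an illustrative sketch, not a published AWS managed policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:GetRole",
            "Resource": "*"
        }
    ]
}
```

A user with this policy attached can retrieve role information through any interface, whether the console, the AWS CLI, or a direct API call.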

### Identity-based policies
<a name="security_iam_access-manage-id-based-policies"></a>

Identity-based policies are JSON permissions policy documents that you can attach to an identity, such as a permission set or role. These policies control what actions that identity can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

Identity-based policies can be further categorized as *inline policies* or *managed policies*. Inline policies are embedded directly into a single permission set or role. Managed policies are standalone policies that you can attach to multiple permission sets and roles in your AWS account. Managed policies include AWS managed policies and customer managed policies. To learn how to choose between a managed policy or an inline policy, see [Choosing between managed policies and inline policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#choosing-managed-or-inline) in the *IAM User Guide*.

For information about AWS managed policies that are specific to Amazon RDS, see [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md).

### Other policy types
<a name="security_iam_access-manage-other-policies"></a>

AWS supports additional, less-common policy types. These policy types can set the maximum permissions granted to you by the more common policy types. 
+ **Permissions boundaries** – A permissions boundary is an advanced feature in which you set the maximum permissions that an identity-based policy can grant to an IAM entity (permission set or role). You can set a permissions boundary for an entity. The resulting permissions are the intersection of the entity's identity-based policies and its permissions boundaries. Resource-based policies that specify the permission set or role in the `Principal` field are not limited by the permissions boundary. An explicit deny in any of these policies overrides the allow. For more information about permissions boundaries, see [Permissions boundaries for IAM entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) in the *IAM User Guide*.
+ **Service control policies (SCPs)** – SCPs are JSON policies that specify the maximum permissions for an organization or organizational unit (OU) in AWS Organizations. AWS Organizations is a service for grouping and centrally managing multiple AWS accounts that your business owns. If you enable all features in an organization, then you can apply service control policies (SCPs) to any or all of your accounts. The SCP limits permissions for entities in member accounts, including each AWS account root user. For more information about Organizations and SCPs, see [How SCPs work](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_about-scps.html) in the *AWS Organizations User Guide*.
+ **Session policies** – Session policies are advanced policies that you pass as a parameter when you programmatically create a temporary session for a role or federated user. The resulting session's permissions are the intersection of the permission sets or role's identity-based policies and the session policies. Permissions can also come from a resource-based policy. An explicit deny in any of these policies overrides the allow. For more information, see [Session policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session) in the *IAM User Guide*. 
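As an illustrative sketch of the permissions boundary idea, the following document, when attached as a boundary, caps an entity at read-only Amazon RDS access. Because effective permissions are the intersection of the boundary and the entity's identity-based policies, even an attached policy allowing `rds:*` would then be limited to `Describe` actions. The policy below is an assumption for illustration, not an AWS managed policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "rds:Describe*",
            "Resource": "*"
        }
    ]
}
```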

### Multiple policy types
<a name="security_iam_access-manage-multiple-policies"></a>

When multiple types of policies apply to a request, the resulting permissions are more complicated to understand. To learn how AWS determines whether to allow a request when multiple policy types are involved, see [Policy evaluation logic](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html) in the *IAM User Guide*.

# How Amazon RDS works with IAM
<a name="security_iam_service-with-iam"></a>

Before you use IAM to manage access to Amazon RDS, you should understand what IAM features are available to use with Amazon RDS.

The following table lists IAM features you can use with Amazon RDS:


| IAM feature | Amazon RDS support | 
| --- | --- | 
|  [Identity-based policies](#security_iam_service-with-iam-id-based-policies)  |  Yes  | 
|  [Resource-based policies](#security_iam_service-with-iam-resource-based-policies)  |  No  | 
|  [Policy actions](#security_iam_service-with-iam-id-based-policies-actions)  |  Yes  | 
|  [Policy resources](#security_iam_service-with-iam-id-based-policies-resources)  |  Yes  | 
|  [Policy condition keys (service-specific)](#UsingWithRDS.IAM.Conditions)  |  Yes  | 
|  [ACLs](#security_iam_service-with-iam-acls)  |  No  | 
|  [Attribute-based access control (ABAC) (tags in policies)](#security_iam_service-with-iam-tags)  |  Yes  | 
|  [Temporary credentials](#security_iam_service-with-iam-roles-tempcreds)  |  Yes  | 
|  [Forward access sessions](#security_iam_service-with-iam-principal-permissions)  |  Yes  | 
|  [Service roles](#security_iam_service-with-iam-roles-service)  |  Yes  | 
|  [Service-linked roles](#security_iam_service-with-iam-roles-service-linked)  |  Yes  | 

To get a high-level view of how Amazon RDS and other AWS services work with IAM, see [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) in the *IAM User Guide*.

**Topics**
+ [Amazon RDS identity-based policies](#security_iam_service-with-iam-id-based-policies)
+ [Resource-based policies within Amazon RDS](#security_iam_service-with-iam-resource-based-policies)
+ [Policy actions for Amazon RDS](#security_iam_service-with-iam-id-based-policies-actions)
+ [Policy resources for Amazon RDS](#security_iam_service-with-iam-id-based-policies-resources)
+ [Policy condition keys for Amazon RDS](#UsingWithRDS.IAM.Conditions)
+ [Access control lists (ACLs) in Amazon RDS](#security_iam_service-with-iam-acls)
+ [Attribute-based access control (ABAC) in policies with Amazon RDS tags](#security_iam_service-with-iam-tags)
+ [Using temporary credentials with Amazon RDS](#security_iam_service-with-iam-roles-tempcreds)
+ [Forward access sessions for Amazon RDS](#security_iam_service-with-iam-principal-permissions)
+ [Service roles for Amazon RDS](#security_iam_service-with-iam-roles-service)
+ [Service-linked roles for Amazon RDS](#security_iam_service-with-iam-roles-service-linked)

## Amazon RDS identity-based policies
<a name="security_iam_service-with-iam-id-based-policies"></a>

**Supports identity-based policies:** Yes.

Identity-based policies are JSON permissions policy documents that you can attach to an identity, such as an IAM user, group of users, or role. These policies control what actions users and roles can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see [Define custom IAM permissions with customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

With IAM identity-based policies, you can specify allowed or denied actions and resources as well as the conditions under which actions are allowed or denied. To learn about all of the elements that you can use in a JSON policy, see [IAM JSON policy elements reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html) in the *IAM User Guide*.

### Identity-based policy examples for Amazon RDS
<a name="security_iam_service-with-iam-id-based-policies-examples"></a>

To view examples of Amazon RDS identity-based policies, see [Identity-based policy examples for Amazon RDS](security_iam_id-based-policy-examples.md).

## Resource-based policies within Amazon RDS
<a name="security_iam_service-with-iam-resource-based-policies"></a>

**Supports resource-based policies:** No.

Resource-based policies are JSON policy documents that you attach to a resource. Examples of resource-based policies are IAM *role trust policies* and Amazon S3 *bucket policies*. In services that support resource-based policies, service administrators can use them to control access to a specific resource. For the resource where the policy is attached, the policy defines what actions a specified principal can perform on that resource and under what conditions. You must [specify a principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) in a resource-based policy. Principals can include accounts, users, roles, federated users, or AWS services.

To enable cross-account access, you can specify an entire account or IAM entities in another account as the principal in a resource-based policy. For more information, see [Cross account resource access in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-cross-account-resource-access.html) in the *IAM User Guide*.

## Policy actions for Amazon RDS
<a name="security_iam_service-with-iam-id-based-policies-actions"></a>

**Supports policy actions:** Yes.

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Action` element of a JSON policy describes the actions that you can use to allow or deny access in a policy. Include actions in a policy to grant permissions to perform the associated operation.

Policy actions in Amazon RDS use the following prefix before the action: `rds:`. For example, to grant someone permission to describe DB instances with the Amazon RDS `DescribeDBInstances` API operation, you include the `rds:DescribeDBInstances` action in their policy. Policy statements must include either an `Action` or `NotAction` element. Amazon RDS defines its own set of actions that describe tasks that you can perform with this service.

To specify multiple actions in a single statement, separate them with commas as follows.

```
"Action": [
      "rds:action1",
      "rds:action2"
]
```

You can specify multiple actions using wildcards (\*). For example, to specify all actions that begin with the word `Describe`, include the following action.

```
"Action": "rds:Describe*"
```



To see a list of Amazon RDS actions, see [Actions Defined by Amazon RDS](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html#amazonrds-actions-as-permissions) in the *Service Authorization Reference*.
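Putting these elements together, a read-only statement that names specific `rds:`-prefixed actions might look like the following sketch. The choice of `Describe` actions here is illustrative; verify the exact action names you need in the Service Authorization Reference:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds:DescribeDBInstances",
                "rds:DescribeDBSnapshots"
            ],
            "Resource": "*"
        }
    ]
}
```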

## Policy resources for Amazon RDS
<a name="security_iam_service-with-iam-id-based-policies-resources"></a>

**Supports policy resources:** Yes.

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Resource` JSON policy element specifies the object or objects to which the action applies. As a best practice, specify a resource using its [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html). For actions that don't support resource-level permissions, use a wildcard (\*) to indicate that the statement applies to all resources.

```
"Resource": "*"
```

The DB instance resource has the following Amazon Resource Name (ARN).

```
arn:${Partition}:rds:${Region}:${Account}:${ResourceType}:${Resource}
```

For more information about the format of ARNs, see [Amazon Resource Names (ARNs) and AWS service namespaces](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).

For example, to specify the `dbtest` DB instance in your statement, use the following ARN.

```
"Resource": "arn:aws:rds:us-west-2:123456789012:db:dbtest"
```

To specify all DB instances that belong to a specific account, use the wildcard (\*).

```
"Resource": "arn:aws:rds:us-east-1:123456789012:db:*"
```

Some RDS API operations, such as those for creating resources, can't be performed on a specific resource. In those cases, use the wildcard (\*).

```
"Resource": "*"
```

Many Amazon RDS API operations involve multiple resources. For example, `CreateDBInstance` creates a DB instance. You can specify that a user must use a specific security group and parameter group when creating a DB instance. To specify multiple resources in a single statement, separate the ARNs with commas. 

```
"Resource": [
      "resource1",
      "resource2"
]
```

To see a list of Amazon RDS resource types and their ARNs, see [Resources Defined by Amazon RDS](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html#amazonrds-resources-for-iam-policies) in the *Service Authorization Reference*. To learn with which actions you can specify the ARN of each resource, see [Actions Defined by Amazon RDS](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html#amazonrds-actions-as-permissions).
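For example, the `CreateDBInstance` scenario above, where a user must create DB instances with a specific parameter group and security group, might be expressed as follows. The account ID, Region, and the parameter group (`pg`) and security group (`secgrp`) resource names are placeholders; confirm the resource-type ARN formats in the Service Authorization Reference before relying on them:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "rds:CreateDBInstance",
            "Resource": [
                "arn:aws:rds:us-east-1:123456789012:db:*",
                "arn:aws:rds:us-east-1:123456789012:pg:my-param-group",
                "arn:aws:rds:us-east-1:123456789012:secgrp:my-security-group"
            ]
        }
    ]
}
```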

## Policy condition keys for Amazon RDS
<a name="UsingWithRDS.IAM.Conditions"></a>

**Supports service-specific policy condition keys:** Yes.

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Condition` element specifies when statements execute based on defined criteria. You can create conditional expressions that use [condition operators](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html), such as equals or less than, to match the condition in the policy with values in the request. To see all AWS global condition keys, see [AWS global condition context keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html) in the *IAM User Guide*.

Amazon RDS defines its own set of condition keys and also supports using some global condition keys.

All RDS API operations support the `aws:RequestedRegion` condition key.

To see a list of Amazon RDS condition keys, see [Condition Keys for Amazon RDS](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html#amazonrds-policy-keys) in the *Service Authorization Reference*. To learn with which actions and resources you can use a condition key, see [Actions Defined by Amazon RDS](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html#amazonrds-actions-as-permissions).
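Because all RDS API operations support `aws:RequestedRegion`, one way to restrict RDS actions to a single Region is a statement like the following sketch (the Region and the `Describe*` action set are illustrative choices):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "rds:Describe*",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:RequestedRegion": "us-west-2"
                }
            }
        }
    ]
}
```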

## Access control lists (ACLs) in Amazon RDS
<a name="security_iam_service-with-iam-acls"></a>

**Supports access control lists (ACLs):** No.

Access control lists (ACLs) control which principals (account members, users, or roles) have permissions to access a resource. ACLs are similar to resource-based policies, although they do not use the JSON policy document format.

## Attribute-based access control (ABAC) in policies with Amazon RDS tags
<a name="security_iam_service-with-iam-tags"></a>

**Supports attribute-based access control (ABAC) tags in policies:** Yes.

Attribute-based access control (ABAC) is an authorization strategy that defines permissions based on attributes called tags. You can attach tags to IAM entities and AWS resources, then design ABAC policies to allow operations when the principal's tag matches the tag on the resource.

To control access based on tags, you provide tag information in the [condition element](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) of a policy using the `aws:ResourceTag/key-name`, `aws:RequestTag/key-name`, or `aws:TagKeys` condition keys.

If a service supports all three condition keys for every resource type, then the value is **Yes** for the service. If a service supports all three condition keys for only some resource types, then the value is **Partial**.

For more information about ABAC, see [Define permissions with ABAC authorization](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) in the *IAM User Guide*. To view a tutorial with steps for setting up ABAC, see [Use attribute-based access control (ABAC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_attribute-based-access-control.html) in the *IAM User Guide*.

For more information about tagging Amazon RDS resources, see [Specifying conditions: Using custom tags](UsingWithRDS.IAM.SpecifyingCustomTags.md). To view an example identity-based policy for limiting access to a resource based on the tags on that resource, see [Grant permission for actions on a resource with a specific tag with two different values](security_iam_id-based-policy-examples-create-and-modify-examples.md#security_iam_id-based-policy-examples-grant-permissions-tags).
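As a sketch of the ABAC pattern, the following statement allows actions only on resources tagged `environment=test`, using the `aws:ResourceTag` condition key described above. The tag key and value and the chosen actions are hypothetical, and not every RDS action supports resource-tag conditions, so check the per-action condition key support in the Service Authorization Reference:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds:StartDBInstance",
                "rds:StopDBInstance"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/environment": "test"
                }
            }
        }
    ]
}
```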

## Using temporary credentials with Amazon RDS
<a name="security_iam_service-with-iam-roles-tempcreds"></a>

**Supports temporary credentials:** Yes.

Temporary credentials provide short-term access to AWS resources and are automatically created when you use federation or switch roles. AWS recommends that you dynamically generate temporary credentials instead of using long-term access keys. For more information, see [Temporary security credentials in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) and [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) in the *IAM User Guide*.

## Forward access sessions for Amazon RDS
<a name="security_iam_service-with-iam-principal-permissions"></a>

**Supports forward access sessions:** Yes.

 Forward access sessions (FAS) use the permissions of the principal calling an AWS service, combined with the requesting AWS service to make requests to downstream services. For policy details when making FAS requests, see [Forward access sessions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_forward_access_sessions.html). 

## Service roles for Amazon RDS
<a name="security_iam_service-with-iam-roles-service"></a>

**Supports service roles:** Yes.

 A service role is an [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see [Create a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the *IAM User Guide*. 

**Warning**  
Changing the permissions for a service role might break Amazon RDS functionality. Edit service roles only when Amazon RDS provides guidance to do so.

## Service-linked roles for Amazon RDS
<a name="security_iam_service-with-iam-roles-service-linked"></a>

**Supports service-linked roles:** Yes.

 A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit the permissions for service-linked roles. 

For details about using Amazon RDS service-linked roles, see [Using service-linked roles for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md).

# Identity-based policy examples for Amazon RDS
<a name="security_iam_id-based-policy-examples"></a>

By default, permission sets and roles don't have permission to create or modify Amazon RDS resources. They also can't perform tasks using the AWS Management Console, AWS CLI, or AWS API. An administrator must create IAM policies that grant permission sets and roles permission to perform specific API operations on the resources that they need. The administrator must then attach those policies to the permission sets or roles that require those permissions.

To learn how to create an IAM identity-based policy using these example JSON policy documents, see [Creating policies on the JSON tab](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html#access_policies_create-json-editor) in the *IAM User Guide*.

**Topics**
+ [Policy best practices](#security_iam_service-with-iam-policy-best-practices)
+ [Using the Amazon RDS console](#security_iam_id-based-policy-examples-console)
+ [Permissions required to use the console](#UsingWithRDS.IAM.RequiredPermissions.Console)
+ [Allow users to view their own permissions](#security_iam_id-based-policy-examples-view-own-permissions)
+ [Permission policies to create, modify, and delete resources in Amazon RDS](security_iam_id-based-policy-examples-create-and-modify-examples.md)
+ [Example policies: Using condition keys](UsingWithRDS.IAM.Conditions.Examples.md)
+ [Specifying conditions: Using custom tags](UsingWithRDS.IAM.SpecifyingCustomTags.md)
+ [Grant permission to tag Amazon RDS resources during creation](security_iam_id-based-policy-examples-grant-permissions-tags-on-create.md)

## Policy best practices
<a name="security_iam_service-with-iam-policy-best-practices"></a>

Identity-based policies determine whether someone can create, access, or delete Amazon RDS resources in your account. These actions can incur costs for your AWS account. When you create or edit identity-based policies, follow these guidelines and recommendations:
+ **Get started with AWS managed policies and move toward least-privilege permissions** – To get started granting permissions to your users and workloads, use the *AWS managed policies* that grant permissions for many common use cases. They are available in your AWS account. We recommend that you reduce permissions further by defining AWS customer managed policies that are specific to your use cases. For more information, see [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) or [AWS managed policies for job functions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html) in the *IAM User Guide*.
+ **Apply least-privilege permissions** – When you set permissions with IAM policies, grant only the permissions required to perform a task. You do this by defining the actions that can be taken on specific resources under specific conditions, also known as *least-privilege permissions*. For more information about using IAM to apply permissions, see [ Policies and permissions in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) in the *IAM User Guide*.
+ **Use conditions in IAM policies to further restrict access** – You can add a condition to your policies to limit access to actions and resources. For example, you can write a policy condition to specify that all requests must be sent using SSL. You can also use conditions to grant access to service actions if they are used through a specific AWS service, such as CloudFormation. For more information, see [ IAM JSON policy elements: Condition](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) in the *IAM User Guide*.
+ **Use IAM Access Analyzer to validate your IAM policies to ensure secure and functional permissions** – IAM Access Analyzer validates new and existing policies so that the policies adhere to the IAM policy language (JSON) and IAM best practices. IAM Access Analyzer provides more than 100 policy checks and actionable recommendations to help you author secure and functional policies. For more information, see [Validate policies with IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-validation.html) in the *IAM User Guide*.
+ **Require multi-factor authentication (MFA)** – If you have a scenario that requires IAM users or a root user in your AWS account, turn on MFA for additional security. To require MFA when API operations are called, add MFA conditions to your policies. For more information, see [ Secure API access with MFA](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html) in the *IAM User Guide*.

For more information about best practices in IAM, see [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*.
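As an illustration of the SSL condition mentioned above, the following statement is a minimal sketch that uses the global condition key `aws:SecureTransport` to deny any Amazon RDS API request that isn't sent over SSL:

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "DenyNonSSLRequests",
         "Effect": "Deny",
         "Action": "rds:*",
         "Resource": "*",
         "Condition": {
            "Bool": {
               "aws:SecureTransport": "false"
            }
         }
      }
   ]
}
```

Because an explicit deny overrides any allow, you can attach a statement like this alongside broader allow policies.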

## Using the Amazon RDS console
<a name="security_iam_id-based-policy-examples-console"></a>

To access the Amazon RDS console, you must have a minimum set of permissions. These permissions must allow you to list and view details about the Amazon RDS resources in your AWS account. If you create an identity-based policy that is more restrictive than the minimum required permissions, the console won't function as intended for entities (users or roles) with that policy.

You don't need to allow minimum console permissions for users that are making calls only to the AWS CLI or the AWS API. Instead, allow access to only the actions that match the API operation that you're trying to perform.

To ensure that entities with a more restrictive policy can still use the Amazon RDS console, also attach the following AWS managed policy to them.

```
AmazonRDSReadOnlyAccess
```

For more information, see [Adding permissions to a user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) in the *IAM User Guide*.

## Permissions required to use the console
<a name="UsingWithRDS.IAM.RequiredPermissions.Console"></a>

For a user to work with the console, that user must have a minimum set of permissions. These permissions allow the user to describe the Amazon RDS resources for their AWS account and to view other related information, including Amazon EC2 security and network information.

If you create an IAM policy that is more restrictive than the minimum required permissions, the console doesn't function as intended for users with that IAM policy. To ensure that those users can still use the console, also attach the `AmazonRDSReadOnlyAccess` managed policy to the user, as described in [Managing access using policies](UsingWithRDS.IAM.md#security_iam_access-manage).

You don't need to allow minimum console permissions for users that are making calls only to the AWS CLI or the Amazon RDS API. 

The following AWS managed policy grants full access to all Amazon RDS resources in the AWS account:

```
AmazonRDSFullAccess
```

## Allow users to view their own permissions
<a name="security_iam_id-based-policy-examples-view-own-permissions"></a>

This example shows how you might create a policy that allows IAM users to view the inline and managed policies that are attached to their user identity. This policy includes permissions to complete this action on the console or programmatically using the AWS CLI or AWS API.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ViewOwnUserInfo",
            "Effect": "Allow",
            "Action": [
                "iam:GetUserPolicy",
                "iam:ListGroupsForUser",
                "iam:ListAttachedUserPolicies",
                "iam:ListUserPolicies",
                "iam:GetUser"
            ],
            "Resource": ["arn:aws:iam::*:user/${aws:username}"]
        },
        {
            "Sid": "NavigateInConsole",
            "Effect": "Allow",
            "Action": [
                "iam:GetGroupPolicy",
                "iam:GetPolicyVersion",
                "iam:GetPolicy",
                "iam:ListAttachedGroupPolicies",
                "iam:ListGroupPolicies",
                "iam:ListPolicyVersions",
                "iam:ListPolicies",
                "iam:ListUsers"
            ],
            "Resource": "*"
        }
    ]
}
```

# Permission policies to create, modify, and delete resources in Amazon RDS
<a name="security_iam_id-based-policy-examples-create-and-modify-examples"></a>

The following sections present examples of permission policies that grant and restrict access to resources:

## Allow a user to create DB instances in an AWS account
<a name="security_iam_id-based-policy-examples-create-db-instance-in-account"></a>

The following is an example policy that allows users in the AWS account `123456789012` to create DB instances. The policy requires that the name of the new DB instance begin with `test`. The new DB instance must also use the MySQL database engine and the `db.t2.micro` DB instance class. In addition, the new DB instance must use an option group and a DB parameter group that start with `default`, and it must use the `default` subnet group.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "AllowCreateDBInstanceOnly",
         "Effect": "Allow",
         "Action": [
            "rds:CreateDBInstance"
         ],
         "Resource": [
            "arn:aws:rds:*:123456789012:db:test*",
            "arn:aws:rds:*:123456789012:og:default*",
            "arn:aws:rds:*:123456789012:pg:default*",
            "arn:aws:rds:*:123456789012:subgrp:default"
         ],
         "Condition": {
            "StringEquals": {
               "rds:DatabaseEngine": "mysql",
               "rds:DatabaseClass": "db.t2.micro"
            }
         }
      }
   ]
}
```

------

The policy includes a single statement that specifies the following permissions for the user:
+ The policy allows the account to create a DB instance using the [CreateDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html) API operation (this also applies to the [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) AWS CLI command and the AWS Management Console).
+ The `Resource` element specifies that the user can perform actions on or with resources. You specify resources using an Amazon Resource Name (ARN). This ARN includes the name of the service that the resource belongs to (`rds`), the AWS Region (`*` indicates any Region in this example), the AWS account number (`123456789012` is the account number in this example), and the type of resource. For more information about creating ARNs, see [Amazon Resource Names (ARNs) in Amazon RDS](USER_Tagging.ARN.md).

  The `Resource` element in the example specifies the following policy constraints on resources for the user:
  + The DB instance identifier for the new DB instance must begin with `test` (for example, `testCustomerData1`, `test-region2-data`).
  + The option group for the new DB instance must begin with `default`.
  + The DB parameter group for the new DB instance must begin with `default`.
  + The subnet group for the new DB instance must be the `default` subnet group.
+ The `Condition` element specifies that the DB engine must be MySQL and the DB instance class must be `db.t2.micro`. The `Condition` element specifies the conditions when a policy should take effect. You can add permissions or restrictions by using the `Condition` element. For more information about specifying conditions, see [Policy condition keys for Amazon RDS](security_iam_service-with-iam.md#UsingWithRDS.IAM.Conditions). This example specifies the `rds:DatabaseEngine` and `rds:DatabaseClass` conditions. For information about the valid condition values for `rds:DatabaseEngine`, see the list under the `Engine` parameter in [CreateDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html). For information about the valid condition values for `rds:DatabaseClass`, see [Supported DB engines for DB instance classes](Concepts.DBInstanceClass.Support.md).

The policy doesn't specify the `Principal` element because in an identity-based policy you don't specify the principal who gets the permission. When you attach a policy to a user, the user is the implicit principal. When you attach a permission policy to an IAM role, the principal identified in the role's trust policy gets the permissions.

To see a list of Amazon RDS actions, see [Actions Defined by Amazon RDS](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html#amazonrds-actions-as-permissions) in the *Service Authorization Reference*.
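Because the `Resource` ARNs above use `*` for the Region, the permission applies in every Region. To scope the same permission to a single Region, you can name the Region explicitly, as in this illustrative variant of the `Resource` element:

```
"Resource": [
   "arn:aws:rds:us-west-2:123456789012:db:test*",
   "arn:aws:rds:us-west-2:123456789012:og:default*",
   "arn:aws:rds:us-west-2:123456789012:pg:default*",
   "arn:aws:rds:us-west-2:123456789012:subgrp:default"
]
```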

## Allow a user to perform any describe action on any RDS resource
<a name="IAMPolicyExamples-RDS-perform-describe-action"></a>

The following permissions policy allows a user to run all of the actions that begin with `Describe`. These actions show information about an RDS resource, such as a DB instance. The wildcard character (`*`) in the `Resource` element indicates that the actions are allowed for all Amazon RDS resources owned by the account. 

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "AllowRDSDescribe",
         "Effect": "Allow",
         "Action": "rds:Describe*",
         "Resource": "*"
      }
   ]
}
```

------
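Because tags aren't returned by the `Describe` actions, a read-only user often also needs `rds:ListTagsForResource` to see resource tags in the console. The following is an illustrative extension of the policy above:

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "AllowRDSDescribeAndTags",
         "Effect": "Allow",
         "Action": [
            "rds:Describe*",
            "rds:ListTagsForResource"
         ],
         "Resource": "*"
      }
   ]
}
```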

## Allow a user to create a DB instance that uses the specified DB parameter group and subnet group
<a name="security_iam_id-based-policy-examples-create-db-instance-specified-groups"></a>

The following permissions policy allows a user to create a DB instance only if it uses the `mydbpg` DB parameter group and the `mydbsubnetgroup` DB subnet group. 

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "VisualEditor0",
         "Effect": "Allow",
         "Action": "rds:CreateDBInstance",
         "Resource": [
            "arn:aws:rds:*:*:pg:mydbpg",
            "arn:aws:rds:*:*:subgrp:mydbsubnetgroup"
         ]
      }
   ]
}
```

------

## Grant permission for actions on a resource with a specific tag with two different values
<a name="security_iam_id-based-policy-examples-grant-permissions-tags"></a>

You can use conditions in your identity-based policy to control access to Amazon RDS resources based on tags. The following policy allows permission to perform the `CreateDBSnapshot` API operation on DB instances with either the `stage` tag set to `development` or `test`.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"AllowAnySnapshotName",
         "Effect":"Allow",
         "Action":[
            "rds:CreateDBSnapshot"
         ],
         "Resource":"arn:aws:rds:*:123456789012:snapshot:*"
      },
      {
         "Sid":"AllowDevTestToCreateSnapshot",
         "Effect":"Allow",
         "Action":[
            "rds:CreateDBSnapshot"
         ],
         "Resource":"arn:aws:rds:*:123456789012:db:*",
         "Condition":{
            "StringEquals":{
                "rds:db-tag/stage":[
                  "development",
                  "test"
               ]
            }
         }
      }
   ]
}
```

------

The following policy allows permission to perform the `ModifyDBInstance` API operation on DB instances with either the `stage` tag set to `development` or `test`.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"AllowChangingParameterOptionSecurityGroups",
         "Effect":"Allow",
         "Action":[
            "rds:ModifyDBInstance"
         ],
         "Resource": [
            "arn:aws:rds:*:123456789012:pg:*",
            "arn:aws:rds:*:123456789012:secgrp:*",
            "arn:aws:rds:*:123456789012:og:*"
         ]
      },
      {
         "Sid":"AllowDevTestToModifyInstance",
         "Effect":"Allow",
         "Action":[
            "rds:ModifyDBInstance"
         ],
         "Resource":"arn:aws:rds:*:123456789012:db:*",
         "Condition":{
            "StringEquals":{
                "rds:db-tag/stage":[
                  "development",
                  "test"
               ]
            }
         }
      }
   ]
}
```

------

## Prevent a user from deleting a DB instance
<a name="IAMPolicyExamples-RDS-prevent-db-deletion"></a>

The following permissions policy prevents a user from deleting a specific DB instance. For example, you might want to deny the ability to delete your production DB instances to any user who is not an administrator.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "DenyDelete1",
         "Effect": "Deny",
         "Action": "rds:DeleteDBInstance",
         "Resource": "arn:aws:rds:us-west-2:123456789012:db:my-mysql-instance"
      }
   ]
}
```

------

## Deny all access to a resource
<a name="IAMPolicyExamples-RDS-deny-all-access"></a>

You can explicitly deny access to a resource. Deny policies take precedence over allow policies. The following policy explicitly denies a user the ability to manage a resource:

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Effect": "Deny",
         "Action": "rds:*",
         "Resource": "arn:aws:rds:us-east-1:123456789012:db:mydb"
      }
   ]
}
```

------
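Because an explicit deny overrides any allow, a common pattern pairs a broad allow with a targeted deny. The following illustrative policy grants full Amazon RDS access while still blocking every action on the single `mydb` instance:

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "AllowAllRDS",
         "Effect": "Allow",
         "Action": "rds:*",
         "Resource": "*"
      },
      {
         "Sid": "DenyMyDB",
         "Effect": "Deny",
         "Action": "rds:*",
         "Resource": "arn:aws:rds:us-east-1:123456789012:db:mydb"
      }
   ]
}
```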

# Example policies: Using condition keys
<a name="UsingWithRDS.IAM.Conditions.Examples"></a>

Following are examples of how you can use condition keys in Amazon RDS IAM permissions policies. 

## Example 1: Grant permission to create a DB instance that uses a specific DB engine and isn't MultiAZ
<a name="w2aac58c48c33c21b5"></a>

The following policy uses an RDS condition key and allows a user to create only DB instances that use the MySQL database engine and that aren't Multi-AZ deployments. The `Condition` element specifies that the database engine must be MySQL and that `rds:MultiAz` must be false. 

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "AllowMySQLCreate",
         "Effect": "Allow",
         "Action": "rds:CreateDBInstance",
         "Resource": "*",
         "Condition": {
            "StringEquals": {
               "rds:DatabaseEngine": "mysql"
            },
            "Bool": {
               "rds:MultiAz": false
            }
         }
      }
   ]
}
```

------

## Example 2: Explicitly deny permission to create DB instances for certain DB instance classes and create DB instances that use Provisioned IOPS
<a name="w2aac58c48c33c21b7"></a>

The following policy explicitly denies permission to create DB instances that use the DB instance classes `db.r3.8xlarge` and `db.m4.10xlarge`, which are the largest and most expensive DB instance classes. This policy also prevents users from creating DB instances that use Provisioned IOPS, which incurs an additional cost. 

Explicitly denying permission supersedes any other permissions granted. This ensures that identities don't accidentally get permissions that you never want to grant.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "DenyLargeCreate",
         "Effect": "Deny",
         "Action": "rds:CreateDBInstance",
         "Resource": "*",
         "Condition": {
            "StringEquals": {
               "rds:DatabaseClass": [
                  "db.r3.8xlarge",
                  "db.m4.10xlarge"
               ]
            }
         }
      },
      {
         "Sid": "DenyPIOPSCreate",
         "Effect": "Deny",
         "Action": "rds:CreateDBInstance",
         "Resource": "*",
         "Condition": {
            "NumericNotEquals": {
               "rds:Piops": "0"
            }
         }
      }
   ]
}
```

------

## Example 3: Limit the set of tag keys and values that can be used to tag a resource
<a name="w2aac58c48c33c21b9"></a>

The following policy uses an RDS condition key and allows a tag with the key `stage` and the values `test`, `qa`, or `production` to be added to a resource.

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTagEdits",
      "Effect": "Allow",
      "Action": [
        "rds:AddTagsToResource",
        "rds:RemoveTagsFromResource"
      ],
      "Resource": "arn:aws:rds:us-east-1:123456789012:db:db-123456",
      "Condition": {
        "StringEquals": {
          "rds:req-tag/stage": [
            "test",
            "qa",
            "production"
          ]
        }
      }
    }
  ]
}
```

------

# Specifying conditions: Using custom tags
<a name="UsingWithRDS.IAM.SpecifyingCustomTags"></a>

Amazon RDS supports specifying conditions in an IAM policy using custom tags.

For example, suppose that you add a tag named `environment` to your DB instances with values such as `beta`, `staging`, `production`, and so on. If you do, you can create a policy that restricts certain users to DB instances based on the `environment` tag value.

**Note**  
Custom tag identifiers are case-sensitive.

The following table lists the RDS tag identifiers that you can use in a `Condition` element. 

<a name="rds-iam-condition-tag-reference"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAM.SpecifyingCustomTags.html)

The syntax for a custom tag condition is as follows:

`"Condition":{"StringEquals":{"rds:rds-tag-identifier/tag-name": ["value"]} }` 

For example, the following `Condition` element applies to DB instances with a tag named `environment` and a tag value of `production`. 

` "Condition":{"StringEquals":{"rds:db-tag/environment": ["production"]} } ` 

For information about creating tags, see [Tagging Amazon RDS resources](USER_Tagging.md).
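Embedded in a complete statement, the preceding condition might look like the following sketch, which allows `rds:ModifyDBInstance` only on DB instances tagged `environment=production`:

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "AllowModifyProductionOnly",
         "Effect": "Allow",
         "Action": "rds:ModifyDBInstance",
         "Resource": "arn:aws:rds:*:123456789012:db:*",
         "Condition": {
            "StringEquals": {
               "rds:db-tag/environment": ["production"]
            }
         }
      }
   ]
}
```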

**Important**  
If you manage access to your RDS resources using tagging, we recommend that you secure access to the tags for your RDS resources. You can manage access to tags by creating policies for the `AddTagsToResource` and `RemoveTagsFromResource` actions. For example, the following policy denies users the ability to add or remove tags for all resources. You can then create policies to allow specific users to add or remove tags.   

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"DenyTagUpdates",
         "Effect":"Deny",
         "Action":[
            "rds:AddTagsToResource",
            "rds:RemoveTagsFromResource"
         ],
         "Resource":"*"
      }
   ]
}
```

To see a list of Amazon RDS actions, see [Actions Defined by Amazon RDS](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html#amazonrds-actions-as-permissions) in the *Service Authorization Reference*.

## Example policies: Using custom tags
<a name="UsingWithRDS.IAM.Conditions.Tags.Examples"></a>

Following are examples of how you can use custom tags in Amazon RDS IAM permissions policies. For more information about adding tags to an Amazon RDS resource, see [Amazon Resource Names (ARNs) in Amazon RDS](USER_Tagging.ARN.md). 

**Note**  
All examples use the us-west-2 region and contain fictitious account IDs.

### Example 1: Grant permission for actions on a resource with a specific tag with two different values
<a name="w2aac58c48c33c23c29b6"></a>

The following policy allows permission to perform the `CreateDBSnapshot` API operation on DB instances with either the `stage` tag set to `development` or `test`.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"AllowAnySnapshotName",
         "Effect":"Allow",
         "Action":[
            "rds:CreateDBSnapshot"
         ],
         "Resource":"arn:aws:rds:*:123456789012:snapshot:*"
      },
      {
         "Sid":"AllowDevTestToCreateSnapshot",
         "Effect":"Allow",
         "Action":[
            "rds:CreateDBSnapshot"
         ],
         "Resource":"arn:aws:rds:*:123456789012:db:*",
         "Condition":{
            "StringEquals":{
                "rds:db-tag/stage":[
                  "development",
                  "test"
               ]
            }
         }
      }
   ]
}
```

------

The following policy allows permission to perform the `ModifyDBInstance` API operation on DB instances with either the `stage` tag set to `development` or `test`.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"AllowChangingParameterOptionSecurityGroups",
         "Effect":"Allow",
         "Action":[
            "rds:ModifyDBInstance"
         ],
         "Resource": [
            "arn:aws:rds:*:123456789012:pg:*",
            "arn:aws:rds:*:123456789012:secgrp:*",
            "arn:aws:rds:*:123456789012:og:*"
         ]
      },
      {
         "Sid":"AllowDevTestToModifyInstance",
         "Effect":"Allow",
         "Action":[
            "rds:ModifyDBInstance"
         ],
         "Resource":"arn:aws:rds:*:123456789012:db:*",
         "Condition":{
            "StringEquals":{
               "rds:db-tag/stage":[
                  "development",
                  "test"
               ]
            }
         }
      }
   ]
}
```

------

### Example 2: Explicitly deny permission to create a DB instance that uses specified DB parameter groups
<a name="w2aac58c48c33c23c29b8"></a>

The following policy explicitly denies permission to create a DB instance that uses DB parameter groups with specific tag values. You might apply this policy if you require that a specific customer-created DB parameter group always be used when creating DB instances. Policies that use `Deny` are most often used to restrict access that was granted by a broader policy.

Explicitly denying permission supersedes any other permissions granted. This ensures that identities don't accidentally get permissions that you never want to grant.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"DenyProductionCreate",
         "Effect":"Deny",
         "Action":"rds:CreateDBInstance",
         "Resource":"arn:aws:rds:*:123456789012:pg:*",
         "Condition":{
            "StringEquals":{
               "rds:pg-tag/usage":"prod"
            }
         }
      }
   ]
}
```

------

### Example 3: Grant permission for actions on a DB instance with an instance name that is prefixed with a user name
<a name="w2aac58c48c33c23c29c10"></a>

The following policy allows a user to call any API operation (except `AddTagsToResource` and `RemoveTagsFromResource`) on a DB instance that has a DB instance name prefixed with the user's name, and that either has a tag called `stage` equal to `devo` or has no tag called `stage`.

The `Resource` line in the policy identifies a resource by its Amazon Resource Name (ARN). For more information about using ARNs with Amazon RDS resources, see [Amazon Resource Names (ARNs) in Amazon RDS](USER_Tagging.ARN.md). 

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"AllowFullDevAccessNoTags",
         "Effect":"Allow",
         "NotAction":[
            "rds:AddTagsToResource",
            "rds:RemoveTagsFromResource"
         ],
         "Resource":"arn:aws:rds:*:123456789012:db:${aws:username}*",
         "Condition":{
            "StringEqualsIfExists":{
               "rds:db-tag/stage":"devo"
            }
         }
      }
   ]
}
```

------

# Grant permission to tag Amazon RDS resources during creation
<a name="security_iam_id-based-policy-examples-grant-permissions-tags-on-create"></a>

Some RDS API operations allow you to specify tags when you create the resource. You can use resource tags to implement attribute-based access control (ABAC). For more information, see [What is ABAC for AWS?](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) and [Controlling access to AWS resources using tags](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html).

For users to tag resources on creation, they must have permission to use the action that creates the resource, such as `rds:CreateDBInstance`. If tags are specified in the create action, RDS performs additional authorization on the `rds:AddTagsToResource` action to verify whether users have permission to create tags. Therefore, users must also have explicit permission to use the `rds:AddTagsToResource` action.

In the IAM policy definition for the `rds:AddTagsToResource` action, you can use the `aws:RequestTag` condition key to require tags in a request to tag a resource.

For example, the following policy allows users to create DB instances and apply tags during DB instance creation, but only with specific tag keys (`environment` or `project`):

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement": [
       {
           "Effect": "Allow",
           "Action": [
               "rds:CreateDBInstance"
           ],
           "Resource": "*"
       },
       {
           "Effect": "Allow",
           "Action": [
               "rds:AddTagsToResource"
           ],
           "Resource": "*",
           "Condition": {
               "StringEquals": {
                   "aws:RequestTag/environment": ["production", "development"],
                   "aws:RequestTag/project": ["dataanalytics", "webapp"]
               },
               "ForAllValues:StringEquals": {
                   "aws:TagKeys": ["environment", "project"]
               }
           }
       }
   ]
}
```

------

This policy denies any create DB instance request that includes tags other than the `environment` or `project` tags, or that doesn't specify either of these tags. Additionally, users must specify values for the tags that match the allowed values in the policy.

The following policy allows users to create DB clusters and apply any tags during creation except the `environment=prod` tag:

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement": [
       {
           "Effect": "Allow",
           "Action": [
               "rds:CreateDBCluster"
           ],
           "Resource": "*"
       },
       {
           "Effect": "Allow",
           "Action": [
               "rds:AddTagsToResource"
           ],
           "Resource": "*",
           "Condition": {
               "StringNotEquals": {
                   "aws:RequestTag/environment": "prod"
               }
           }
       }
   ]
}
```

------

## Supported RDS API actions for tagging on creation
<a name="security_iam_id-based-policy-examples-supported-rds-api-actions-tagging-creation"></a>

The following RDS API actions support tagging when you create a resource. For these actions, you can specify tags when you create the resource:
+ `CreateBlueGreenDeployment`
+ `CreateCustomDBEngineVersion`
+ `CreateDBCluster`
+ `CreateDBClusterEndpoint`
+ `CreateDBClusterParameterGroup`
+ `CreateDBClusterSnapshot`
+ `CreateDBInstance`
+ `CreateDBInstanceReadReplica`
+ `CreateDBParameterGroup`
+ `CreateDBProxy`
+ `CreateDBProxyEndpoint`
+ `CreateDBSecurityGroup`
+ `CreateDBShardGroup`
+ `CreateDBSnapshot`
+ `CreateDBSubnetGroup`
+ `CreateEventSubscription`
+ `CreateGlobalCluster`
+ `CreateIntegration`
+ `CreateOptionGroup`
+ `CreateTenantDatabase`
+ `CopyDBClusterParameterGroup`
+ `CopyDBClusterSnapshot`
+ `CopyDBParameterGroup`
+ `CopyDBSnapshot`
+ `CopyOptionGroup`
+ `RestoreDBClusterFromS3`
+ `RestoreDBClusterFromSnapshot`
+ `RestoreDBClusterToPointInTime`
+ `RestoreDBInstanceFromDBSnapshot`
+ `RestoreDBInstanceFromS3`
+ `RestoreDBInstanceToPointInTime`
+ `PurchaseReservedDBInstancesOffering`

If you use the AWS CLI or the RDS API to create a resource, use the `Tags` parameter to apply tags during creation.

For these API actions, if tagging fails, the resource is not created and the request returns an error. This guarantees that a resource is either created with its intended tags or not created at all.
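For example, with the AWS SDK for Python (boto3), tags are passed as a list of `Key`/`Value` pairs. The following is a hedged sketch of the parameter shape; the service call itself is commented out, and the identifiers in it are placeholders:

```python
# The Tags parameter for RDS create operations is a list of
# {"Key": ..., "Value": ...} pairs, applied as part of creation.
tags = [
    {"Key": "environment", "Value": "production"},
    {"Key": "project", "Value": "webapp"},
]

# With boto3 (requires AWS credentials; identifiers are placeholders):
# import boto3
# rds = boto3.client("rds")
# rds.create_db_instance(
#     DBInstanceIdentifier="mydbinstance",
#     DBInstanceClass="db.t3.micro",
#     Engine="mysql",
#     AllocatedStorage=20,
#     MasterUsername="admin",
#     ManageMasterUserPassword=True,
#     Tags=tags,
# )

print(sorted(t["Key"] for t in tags))  # ['environment', 'project']
```

The AWS CLI equivalent uses the shorthand syntax, for example `--tags Key=environment,Value=production Key=project,Value=webapp`.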

# AWS managed policies for Amazon RDS
<a name="rds-security-iam-awsmanpol"></a>

To add permissions to permission sets and roles, it's easier to use AWS managed policies than to write policies yourself. It takes time and expertise to [create IAM customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html) that provide your team with only the permissions they need. To get started quickly, you can use our AWS managed policies. These policies cover common use cases and are available in your AWS account. For more information about AWS managed policies, see [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) in the *IAM User Guide*.

AWS services maintain and update AWS managed policies. You can't change the permissions in AWS managed policies. Services occasionally add additional permissions to an AWS managed policy to support new features. This type of update affects all identities (permission sets and roles) where the policy is attached. Services are most likely to update an AWS managed policy when a new feature is launched or when new operations become available. Services don't remove permissions from an AWS managed policy, so policy updates don't break your existing permissions.

Additionally, AWS supports managed policies for job functions that span multiple services. For example, the `ReadOnlyAccess` AWS managed policy provides read-only access to all AWS services and resources. When a service launches a new feature, AWS adds read-only permissions for new operations and resources. For a list and descriptions of job function policies, see [AWS managed policies for job functions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html) in the *IAM User Guide*.

**Topics**
+ [AWS managed policy: AmazonRDSReadOnlyAccess](#rds-security-iam-awsmanpol-AmazonRDSReadOnlyAccess)
+ [AWS managed policy: AmazonRDSFullAccess](#rds-security-iam-awsmanpol-AmazonRDSFullAccess)
+ [AWS managed policy: AmazonRDSDataFullAccess](#rds-security-iam-awsmanpol-AmazonRDSDataFullAccess)
+ [AWS managed policy: AmazonRDSEnhancedMonitoringRole](#rds-security-iam-awsmanpol-AmazonRDSEnhancedMonitoringRole)
+ [AWS managed policy: AmazonRDSPerformanceInsightsReadOnly](#rds-security-iam-awsmanpol-AmazonRDSPerformanceInsightsReadOnly)
+ [AWS managed policy: AmazonRDSPerformanceInsightsFullAccess](#rds-security-iam-awsmanpol-AmazonRDSPerformanceInsightsFullAccess)
+ [AWS managed policy: AmazonRDSDirectoryServiceAccess](#rds-security-iam-awsmanpol-AmazonRDSDirectoryServiceAccess)
+ [AWS managed policy: AmazonRDSServiceRolePolicy](#rds-security-iam-awsmanpol-AmazonRDSServiceRolePolicy)
+ [AWS managed policy: AmazonRDSCustomServiceRolePolicy](#rds-security-iam-awsmanpol-AmazonRDSCustomServiceRolePolicy)
+ [AWS managed policy: AmazonRDSCustomInstanceProfileRolePolicy](#rds-security-iam-awsmanpol-AmazonRDSCustomInstanceProfileRolePolicy)
+ [AWS managed policy: AmazonRDSPreviewServiceRolePolicy](#rds-security-iam-awsmanpol-AmazonRDSPreviewServiceRolePolicy)
+ [AWS managed policy: AmazonRDSBetaServiceRolePolicy](#rds-security-iam-awsmanpol-AmazonRDSBetaServiceRolePolicy)

## AWS managed policy: AmazonRDSReadOnlyAccess
<a name="rds-security-iam-awsmanpol-AmazonRDSReadOnlyAccess"></a>

This policy allows read-only access to Amazon RDS through the AWS Management Console.

**Permissions details**

This policy includes the following permissions:
+ `rds` – Allows principals to describe Amazon RDS resources and list the tags for Amazon RDS resources.
+ `cloudwatch` – Allows principals to get Amazon CloudWatch metric statistics.
+ `ec2` – Allows principals to describe Availability Zones and networking resources.
+ `logs` – Allows principals to describe CloudWatch Logs log streams of log groups, and get CloudWatch Logs log events.
+ `devops-guru` – Allows principals to describe resources that have Amazon DevOps Guru coverage, which is specified either by CloudFormation stack names or resource tags.

For more information about this policy, including the JSON policy document, see [AmazonRDSReadOnlyAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSReadOnlyAccess.html) in the *AWS Managed Policy Reference Guide*.

## AWS managed policy: AmazonRDSFullAccess
<a name="rds-security-iam-awsmanpol-AmazonRDSFullAccess"></a>

This policy provides full access to Amazon RDS through the AWS Management Console.

**Permissions details**

This policy includes the following permissions:
+ `rds` – Allows principals full access to Amazon RDS.
+ `application-autoscaling` – Allows principals to describe and manage Application Auto Scaling scaling targets and policies.
+ `cloudwatch` – Allows principals to get CloudWatch metric statistics and manage CloudWatch alarms.
+ `ec2` – Allows principals to describe Availability Zones and networking resources.
+ `logs` – Allows principals to describe CloudWatch Logs log streams of log groups, and get CloudWatch Logs log events.
+ `outposts` – Allows principals to get AWS Outposts instance types.
+ `pi` – Allows principals to get Performance Insights metrics.
+ `sns` – Allows principals to manage Amazon Simple Notification Service (Amazon SNS) subscriptions and topics, and to publish Amazon SNS messages.
+ `devops-guru` – Allows principals to describe resources that have Amazon DevOps Guru coverage, which is specified either by CloudFormation stack names or resource tags.

For more information about this policy, including the JSON policy document, see [AmazonRDSFullAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSFullAccess.html) in the *AWS Managed Policy Reference Guide*.

## AWS managed policy: AmazonRDSDataFullAccess
<a name="rds-security-iam-awsmanpol-AmazonRDSDataFullAccess"></a>

This policy allows full access to use the Data API and the query editor on Aurora Serverless clusters in a specific AWS account. It also allows the account to get the value of a secret from AWS Secrets Manager.

You can attach the `AmazonRDSDataFullAccess` policy to your IAM identities.
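AWS managed policies are attached by ARN, and their ARNs use the reserved `aws` account. The following is a hedged sketch; the boto3 call is commented out and the role name in it is a placeholder:

```python
# ARN of the AWS managed policy. AWS managed policies use the reserved
# "aws" account ID in the ARN rather than your account ID.
POLICY_ARN = "arn:aws:iam::aws:policy/AmazonRDSDataFullAccess"

# With boto3 (requires AWS credentials; the role name is a placeholder):
# import boto3
# boto3.client("iam").attach_role_policy(
#     RoleName="my-data-api-role",
#     PolicyArn=POLICY_ARN,
# )

print(POLICY_ARN.rsplit("/", 1)[-1])  # AmazonRDSDataFullAccess
```

The AWS CLI equivalent is `aws iam attach-role-policy --role-name my-data-api-role --policy-arn arn:aws:iam::aws:policy/AmazonRDSDataFullAccess` (again with a placeholder role name).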

**Permissions details**

This policy includes the following permissions:
+ `dbqms` – Allows principals to access, create, delete, describe, and update queries. The Database Query Metadata Service (`dbqms`) is an internal-only service. It provides your recent and saved queries for the query editor on the AWS Management Console for multiple AWS services, including Amazon RDS.
+ `rds-data` – Allows principals to run SQL statements on Aurora Serverless databases.
+ `secretsmanager` – Allows principals to get the value of a secret from AWS Secrets Manager.

For more information about this policy, including the JSON policy document, see [AmazonRDSDataFullAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSDataFullAccess.html) in the *AWS Managed Policy Reference Guide*.

## AWS managed policy: AmazonRDSEnhancedMonitoringRole
<a name="rds-security-iam-awsmanpol-AmazonRDSEnhancedMonitoringRole"></a>

This policy provides access to Amazon CloudWatch Logs for Amazon RDS Enhanced Monitoring.

**Permissions details**

This policy includes the following permissions:
+ `logs` – Allows principals to create CloudWatch Logs log groups and retention policies, and to create and describe CloudWatch Logs log streams of log groups. It also allows principals to put and get CloudWatch Logs log events.

For more information about this policy, including the JSON policy document, see [AmazonRDSEnhancedMonitoringRole](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSEnhancedMonitoringRole.html) in the *AWS Managed Policy Reference Guide*.

## AWS managed policy: AmazonRDSPerformanceInsightsReadOnly
<a name="rds-security-iam-awsmanpol-AmazonRDSPerformanceInsightsReadOnly"></a>

This policy provides read-only access to Amazon RDS Performance Insights for Amazon RDS DB instances and Amazon Aurora DB clusters.

This policy now includes `Sid` (statement ID) as an identifier for the policy statement. 

**Permissions details**

This policy includes the following permissions:
+ `rds` – Allows principals to describe Amazon RDS DB instances and Amazon Aurora DB clusters.
+ `pi` – Allows principals to make calls to the Amazon RDS Performance Insights API and access Performance Insights metrics.

For more information about this policy, including the JSON policy document, see [AmazonRDSPerformanceInsightsReadOnly](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSPerformanceInsightsReadOnly.html) in the *AWS Managed Policy Reference Guide*.

## AWS managed policy: AmazonRDSPerformanceInsightsFullAccess
<a name="rds-security-iam-awsmanpol-AmazonRDSPerformanceInsightsFullAccess"></a>

This policy provides full access to Amazon RDS Performance Insights for Amazon RDS DB instances and Amazon Aurora DB clusters.

This policy now includes `Sid` (statement ID) as an identifier for the policy statement. 

**Permissions details**

This policy includes the following permissions:
+ `rds` – Allows principals to describe Amazon RDS DB instances and Amazon Aurora DB clusters.
+ `pi` – Allows principals to make calls to the Amazon RDS Performance Insights API, and to create, view, and delete performance analysis reports.
+ `cloudwatch` – Allows principals to list all the Amazon CloudWatch metrics, and get metric data and statistics.

For more information about this policy, including the JSON policy document, see [AmazonRDSPerformanceInsightsFullAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSPerformanceInsightsFullAccess.html) in the *AWS Managed Policy Reference Guide*.

## AWS managed policy: AmazonRDSDirectoryServiceAccess
<a name="rds-security-iam-awsmanpol-AmazonRDSDirectoryServiceAccess"></a>

This policy allows Amazon RDS to make calls to the Directory Service.

**Permissions details**

This policy includes the following permission:
+ `ds` – Allows principals to describe Directory Service directories and control authorization to Directory Service directories.

For more information about this policy, including the JSON policy document, see [AmazonRDSDirectoryServiceAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSDirectoryServiceAccess.html) in the *AWS Managed Policy Reference Guide*.

## AWS managed policy: AmazonRDSServiceRolePolicy
<a name="rds-security-iam-awsmanpol-AmazonRDSServiceRolePolicy"></a>

You can't attach the `AmazonRDSServiceRolePolicy` policy to your IAM entities. This policy is attached to a service-linked role that allows Amazon RDS to perform actions on your behalf. For more information, see [Service-linked role permissions for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions).

## AWS managed policy: AmazonRDSCustomServiceRolePolicy
<a name="rds-security-iam-awsmanpol-AmazonRDSCustomServiceRolePolicy"></a>

You can't attach the `AmazonRDSCustomServiceRolePolicy` policy to your IAM entities. This policy is attached to a service-linked role that allows Amazon RDS to call AWS services on behalf of your RDS DB resources.

This policy includes the following permissions:
+ `ec2` ‐ Allows RDS Custom to perform backup operations on the DB instance that provide point-in-time restore capabilities.
+ `secretsmanager` ‐ Allows RDS Custom to manage DB instance-specific secrets created by RDS Custom.
+ `cloudwatch` ‐ Allows RDS Custom to upload DB instance metrics and logs to CloudWatch through the CloudWatch agent.
+ `events`, `sqs` ‐ Allows RDS Custom to send and receive status information about the DB instance.
+ `cloudtrail` ‐ Allows RDS Custom to receive change events about the DB instance.
+ `servicequotas` ‐ Allows RDS Custom to read service quotas related to the DB instance.
+ `ssm` ‐ Allows RDS Custom to manage the DB instance's underlying EC2 instance.
+ `rds` ‐ Allows RDS Custom to manage RDS resources for your DB instance.
+ `iam` ‐ Allows RDS Custom to validate and attach the instance profile to a DB instance's underlying EC2 instance.

For more information about this policy, including the JSON policy document, see [AmazonRDSCustomServiceRolePolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSCustomServiceRolePolicy.html) in the *AWS Managed Policy Reference Guide*.

## AWS managed policy: AmazonRDSCustomInstanceProfileRolePolicy
<a name="rds-security-iam-awsmanpol-AmazonRDSCustomInstanceProfileRolePolicy"></a>

You shouldn't attach `AmazonRDSCustomInstanceProfileRolePolicy` to your IAM entities. Attach it only to an instance profile role that grants your Amazon RDS Custom DB instance permission to perform automation actions and database management tasks. Pass the instance profile as the `custom-iam-instance-profile` parameter when you create the RDS Custom instance, and RDS Custom associates the instance profile with your DB instance.

**Permissions details**

This policy includes the following permissions:
+ `ssm`, `ssmmessages`, `ec2messages` ‐ Allows RDS Custom to communicate with, run automation on, and maintain agents on the DB instance through Systems Manager.
+ `ec2`, `s3` ‐ Allows RDS Custom to perform backup operations on the DB instance that provide point-in-time restore capabilities.
+ `secretsmanager` ‐ Allows RDS Custom to manage DB instance-specific secrets created by RDS Custom.
+ `cloudwatch`, `logs` ‐ Allows RDS Custom to upload DB instance metrics and logs to CloudWatch through the CloudWatch agent.
+ `events`, `sqs` ‐ Allows RDS Custom to send and receive status information about the DB instance.
+ `kms` ‐ Allows RDS Custom to use an instance-specific KMS key to encrypt the secrets and S3 objects that RDS Custom manages.

For more information about this policy, including the JSON policy document, see [AmazonRDSCustomInstanceProfileRolePolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSCustomInstanceProfileRolePolicy.html) in the *AWS Managed Policy Reference Guide*.

## AWS managed policy: AmazonRDSPreviewServiceRolePolicy
<a name="rds-security-iam-awsmanpol-AmazonRDSPreviewServiceRolePolicy"></a>

You shouldn't attach `AmazonRDSPreviewServiceRolePolicy` to your IAM entities. This policy is attached to a service-linked role that allows Amazon RDS to call AWS services on behalf of your RDS DB resources. For more information, see [Service-linked role for Amazon RDS Preview](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-rdspreview). 

**Permissions details**

This policy includes the following permissions:
+ `ec2` ‐ Allows principals to describe Availability Zones and networking resources.
+ `secretsmanager` – Allows principals to get the value of a secret from AWS Secrets Manager.
+ `cloudwatch`, `logs` ‐ Allows Amazon RDS to upload DB instance metrics and logs to CloudWatch through CloudWatch agent.

For more information about this policy, including the JSON policy document, see [AmazonRDSPreviewServiceRolePolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSPreviewServiceRolePolicy.html) in the *AWS Managed Policy Reference Guide*.

## AWS managed policy: AmazonRDSBetaServiceRolePolicy
<a name="rds-security-iam-awsmanpol-AmazonRDSBetaServiceRolePolicy"></a>

You shouldn't attach `AmazonRDSBetaServiceRolePolicy` to your IAM entities. This policy is attached to a service-linked role that allows Amazon RDS to call AWS services on behalf of your RDS DB resources. For more information, see [Service-linked role permissions for Amazon RDS Beta](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-rdsbeta).

**Permissions details**

This policy includes the following permissions:
+ `ec2` ‐ Allows Amazon RDS to perform backup operations on the DB instance that provide point-in-time restore capabilities.
+ `secretsmanager` ‐ Allows Amazon RDS to manage DB instance-specific secrets created by Amazon RDS.
+ `cloudwatch`, `logs` ‐ Allows Amazon RDS to upload DB instance metrics and logs to CloudWatch through the CloudWatch agent.

For more information about this policy, including the JSON policy document, see [AmazonRDSBetaServiceRolePolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSBetaServiceRolePolicy.html) in the *AWS Managed Policy Reference Guide*.

# Amazon RDS updates to AWS managed policies
<a name="rds-manpol-updates"></a>

View details about updates to AWS managed policies for Amazon RDS since this service began tracking these changes. For automatic alerts about changes to this page, subscribe to the RSS feed on the Amazon RDS [Document history](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/WhatsNew.html) page.




| Change | Description | Date | 
| --- | --- | --- | 
| [Service-linked role permissions for Amazon RDS Custom](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-custom) – Update to existing policy |  Amazon RDS updated permissions on the `AmazonRDSCustomServiceRolePolicy` of the `AWSServiceRoleForRDSCustom` service-linked role. The update removes `ec2:CopySnapshot` from one statement and adds two new statements for source and destination snapshot permissions. These updates comply with a [ Change in EBS CopySnapshot authorization behavior](https://aws.amazon.com/blogs/storage/enhancing-resource-level-permissions-for-copying-amazon-ebs-snapshots/) while keeping effective permissions unchanged. For more information, see [Service-linked role permissions for Amazon RDS Custom](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-custom).  | August 7, 2025 | 
| [Service-linked role permissions for Amazon RDS Custom](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-custom) – Update to existing policy |  Amazon RDS added new permissions to the `AmazonRDSCustomServiceRolePolicy` of the `AWSServiceRoleForRDSCustom` service-linked role. These permissions allow RDS Custom to manage EC2 key-pairs and allow RDS Custom to integrate with Amazon SQS. For more information, see [Service-linked role permissions for Amazon RDS Custom](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-custom).  | March 25, 2025 | 
|  [AWS managed policy: AmazonRDSCustomInstanceProfileRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSCustomInstanceProfileRolePolicy) – Update to existing policy |  Amazon RDS added new permissions to the managed policy `AmazonRDSCustomInstanceProfileRolePolicy` to allow the usage of RDS Custom managed secrets on an RDS Custom instance. For more information, see [AWS managed policy: AmazonRDSCustomInstanceProfileRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSCustomInstanceProfileRolePolicy).  | March 20, 2025 | 
| [Service-linked role permissions for Amazon RDS Custom](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-custom) – Update to existing policy |  Amazon RDS added new permissions to the `AmazonRDSCustomServiceRolePolicy` of the `AWSServiceRoleForRDSCustom` service-linked role. These new permissions allow RDS Custom to list and restore Secrets Manager secrets. For more information, see [Service-linked role permissions for Amazon RDS Custom](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-custom).  | March 6, 2025 | 
| [AWS managed policy: AmazonRDSPreviewServiceRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSPreviewServiceRolePolicy) – Update to existing policy |  Amazon RDS removed `sns:Publish` permission from the `AmazonRDSPreviewServiceRolePolicy` of the `AWSServiceRoleForRDSPreview` service-linked role. For more information, see [AWS managed policy: AmazonRDSPreviewServiceRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSPreviewServiceRolePolicy). | August 7, 2024 | 
| [AWS managed policy: AmazonRDSBetaServiceRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSBetaServiceRolePolicy) – Update to existing policy |  Amazon RDS removed `sns:Publish` permission from the `AmazonRDSBetaServiceRolePolicy` of the `AWSServiceRoleForRDSBeta` service-linked role. For more information, see [AWS managed policy: AmazonRDSBetaServiceRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSBetaServiceRolePolicy).  | August 7, 2024 | 
| [Service-linked role permissions for Amazon RDS Custom](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-custom) – Update to existing policy |  Amazon RDS added new permissions to the `AmazonRDSCustomServiceRolePolicy` of the `AWSServiceRoleForRDSCustom` service-linked role. The permissions allow RDS Custom to communicate with Amazon RDS services in another AWS Region and copy EC2 images. For more information, see [Service-linked role permissions for Amazon RDS Custom](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-custom).  | July 18, 2024 | 
| [AWS managed policy: AmazonRDSServiceRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSServiceRolePolicy) – Update to existing policy |  Amazon RDS removed the `sns:Publish` permission from the `AmazonRDSServiceRolePolicy` of the `AWSServiceRoleForRDS` service-linked role. For more information, see [AWS managed policy: AmazonRDSServiceRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSServiceRolePolicy).  | July 2, 2024 | 
| [Service-linked role permissions for Amazon RDS Custom](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-custom) – Update to existing policy |  Amazon RDS added a new permission to the `AmazonRDSCustomServiceRolePolicy` of the `AWSServiceRoleForRDSCustom` service-linked role. This permission allows RDS Custom to associate a service role as an instance profile with an RDS Custom instance. For more information, see [Service-linked role permissions for Amazon RDS Custom](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-custom).  | April 19, 2024 | 
| [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md) – Update to existing policy |  Amazon RDS added a new permission to the `AmazonRDSCustomServiceRolePolicy` of the `AWSServiceRoleForRDSCustom` service-linked role to allow RDS Custom for SQL Server to modify the underlying database host instance type. RDS also added the `ec2:DescribeInstanceTypes` permission to get instance type information for database host. For more information, see [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md).  | April 8, 2024 | 
|  [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md) – New policy  | Amazon RDS added a new managed policy named AmazonRDSCustomInstanceProfileRolePolicy to allow RDS Custom to perform automation actions and database management tasks through an EC2 instance profile. For more information, see [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md). | February 27, 2024 | 
|  [Service-linked role permissions for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy | Amazon RDS added new statement IDs to the `AmazonRDSServiceRolePolicy` of the `AWSServiceRoleForRDS` service-linked role. For more information, see [Service-linked role permissions for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions).  |  January 19, 2024  | 
|  [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md) – Update to existing policies  |  The `AmazonRDSPerformanceInsightsReadOnly` and `AmazonRDSPerformanceInsightsFullAccess` managed policies now include `Sid` (statement ID) as an identifier in the policy statement. For more information, see [AWS managed policy: AmazonRDSPerformanceInsightsReadOnly](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSPerformanceInsightsReadOnly) and [AWS managed policy: AmazonRDSPerformanceInsightsFullAccess](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSPerformanceInsightsFullAccess).  |  October 23, 2023  | 
|  [Service-linked role permissions for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy  |  Amazon RDS added new permissions to the `AmazonRDSCustomServiceRolePolicy` of the `AWSServiceRoleForRDSCustom` service-linked role. These new permissions allow RDS Custom for Oracle to create, modify, and delete EventBridge Managed Rules. For more information, see [Service-linked role permissions for Amazon RDS Custom](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-custom).  |  September 20, 2023  | 
|  [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md) – Update to existing policy  |  Amazon RDS added new permissions to the `AmazonRDSFullAccess` managed policy. The permissions allow you to generate, view, and delete the performance analysis report for a time period. For more information about configuring access policies for Performance Insights, see [Configuring access policies for Performance Insights](USER_PerfInsights.access-control.md).  |  August 17, 2023  | 
|  [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md) – New policy and update to existing policy  |  Amazon RDS added new permissions to the `AmazonRDSPerformanceInsightsReadOnly` managed policy and added a new managed policy named `AmazonRDSPerformanceInsightsFullAccess`. These permissions allow you to analyze Performance Insights for a time period, view the analysis results along with the recommendations, and delete the reports. For more information about configuring access policies for Performance Insights, see [Configuring access policies for Performance Insights](USER_PerfInsights.access-control.md).  |  August 16, 2023  | 
|  [Service-linked role permissions for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy  |  Amazon RDS added new permissions to the `AmazonRDSCustomServiceRolePolicy` of the `AWSServiceRoleForRDSCustom` service-linked role. These new permissions allow RDS Custom for Oracle to use DB snapshots. For more information, see [Service-linked role permissions for Amazon RDS Custom](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-custom).  |  June 23, 2023  | 
|  [Service-linked role permissions for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy  |  Amazon RDS added new permissions to the `AmazonRDSCustomServiceRolePolicy` of the `AWSServiceRoleForRDSCustom` service-linked role. These new permissions allow RDS Custom to create network interfaces. For more information, see [Service-linked role permissions for Amazon RDS Custom](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-custom).  |  May 30, 2023  | 
|  [Service-linked role permissions for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy  |  Amazon RDS added new permissions to the `AmazonRDSCustomServiceRolePolicy` of the `AWSServiceRoleForRDSCustom` service-linked role. These new permissions allow RDS Custom to call Amazon EBS to check the storage quota. For more information, see [Service-linked role permissions for Amazon RDS Custom](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-custom).  |  April 18, 2023  | 
|  [Service-linked role permissions for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy  |  Amazon RDS Custom added new permissions to the `AmazonRDSCustomServiceRolePolicy` of the `AWSServiceRoleForRDSCustom` service-linked role for integration with Amazon SQS. RDS Custom requires integration with Amazon SQS to create and manage SQS queues in the customer account. The SQS queue names follow the format `do-not-delete-rds-custom-[identifier]` and are tagged with `Amazon RDS Custom`. The permission for `ec2:CreateSnapshot` was also added to allow RDS Custom to create backups for volumes attached to the instance. For more information, see [Service-linked role permissions for Amazon RDS Custom](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-custom).  |  April 6, 2023  | 
|  [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md) – Update to an existing policy  |  Amazon RDS added a new Amazon CloudWatch namespace `ListMetrics` to `AmazonRDSFullAccess` and `AmazonRDSReadOnlyAccess`. This namespace is required for Amazon RDS to list specific resource usage metrics. For more information, see [Overview of managing access permissions to your CloudWatch resources](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-access-control-overview-cw.html) in the *Amazon CloudWatch User Guide*.  |  April 4, 2023  | 
|  [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md) – Update to an existing policy  |  Amazon RDS added a new permission to the `AmazonRDSFullAccess` and `AmazonRDSReadOnlyAccess` managed policies to allow the display of Amazon DevOps Guru findings in the RDS console. For more information, see [Amazon RDS updates to AWS managed policies](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-manpol-updates.html).  |  March 30, 2023  | 
|  [Service-linked role permissions for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy  |  Amazon RDS added new permissions to the `AmazonRDSServiceRolePolicy` of the `AWSServiceRoleForRDS` service-linked role for integration with AWS Secrets Manager. RDS requires integration with Secrets Manager for managing master user passwords in Secrets Manager. The secret uses a reserved naming convention and restricts customer updates. For more information, see [Password management with Amazon RDS and AWS Secrets Manager](rds-secrets-manager.md).  |  December 22, 2022  | 
|  [Service-linked role permissions for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy  |  Amazon RDS added new permissions to the `AmazonRDSCustomServiceRolePolicy` of the `AWSServiceRoleForRDSCustom` service-linked role. RDS Custom supports DB clusters. These new permissions in the policy allow RDS Custom to call AWS services on behalf of your DB clusters. For more information, see [Service-linked role permissions for Amazon RDS Custom](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-custom).  |  November 9, 2022  | 
|  [Service-linked role permissions for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy  |  Amazon RDS added new permissions to the `AWSServiceRoleForRDS` service-linked role for integration with AWS Secrets Manager. Integration with Secrets Manager is required for SQL Server Reporting Services (SSRS) Email to function on RDS. SSRS Email creates a secret on behalf of the customer. The secret uses a reserved naming convention and restricts customer updates. For more information, see [Using SSRS Email to send reports](SSRS.Email.md).  |  August 26, 2022  | 
|  [Service-linked role permissions for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy  |  Amazon RDS added a new Amazon CloudWatch namespace to `AmazonRDSPreviewServiceRolePolicy` for `PutMetricData`. This namespace is required for Amazon RDS to publish resource usage metrics. For more information, see [Using condition keys to limit access to CloudWatch namespaces](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-cw-condition-keys-namespace.html) in the *Amazon CloudWatch User Guide*.  |  June 7, 2022  | 
|  [Service-linked role permissions for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy  |  Amazon RDS added a new Amazon CloudWatch namespace to `AmazonRDSBetaServiceRolePolicy` for `PutMetricData`. This namespace is required for Amazon RDS to publish resource usage metrics. For more information, see [Using condition keys to limit access to CloudWatch namespaces](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-cw-condition-keys-namespace.html) in the *Amazon CloudWatch User Guide*.  |  June 7, 2022  | 
|  [Service-linked role permissions for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy  |  Amazon RDS added a new Amazon CloudWatch namespace to `AWSServiceRoleForRDS` for `PutMetricData`. This namespace is required for Amazon RDS to publish resource usage metrics. For more information, see [Using condition keys to limit access to CloudWatch namespaces](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-cw-condition-keys-namespace.html) in the *Amazon CloudWatch User Guide*.  |  April 22, 2022  | 
|  [Service-linked role permissions for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy  |  Amazon RDS added new permissions to the `AWSServiceRoleForRDS` service-linked role to manage permissions for customer-owned IP pools and local gateway route tables (LGW-RTBs). These permissions are required for RDS on Outposts to perform Multi-AZ replication across the Outposts’ local network. For more information, see [Working with Multi-AZ deployments for Amazon RDS on AWS Outposts](rds-on-outposts.maz.md).  |  April 19, 2022  | 
|  [Identity-based policies](UsingWithRDS.IAM.md#security_iam_access-manage-id-based-policies) – Update to an existing policy  |  Amazon RDS added a new permission to the `AmazonRDSFullAccess` managed policy to describe permissions on LGW-RTBs. This permission is required to describe permissions for RDS on Outposts to perform Multi-AZ replication across the Outposts’ local network. For more information, see [Working with Multi-AZ deployments for Amazon RDS on AWS Outposts](rds-on-outposts.maz.md).  |  April 19, 2022  | 
|  [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md) – New policy  |  Amazon RDS added a new managed policy named `AmazonRDSPerformanceInsightsReadOnly` that grants read-only access to Amazon RDS Performance Insights. For more information about configuring access policies for Performance Insights, see [Configuring access policies for Performance Insights](USER_PerfInsights.access-control.md).  |  March 10, 2022  | 
|  [Service-linked role permissions for Amazon RDS](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy  |  Amazon RDS added new Amazon CloudWatch namespaces to `AWSServiceRoleForRDS` for `PutMetricData`. These namespaces are required for Amazon DocumentDB (with MongoDB compatibility) and Amazon Neptune to publish CloudWatch metrics. For more information, see [Using condition keys to limit access to CloudWatch namespaces](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-cw-condition-keys-namespace.html) in the *Amazon CloudWatch User Guide*.  |  March 4, 2022  | 
|  [Service-linked role permissions for Amazon RDS Custom](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-custom) – New policy  |  Amazon RDS added a new service-linked role named `AWSServiceRoleForRDSCustom` to allow RDS Custom to call AWS services on behalf of your DB instances.  |  October 26, 2021  | 
|  Amazon RDS started tracking changes  |  Amazon RDS started tracking changes for its AWS managed policies.  |  October 26, 2021  | 

# Preventing cross-service confused deputy problems
<a name="cross-service-confused-deputy-prevention"></a>

The *confused deputy problem* is a security issue where an entity that doesn't have permission to perform an action can coerce a more-privileged entity to perform the action. In AWS, cross-service impersonation can result in the confused deputy problem. 

Cross-service impersonation can occur when one service (the *calling service*) calls another service (the *called service*). The calling service can be manipulated to use its permissions to act on another customer's resources in a way that it shouldn't have permission to access. To prevent this, AWS provides tools that can help you protect your data for all services with service principals that have been given access to resources in your account. For more information, see [The confused deputy problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html) in the *IAM User Guide*.

To limit the permissions that Amazon RDS gives another service for a specific resource, we recommend using the [`aws:SourceArn`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn) and [`aws:SourceAccount`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceaccount) global condition context keys in resource policies. 

In some cases, the `aws:SourceArn` value doesn't contain the account ID, for example when you use the Amazon Resource Name (ARN) for an Amazon S3 bucket. In these cases, make sure to use both global condition context keys to limit permissions. If you use both keys and the `aws:SourceArn` value does contain the account ID, make sure that the `aws:SourceAccount` value and the account in the `aws:SourceArn` value use the same account ID in the same policy statement. If you want only one resource to be associated with the cross-service access, use `aws:SourceArn`. If you want to allow any resource in the specified AWS account to be associated with the cross-service access, use `aws:SourceAccount`.

Make sure that the value of `aws:SourceArn` is an ARN for an Amazon RDS resource type. For more information, see [Amazon Resource Names (ARNs) in Amazon RDS](USER_Tagging.ARN.md).

The most effective way to protect against the confused deputy problem is to use the `aws:SourceArn` global condition context key with the full ARN of the resource. In some cases, you might not know the full ARN of the resource or you might be specifying multiple resources. In these cases, use the `aws:SourceArn` global context condition key with wildcards (`*`) for the unknown portions of the ARN. An example is `arn:aws:rds:*:123456789012:*`. 
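
IAM evaluates `ArnLike` patterns segment by segment, but shell-style wildcard matching gives a close intuition for how a partially wildcarded ARN behaves. The following stdlib sketch is only a loose approximation, not IAM's actual evaluator, and the instance ARNs are hypothetical:

```python
from fnmatch import fnmatchcase

# Partially wildcarded source ARN, as described above: the account ID is
# known, but the Region and resource portions are left as wildcards.
pattern = "arn:aws:rds:*:123456789012:*"

# A resource in the expected account matches the pattern.
assert fnmatchcase("arn:aws:rds:us-east-1:123456789012:db:mydbinstance", pattern)

# A resource in a different account does not.
assert not fnmatchcase("arn:aws:rds:us-east-1:999999999999:db:mydbinstance", pattern)
```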

The following example shows how you can use the `aws:SourceArn` and `aws:SourceAccount` global condition context keys in Amazon RDS to prevent the confused deputy problem.

------
#### [ JSON ]

****  

```
{
  "Version": "2012-10-17",
  "Statement": {
    "Sid": "ConfusedDeputyPreventionExamplePolicy",
    "Effect": "Allow",
    "Principal": {
      "Service": "rds.amazonaws.com"
    },
    "Action": "sts:AssumeRole",
    "Condition": {
      "ArnLike": {
        "aws:SourceArn": "arn:aws:rds:us-east-1:123456789012:db:mydbinstance"
      },
      "StringEquals": {
        "aws:SourceAccount": "123456789012"
      }
    }
  }
}
```

------

For more examples of policies that use the `aws:SourceArn` and `aws:SourceAccount` global condition context keys, see the following sections:
+ [Granting permissions to publish notifications to an Amazon SNS topic](USER_Events.GrantingPermissions.md)
+ [Manually creating an IAM role for native backup and restore](SQLServer.Procedural.Importing.Native.Enabling.md#SQLServer.Procedural.Importing.Native.Enabling.IAM)
+ [Setting up Windows Authentication for SQL Server DB instances](USER_SQLServerWinAuth.SettingUp.md)
+ [Prerequisites for integrating RDS for SQL Server with S3](Appendix.SQLServer.Options.S3-integration.preparing.md)
+ [Manually creating an IAM role for SQL Server Audit](Appendix.SQLServer.Options.Audit.IAM.md)
+ [Configuring IAM permissions for RDS for Oracle integration with Amazon S3](oracle-s3-integration.preparing.md)
+ [Setting up access to an Amazon S3 bucket](USER_PostgreSQL.S3Import.AccessPermission.md) (PostgreSQL import)
+ [Setting up access to an Amazon S3 bucket](postgresql-s3-export-access-bucket.md) (PostgreSQL export)

# IAM database authentication for MariaDB, MySQL, and PostgreSQL
<a name="UsingWithRDS.IAMDBAuth"></a>

You can authenticate to your DB instance using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works with MariaDB, MySQL, and PostgreSQL. With this authentication method, you don't need to use a password when you connect to a DB instance. Instead, you use an authentication token.

An *authentication token* is a unique string of characters that Amazon RDS generates on request. Authentication tokens are generated using AWS Signature Version 4. Each token has a lifetime of 15 minutes. You don't need to store user credentials in the database, because authentication is managed externally using IAM. You can also still use standard database authentication. The token is only used for authentication and doesn't affect the session after it is established.
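
Concretely, a token has the general shape of a Signature Version 4 presigned request: the instance endpoint plus a query string, with the `https://` scheme stripped. The token below is illustrative only (made-up host, user, and truncated signature); the sketch just shows that the 15-minute lifetime travels in the `X-Amz-Expires` parameter:

```python
from urllib.parse import parse_qs, urlsplit

# Illustrative token; a real one comes from generate-db-auth-token and
# carries a full SigV4 credential scope and signature.
token = (
    "mydb.123456789012.us-east-1.rds.amazonaws.com:3306/"
    "?Action=connect&DBUser=db_user"
    "&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=900"
    "&X-Amz-Signature=abcdef0123456789"
)

# The token is a URL without a scheme; prepend one so urlsplit can parse it.
query = parse_qs(urlsplit("https://" + token).query)
lifetime_seconds = int(query["X-Amz-Expires"][0])

assert lifetime_seconds == 900  # 15 minutes
assert query["DBUser"] == ["db_user"]
```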

IAM database authentication provides the following benefits:
+ Network traffic to and from the database is encrypted using Secure Sockets Layer (SSL) or Transport Layer Security (TLS). For more information about using SSL/TLS with Amazon RDS, see [Using SSL/TLS to encrypt a connection to a DB instance or cluster ](UsingWithRDS.SSL.md).
+ You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance.
+ For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security.

In general, consider using IAM database authentication when your applications create fewer than 200 connections per second, and you don't want to manage usernames and passwords directly in your application code.

The Amazon Web Services (AWS) JDBC Driver supports IAM database authentication. For more information, see [AWS IAM Authentication Plugin](https://github.com/aws/aws-advanced-jdbc-wrapper/blob/main/docs/using-the-jdbc-driver/using-plugins/UsingTheIamAuthenticationPlugin.md) in the [Amazon Web Services (AWS) JDBC Driver GitHub repository](https://github.com/aws/aws-advanced-jdbc-wrapper).

The Amazon Web Services (AWS) Python Driver supports IAM database authentication. For more information, see [AWS IAM Authentication Plugin](https://github.com/aws/aws-advanced-python-wrapper/blob/main/docs/using-the-python-driver/using-plugins/UsingTheIamAuthenticationPlugin.md) in the [Amazon Web Services (AWS) Python Driver GitHub repository](https://github.com/aws/aws-advanced-python-wrapper).

The following topics describe how to set up IAM database authentication:
+ [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md)
+ [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md)
+ [Creating a database account using IAM authentication](UsingWithRDS.IAMDBAuth.DBAccounts.md)
+ [Connecting to your DB instance using IAM authentication](UsingWithRDS.IAMDBAuth.Connecting.md) 

## Region and version availability
<a name="UsingWithRDS.IAMDBAuth.Availability"></a>

Feature availability and support varies across specific versions of each database engine. For more information on engine, version, and Region availability with Amazon RDS and IAM database authentication, see [Supported Regions and DB engines for IAM database authentication in Amazon RDS](Concepts.RDS_Fea_Regions_DB-eng.Feature.IamDatabaseAuthentication.md).

## CLI and SDK support
<a name="UsingWithRDS.IAMDBAuth.cli-sdk"></a>

IAM database authentication is available for the [AWS CLI](https://docs.aws.amazon.com/cli/latest/reference/rds/generate-db-auth-token.html) and for the following language-specific AWS SDKs:
+ [AWS SDK for .NET](https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/RDS/TRDSAuthTokenGenerator.html)
+ [AWS SDK for C++](https://docs.aws.amazon.com/sdk-for-cpp/latest/api/class_aws_1_1_r_d_s_1_1_r_d_s_client.html#ae134ffffed5d7672f6156d324e7bd392)
+ [AWS SDK for Go](https://docs.aws.amazon.com/sdk-for-go/api/service/rds/#pkg-overview)
+ [AWS SDK for Java](https://docs.aws.amazon.com/sdk-for-java/latest/reference/software/amazon/awssdk/services/rds/RdsUtilities.html)
+ [AWS SDK for JavaScript](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/modules/_aws_sdk_rds_signer.html)
+ [AWS SDK for PHP](https://docs.aws.amazon.com/aws-sdk-php/v3/api/class-Aws.Rds.AuthTokenGenerator.html)
+ [AWS SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html#RDS.Client.generate_db_auth_token)
+ [AWS SDK for Ruby](https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/RDS/AuthTokenGenerator.html)

## Limitations for IAM database authentication
<a name="UsingWithRDS.IAMDBAuth.Limitations"></a>

When using IAM database authentication, the following limitations apply:
+ Currently, IAM database authentication doesn't support all global condition context keys.

  For more information about global condition context keys, see [AWS global condition context keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html) in the *IAM User Guide*.
+ For PostgreSQL, if the IAM role (`rds_iam`) is added to a user (including the RDS master user), IAM authentication takes precedence over password authentication, so the user must log in as an IAM user.
+ For PostgreSQL, Amazon RDS does not support enabling both IAM and Kerberos authentication methods at the same time.
+ For PostgreSQL, you can't use IAM authentication to establish a replication connection.
+ You cannot use a custom Route 53 DNS record instead of the DB instance endpoint to generate the authentication token.
+ CloudWatch and CloudTrail don't log IAM authentication. These services don't track the `generate-db-auth-token` API calls that authorize database connections.
+ IAM DB authentication requires compute resources on the database instance. You must have between 300 and 1000 MiB extra memory on your database for reliable connectivity. To see the memory needed for your workload, compare the RES column for RDS processes in the Enhanced Monitoring processlist before and after enabling IAM DB authentication. See [Viewing OS metrics in the RDS console](USER_Monitoring.OS.Viewing.md).

  If you are using a burstable class instance, avoid running out of memory by reducing the memory used by other parameters like buffers and cache by the same amount.
+ IAM DB authentication is not supported for RDS on Outposts for any engine.

## Recommendations for IAM database authentication
<a name="UsingWithRDS.IAMDBAuth.ConnectionsPerSecond"></a>

We recommend the following when using IAM database authentication:
+ Use IAM database authentication when your application requires fewer than 200 new IAM database authentication connections per second.

  The database engines that work with Amazon RDS don't impose any limits on authentication attempts per second. However, when you use IAM database authentication, your application must generate an authentication token. Your application then uses that token to connect to the DB instance. If you exceed the limit of maximum new connections per second, then the extra overhead of IAM database authentication can cause connection throttling. 

  Consider using connection pooling in your applications to mitigate constant connection creation. This can reduce the overhead from IAM DB authentication and allow your applications to reuse existing connections. Alternatively, consider using RDS Proxy for these use cases. RDS Proxy has additional costs. See [RDS Proxy pricing](https://aws.amazon.com/rds/proxy/pricing/).
+ The size of an IAM database authentication token depends on many factors, including the number of IAM tags, the IAM service policies, the ARN lengths, and other IAM and database properties. The token is generally at least 1 KB but can be larger. Because the token is used as the password in the database connection string, make sure that your database driver (for example, an ODBC driver) and any tools don't limit or truncate the token because of its size. A truncated token fails the authentication validation performed by the database and IAM.
+ If you are using temporary credentials when creating an IAM database authentication token, the temporary credentials must still be valid when using the IAM database authentication token to make a connection request.

## Unsupported AWS global condition context keys
<a name="UsingWithRDS.IAMDBAuth.GlobalContextKeys"></a>

 IAM database authentication does not support the following subset of AWS global condition context keys. 
+ `aws:Referer`
+ `aws:SourceIp`
+ `aws:SourceVpc`
+ `aws:SourceVpce`
+ `aws:UserAgent`
+ `aws:VpcSourceIp`

For more information, see [AWS global condition context keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html) in the *IAM User Guide*. 
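
For example, a statement like the following, which conditions `rds-db:connect` on `aws:SourceIp`, doesn't restrict connections the way it would for other AWS actions, because that key isn't supported here (the resource ARN is illustrative):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "rds-db:connect",
            "Resource": "arn:aws:rds-db:us-east-2:111122223333:dbuser:db-ABCDEFGHIJKL01234/db_user",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "203.0.113.0/24"
                }
            }
        }
    ]
}
```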

# Enabling and disabling IAM database authentication
<a name="UsingWithRDS.IAMDBAuth.Enabling"></a>

By default, IAM database authentication is disabled on DB instances. You can enable or disable IAM database authentication using the AWS Management Console, AWS CLI, or the API.

You can enable IAM database authentication when you perform one of the following actions:
+ To create a new DB instance with IAM database authentication enabled, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).
+ To modify a DB instance to enable IAM database authentication, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).
+ To restore a DB instance from a snapshot with IAM database authentication enabled, see [Restoring to a DB instance](USER_RestoreFromSnapshot.md).
+ To restore a DB instance to a point in time with IAM database authentication enabled, see [Restoring a DB instance to a specified time for Amazon RDS](USER_PIT.md).

## Console
<a name="UsingWithRDS.IAMDBAuth.Enabling.Console"></a>

Each creation or modification workflow has a **Database authentication** section, where you can enable or disable IAM database authentication. In that section, choose **Password and IAM database authentication** to enable IAM database authentication.

**To enable or disable IAM database authentication for an existing DB instance**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the DB instance that you want to modify.
**Note**  
 Make sure that the DB instance is compatible with IAM authentication. Check the compatibility requirements in [Region and version availability](UsingWithRDS.IAMDBAuth.md#UsingWithRDS.IAMDBAuth.Availability).

1. Choose **Modify**.

1. In the **Database authentication** section, choose **Password and IAM database authentication** to enable IAM database authentication. Choose **Password authentication** or **Password and Kerberos authentication** to disable IAM authentication.

1. You can also choose to enable publishing IAM DB authentication logs to CloudWatch Logs. Under **Log exports**, choose the **iam-db-auth-error log** option. Publishing your logs to CloudWatch Logs consumes storage and you incur charges for that storage. Be sure to delete any CloudWatch Logs that you no longer need.

1. Choose **Continue**.

1. To apply the changes immediately, choose **Immediately** in the **Scheduling of modifications** section.

1. Choose **Modify DB instance**.

## AWS CLI
<a name="UsingWithRDS.IAMDBAuth.Enabling.CLI"></a>

To create a new DB instance with IAM authentication by using the AWS CLI, use the [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) command. Specify the `--enable-iam-database-authentication` option, as shown in the following example.

```
aws rds create-db-instance \
    --db-instance-identifier mydbinstance \
    --db-instance-class db.m3.medium \
    --engine MySQL \
    --allocated-storage 20 \
    --master-username masterawsuser \
    --manage-master-user-password \
    --enable-iam-database-authentication
```

To update an existing DB instance to enable or disable IAM authentication, use the [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) AWS CLI command. Specify either the `--enable-iam-database-authentication` or `--no-enable-iam-database-authentication` option, as appropriate.

**Note**  
 Make sure that the DB instance is compatible with IAM authentication. Check the compatibility requirements in [Region and version availability](UsingWithRDS.IAMDBAuth.md#UsingWithRDS.IAMDBAuth.Availability).

By default, Amazon RDS performs the modification during the next maintenance window. If you want to override this and enable IAM DB authentication as soon as possible, use the `--apply-immediately` parameter. 

The following example shows how to immediately enable IAM authentication for an existing DB instance.

```
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --apply-immediately \
    --enable-iam-database-authentication
```

If you are restoring a DB instance, use one of the following AWS CLI commands:
+ [restore-db-instance-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-to-point-in-time.html)
+ [restore-db-instance-from-db-snapshot](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-instance-from-db-snapshot.html)

The IAM database authentication setting defaults to that of the source snapshot. To change this setting, set the `--enable-iam-database-authentication` or `--no-enable-iam-database-authentication` option, as appropriate.

## RDS API
<a name="UsingWithRDS.IAMDBAuth.Enabling.API"></a>

To create a new DB instance with IAM authentication by using the API, use the API operation [CreateDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html). Set the `EnableIAMDatabaseAuthentication` parameter to `true`.

To update an existing DB instance to enable or disable IAM authentication, use the API operation [ModifyDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBInstance.html). Set the `EnableIAMDatabaseAuthentication` parameter to `true` to enable IAM authentication, or `false` to disable it.

**Note**  
 Make sure that the DB instance is compatible with IAM authentication. Check the compatibility requirements in [Region and version availability](UsingWithRDS.IAMDBAuth.md#UsingWithRDS.IAMDBAuth.Availability).

If you are restoring a DB instance, use one of the following API operations:
+ [RestoreDBInstanceFromDBSnapshot](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceFromDBSnapshot.html)
+ [RestoreDBInstanceToPointInTime](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBInstanceToPointInTime.html)

The IAM database authentication setting defaults to that of the source snapshot. To change this setting, set the `EnableIAMDatabaseAuthentication` parameter to `true` to enable IAM authentication, or `false` to disable it.

# Creating and using an IAM policy for IAM database access
<a name="UsingWithRDS.IAMDBAuth.IAMPolicy"></a>

To allow a user or role to connect to your DB instance, you must create an IAM policy. After that, you attach the policy to a permission set or role.

**Note**  
To learn more about IAM policies, see [Identity and access management for Amazon RDS](UsingWithRDS.IAM.md).

The following example policy allows a user to connect to a DB instance using IAM database authentication.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds-db:connect"
            ],
            "Resource": [
                "arn:aws:rds-db:us-east-2:111122223333:dbuser:db-ABCDEFGHIJKL01234/db_user"
            ]
        }
    ]
}
```

------

**Important**  
A user with administrator permissions can access DB instances without explicit permissions in an IAM policy. If you want to restrict administrator access to DB instances, you can create an IAM role with the appropriate, lesser privileged permissions and assign it to the administrator.

**Note**  
Don't confuse the `rds-db:` prefix with other RDS API operation prefixes that begin with `rds:`. You use the `rds-db:` prefix and the `rds-db:connect` action only for IAM database authentication. They aren't valid in any other context. 

The example policy includes a single statement with the following elements:
+ `Effect` – Specify `Allow` to grant access to the DB instance. If you don't explicitly allow access, then access is denied by default.
+ `Action` – Specify `rds-db:connect` to allow connections to the DB instance.
+ `Resource` – Specify an Amazon Resource Name (ARN) that describes one database account in one DB instance. The ARN format is as follows.

  ```
  arn:aws:rds-db:region:account-id:dbuser:DbiResourceId/db-user-name
  ```

  In this format, replace the following:
  + `region` is the AWS Region for the DB instance. In the example policy, the AWS Region is `us-east-2`.
  + `account-id` is the AWS account number for the DB instance. In the example policy, the account number is `111122223333`. The user must be in the same account as the account for the DB instance.

    To perform cross-account access, create an IAM role with the preceding policy in the account for the DB instance, and allow your other account to assume the role. 
  + `DbiResourceId` is the identifier for the DB instance. This identifier is unique to an AWS Region and never changes. In the example policy, the identifier is `db-ABCDEFGHIJKL01234`.

    To find a DB instance resource ID in the AWS Management Console for Amazon RDS, choose the DB instance to see its details. Then choose the **Configuration** tab. The **Resource ID** is shown in the **Configuration** section.

    Alternatively, you can use the following AWS CLI command to list the identifiers and resource IDs for all of your DB instances in the current AWS Region.

    ```
    aws rds describe-db-instances --query "DBInstances[*].[DBInstanceIdentifier,DbiResourceId]"
    ```

    If you are using Amazon Aurora, specify a `DbClusterResourceId` instead of a `DbiResourceId`. For more information, see [ Creating and using an IAM policy for IAM database access](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.IAMPolicy.html) in the *Amazon Aurora User Guide*.
**Note**  
If you are connecting to a database through RDS Proxy, specify the proxy resource ID, such as `prx-ABCDEFGHIJKL01234`. For information about using IAM database authentication with RDS Proxy, see [Connecting to a database using IAM authentication](rds-proxy-connecting.md#rds-proxy-connecting-iam).
  + `db-user-name` is the name of the database account to associate with IAM authentication. In the example policy, the database account is `db_user`.
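
Putting the parts together, the `Resource` ARN can be assembled mechanically. The helper below is hypothetical, shown only to make the format concrete:

```python
# Hypothetical helper that assembles the rds-db connect ARN described above.
def rds_db_connect_arn(region, account_id, dbi_resource_id, db_user_name):
    return (
        f"arn:aws:rds-db:{region}:{account_id}:"
        f"dbuser:{dbi_resource_id}/{db_user_name}"
    )

arn = rds_db_connect_arn("us-east-2", "111122223333",
                         "db-ABCDEFGHIJKL01234", "db_user")
assert arn == ("arn:aws:rds-db:us-east-2:111122223333:"
               "dbuser:db-ABCDEFGHIJKL01234/db_user")
```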

You can construct other ARNs to support various access patterns. The following policy allows access to two different database accounts in a DB instance.

------
#### [ JSON ]

****  

```
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
             "rds-db:connect"
         ],
         "Resource": [
             "arn:aws:rds-db:us-east-2:123456789012:dbuser:db-ABCDEFGHIJKL01234/jane_doe",
             "arn:aws:rds-db:us-east-2:123456789012:dbuser:db-ABCDEFGHIJKL01234/mary_roe"
         ]
      }
   ]
}
```

------

The following policy uses the wildcard character (`*`) to match all DB instances and database accounts for a particular AWS account and AWS Region.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds-db:connect"
            ],
            "Resource": [
                "arn:aws:rds-db:us-east-2:111122223333:dbuser:*/*"
            ]
        }
    ]
}
```

------

The following policy matches all of the DB instances for a particular AWS account and AWS Region. However, the policy only grants access to DB instances that have a `jane_doe` database account.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
             "rds-db:connect"
         ],
         "Resource": [
             "arn:aws:rds-db:us-east-2:123456789012:dbuser:*/jane_doe"
         ]
      }
   ]
}
```

------

An IAM user or role has access only to the databases that the mapped database user can access. For example, suppose that your DB instance has a database named *dev*, and another database named *test*. If the database user `jane_doe` has access only to *dev*, any users or roles that access that DB instance with the `jane_doe` user also have access only to *dev*. This access restriction also applies to other database objects, such as tables, views, and so on.

An administrator must create IAM policies that grant entities permission to perform specific API operations on the specified resources they need. The administrator must then attach those policies to the permission sets or roles that require those permissions. For examples of policies, see [Identity-based policy examples for Amazon RDS](security_iam_id-based-policy-examples.md).
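As a sketch of what such a policy looks like when assembled programmatically, the following Python example builds the identity-based policy document from the earlier examples before an administrator passes it to IAM (for example, with the `CreatePolicy` API). The account ID, resource ID, and database user name are hypothetical placeholders:

```python
import json

# Hypothetical placeholders; substitute your own values.
account_id = "123456789012"
region = "us-east-2"
resource_id = "db-ABCDEFGHIJKL01234"  # the DbiResourceId of the DB instance
db_user = "jane_doe"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["rds-db:connect"],
            "Resource": [
                f"arn:aws:rds-db:{region}:{account_id}:dbuser:{resource_id}/{db_user}"
            ],
        }
    ],
}

# The resulting JSON document is what you attach to a permission set or role.
print(json.dumps(policy, indent=2))
```

Building the ARN from its parts this way helps avoid typos in the `rds-db` service name or the `dbuser` resource type, which would silently deny connections.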

## Attaching an IAM policy to a permission set or role
<a name="UsingWithRDS.IAMDBAuth.IAMPolicy.Attaching"></a>

After you create an IAM policy to allow database authentication, you need to attach the policy to a permission set or role. For a tutorial on this topic, see [ Create and attach your first customer managed policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_managed-policies.html) in the *IAM User Guide*.

As you work through the tutorial, you can use one of the policy examples shown in this section as a starting point and tailor it to your needs. At the end of the tutorial, you have a permission set with an attached policy that can make use of the `rds-db:connect` action.

**Note**  
You can map multiple permission sets or roles to the same database user account. For example, suppose that your IAM policy specified the following resource ARN.  

```
arn:aws:rds-db:us-east-2:123456789012:dbuser:db-12ABC34DEFG5HIJ6KLMNOP78QR/jane_doe
```
If you attach the policy to *Jane*, *Bob*, and *Diego*, then each of those users can connect to the specified DB instance using the `jane_doe` database account.

# Creating a database account using IAM authentication
<a name="UsingWithRDS.IAMDBAuth.DBAccounts"></a>

With IAM database authentication, you don't need to assign database passwords to the user accounts that you create. If you remove an IAM user or role that is mapped to a database account, also remove the database account with the `DROP USER` statement.

**Note**  
The user name used for IAM authentication must match the case of the user name in the database.

**Topics**
+ [Using IAM authentication with MariaDB and MySQL](#UsingWithRDS.IAMDBAuth.DBAccounts.MySQL)
+ [Using IAM authentication with PostgreSQL](#UsingWithRDS.IAMDBAuth.DBAccounts.PostgreSQL)

## Using IAM authentication with MariaDB and MySQL
<a name="UsingWithRDS.IAMDBAuth.DBAccounts.MySQL"></a>

With MariaDB and MySQL, authentication is handled by `AWSAuthenticationPlugin`—an AWS-provided plugin that works seamlessly with IAM to authenticate your users. Connect to the DB instance as the master user or a different user who can create users and grant privileges. After connecting, issue the `CREATE USER` statement, as shown in the following example.

```
CREATE USER 'jane_doe' IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS'; 
```

The `IDENTIFIED WITH` clause allows MariaDB and MySQL to use the `AWSAuthenticationPlugin` to authenticate the database account (`jane_doe`). The `AS 'RDS'` clause refers to the authentication method. Make sure the specified database user name is the same as a resource in the IAM policy for IAM database access. For more information, see [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md). 

**Note**  
If you see the following message, it means that the AWS-provided plugin is not available for the current DB instance.  
`ERROR 1524 (HY000): Plugin 'AWSAuthenticationPlugin' is not loaded`  
To troubleshoot this error, verify that you are using a supported configuration and that you have enabled IAM database authentication on your DB instance. For more information, see [Region and version availability](UsingWithRDS.IAMDBAuth.md#UsingWithRDS.IAMDBAuth.Availability) and [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md).

After you create an account using `AWSAuthenticationPlugin`, you manage it in the same way as other database accounts. For example, you can modify account privileges with `GRANT` and `REVOKE` statements, or modify various account attributes with the `ALTER USER` statement. 

When you use IAM database authentication, database network traffic is encrypted using SSL/TLS. To require SSL connections for the user account, modify it with the following command.

```
ALTER USER 'jane_doe'@'%' REQUIRE SSL;     
```

## Using IAM authentication with PostgreSQL
<a name="UsingWithRDS.IAMDBAuth.DBAccounts.PostgreSQL"></a>

To use IAM authentication with PostgreSQL, connect to the DB instance as the master user or a different user who can create users and grant privileges. After connecting, create database users and then grant them the `rds_iam` role as shown in the following example.

```
CREATE USER db_userx; 
GRANT rds_iam TO db_userx;
```

Make sure the specified database user name is the same as a resource in the IAM policy for IAM database access. For more information, see [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md). You must grant the `rds_iam` role to use IAM authentication. You can use nested memberships or indirect grants of the role as well. 

# Connecting to your DB instance using IAM authentication
<a name="UsingWithRDS.IAMDBAuth.Connecting"></a>

With IAM database authentication, you use an authentication token when you connect to your DB instance. An *authentication token* is a string of characters that you use instead of a password. After you generate an authentication token, it's valid for 15 minutes before it expires. If you try to connect using an expired token, the connection request is denied.

Every authentication token must be accompanied by a valid signature, using AWS Signature Version 4. (For more information, see [Signature Version 4 signing process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) in the *AWS General Reference*.) The AWS CLI and an AWS SDK, such as the AWS SDK for Java or AWS SDK for Python (Boto3), can automatically sign each token you create.

You can use an authentication token when you connect to Amazon RDS from another AWS service, such as AWS Lambda. By using a token, you can avoid placing a password in your code. Alternatively, you can use an AWS SDK to programmatically create and sign an authentication token.

After you have a signed IAM authentication token, you can connect to an Amazon RDS DB instance. Following, you can find out how to do this using either a command line tool or an AWS SDK, such as the AWS SDK for Java or AWS SDK for Python (Boto3).

For more information, see the following blog posts:
+ [Use IAM authentication to connect with SQL Workbench/J to Aurora MySQL or Amazon RDS for MySQL](https://aws.amazon.com/blogs/database/use-iam-authentication-to-connect-with-sql-workbenchj-to-amazon-aurora-mysql-or-amazon-rds-for-mysql/)
+ [Using IAM authentication to connect with pgAdmin Amazon Aurora PostgreSQL or Amazon RDS for PostgreSQL](https://aws.amazon.com/blogs/database/using-iam-authentication-to-connect-with-pgadmin-amazon-aurora-postgresql-or-amazon-rds-for-postgresql/)

**Prerequisites**  
The following are prerequisites for connecting to your DB instance using IAM authentication:
+ [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md)
+ [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md)
+ [Creating a database account using IAM authentication](UsingWithRDS.IAMDBAuth.DBAccounts.md)

**Topics**
+ [Connecting to your DB instance using IAM authentication with the AWS drivers](IAMDBAuth.Connecting.Drivers.md)
+ [Connecting to your DB instance using IAM authentication from the command line: AWS CLI and mysql client](UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.md)
+ [Connecting to your DB instance using IAM authentication from the command line: AWS CLI and psql client](UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.PostgreSQL.md)
+ [Connecting to your DB instance using IAM authentication and the AWS SDK for .NET](UsingWithRDS.IAMDBAuth.Connecting.NET.md)
+ [Connecting to your DB instance using IAM authentication and the AWS SDK for Go](UsingWithRDS.IAMDBAuth.Connecting.Go.md)
+ [Connecting to your DB instance using IAM authentication and the AWS SDK for Java](UsingWithRDS.IAMDBAuth.Connecting.Java.md)
+ [Connecting to your DB instance using IAM authentication and the AWS SDK for Python (Boto3)](UsingWithRDS.IAMDBAuth.Connecting.Python.md)

# Connecting to your DB instance using IAM authentication with the AWS drivers
<a name="IAMDBAuth.Connecting.Drivers"></a>

The AWS suite of drivers is designed to support faster switchover and failover times, and authentication with AWS Secrets Manager, AWS Identity and Access Management (IAM), and Federated Identity. The AWS drivers monitor DB instance status and are aware of the instance topology to determine the new writer. This approach reduces switchover and failover times to single-digit seconds, compared to tens of seconds for open-source drivers.

For more information on the AWS drivers, see the corresponding language driver for your [RDS for MariaDB](MariaDB.Connecting.Drivers.md#MariaDB.Connecting.JDBCDriver), [RDS for MySQL](MySQL.Connecting.Drivers.md#MySQL.Connecting.JDBCDriver), or [RDS for PostgreSQL](PostgreSQL.Connecting.JDBCDriver.md) DB instance.

**Note**  
The only features supported for RDS for MariaDB are authentication with AWS Secrets Manager, AWS Identity and Access Management (IAM), and Federated Identity.

# Connecting to your DB instance using IAM authentication from the command line: AWS CLI and mysql client
<a name="UsingWithRDS.IAMDBAuth.Connecting.AWSCLI"></a>

You can connect from the command line to an Amazon RDS DB instance with the AWS CLI and `mysql` command line tool as described following.

**Prerequisites**  
The following are prerequisites for connecting to your DB instance using IAM authentication:
+ [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md)
+ [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md)
+ [Creating a database account using IAM authentication](UsingWithRDS.IAMDBAuth.DBAccounts.md)

**Note**  
For information about connecting to your database using SQL Workbench/J with IAM authentication, see the blog post [Use IAM authentication to connect with SQL Workbench/J to Aurora MySQL or Amazon RDS for MySQL](https://aws.amazon.com/blogs/database/use-iam-authentication-to-connect-with-sql-workbenchj-to-amazon-aurora-mysql-or-amazon-rds-for-mysql/).

**Topics**
+ [Generating an IAM authentication token](#UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.AuthToken)
+ [Connecting to a DB instance](#UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.Connect)

## Generating an IAM authentication token
<a name="UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.AuthToken"></a>

The following example shows how to get a signed authentication token using the AWS CLI.

```
aws rds generate-db-auth-token \
   --hostname rdsmysql.123456789012.us-west-2.rds.amazonaws.com \
   --port 3306 \
   --region us-west-2 \
   --username jane_doe
```

In the example, the parameters are as follows:
+ `--hostname` – The host name of the DB instance that you want to access
+ `--port` – The port number used for connecting to your DB instance
+ `--region` – The AWS Region where the DB instance is running
+ `--username` – The database account that you want to access

The first several characters of the token look like the following.

```
rdsmysql.123456789012.us-west-2.rds.amazonaws.com:3306/?Action=connect&DBUser=jane_doe&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=900...
```

**Note**  
You cannot use a custom Route 53 DNS record instead of the DB instance endpoint to generate the authentication token.
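Because the token is simply a presigned URL without the scheme, you can inspect its parameters with standard URL parsing when debugging a failed connection. The following Python sketch parses a truncated token like the example above (all values hypothetical):

```python
from urllib.parse import urlsplit, parse_qs

# Truncated example token with hypothetical values.
token = (
    "rdsmysql.123456789012.us-west-2.rds.amazonaws.com:3306/"
    "?Action=connect&DBUser=jane_doe"
    "&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=900"
)

# The token has no scheme, so add one before parsing.
params = parse_qs(urlsplit("https://" + token).query)
print(params["DBUser"][0])         # jane_doe
print(params["X-Amz-Expires"][0])  # 900
```

Checking `DBUser` and `X-Amz-Expires` this way quickly confirms that the token targets the right database account and hasn't been generated with an unexpected lifetime.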

## Connecting to a DB instance
<a name="UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.Connect"></a>

The general format for connecting is shown following.

```
mysql --host=hostName --port=portNumber --ssl-ca=full_path_to_ssl_certificate --enable-cleartext-plugin --user=userName --password=authToken
```

The parameters are as follows:
+ `--host` – The host name of the DB instance that you want to access
+ `--port` – The port number used for connecting to your DB instance
+ `--ssl-ca` – The full path to the SSL certificate file that contains the public key

  For more information about SSL/TLS support for MariaDB, see [SSL/TLS support for MariaDB DB instances on Amazon RDS](MariaDB.Concepts.SSLSupport.md).

  For more information about SSL/TLS support for MySQL, see [SSL/TLS support for MySQL DB instances on Amazon RDS](MySQL.Concepts.SSLSupport.md).

  To download an SSL certificate, see [Using SSL/TLS to encrypt a connection to a DB instance or cluster ](UsingWithRDS.SSL.md).
+ `--enable-cleartext-plugin` – A value that specifies that `AWSAuthenticationPlugin` must be used for this connection

  If you are using a MariaDB client, the `--enable-cleartext-plugin` option isn't required.
+ `--user` – The database account that you want to access
+ `--password` – A signed IAM authentication token

The authentication token consists of several hundred characters. It can be unwieldy on the command line. One way to work around this is to save the token to an environment variable, and then use that variable when you connect. The following example shows one way to perform this workaround. In the example, */sample\_dir/* is the full path to the SSL certificate file that contains the public key.

```
RDSHOST="mysqldb.123456789012.us-east-1.rds.amazonaws.com"
TOKEN="$(aws rds generate-db-auth-token --hostname $RDSHOST --port 3306 --region us-west-2 --username jane_doe )"

mysql --host=$RDSHOST --port=3306 --ssl-ca=/sample_dir/global-bundle.pem --enable-cleartext-plugin --user=jane_doe --password=$TOKEN
```

When you connect using `AWSAuthenticationPlugin`, the connection is secured using SSL. To verify this, type the following at the `mysql>` command prompt.

```
show status like 'Ssl%';
```

The following lines in the output show more details.

```
+---------------+-------------+
| Variable_name | Value       |
+---------------+-------------+
| ...           | ...         |
| Ssl_cipher    | AES256-SHA  |
| ...           | ...         |
| Ssl_version   | TLSv1.1     |
| ...           | ...         |
+---------------+-------------+
```

If you want to connect to a DB instance through a proxy, see [Connecting to a database using IAM authentication](rds-proxy-connecting.md#rds-proxy-connecting-iam).

# Connecting to your DB instance using IAM authentication from the command line: AWS CLI and psql client
<a name="UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.PostgreSQL"></a>

You can connect from the command line to an Amazon RDS for PostgreSQL DB instance with the AWS CLI and psql command line tool as described following.

**Prerequisites**  
The following are prerequisites for connecting to your DB instance using IAM authentication:
+ [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md)
+ [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md)
+ [Creating a database account using IAM authentication](UsingWithRDS.IAMDBAuth.DBAccounts.md)

**Note**  
For information about connecting to your database using pgAdmin with IAM authentication, see the blog post [Using IAM authentication to connect with pgAdmin Amazon Aurora PostgreSQL or Amazon RDS for PostgreSQL](https://aws.amazon.com/blogs/database/using-iam-authentication-to-connect-with-pgadmin-amazon-aurora-postgresql-or-amazon-rds-for-postgresql/).

**Topics**
+ [Generating an IAM authentication token](#UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.AuthToken.PostgreSQL)
+ [Connecting to an Amazon RDS PostgreSQL instance](#UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.Connect.PostgreSQL)

## Generating an IAM authentication token
<a name="UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.AuthToken.PostgreSQL"></a>

The authentication token consists of several hundred characters, so it can be unwieldy on the command line. One way to work around this is to save the token to an environment variable, and then use that variable when you connect. The following example shows how to use the AWS CLI to get a signed authentication token using the `generate-db-auth-token` command, and store it in a `PGPASSWORD` environment variable.

```
export RDSHOST="rdspostgres.123456789012.us-west-2.rds.amazonaws.com"
export PGPASSWORD="$(aws rds generate-db-auth-token --hostname $RDSHOST --port 5432 --region us-west-2 --username jane_doe )"
```

In the example, the parameters to the `generate-db-auth-token` command are as follows:
+ `--hostname` – The host name of the DB instance that you want to access
+ `--port` – The port number used for connecting to your DB instance
+ `--region` – The AWS Region where the DB instance is running
+ `--username` – The database account that you want to access

The first several characters of the generated token look like the following.

```
rdspostgres.123456789012.us-west-2.rds.amazonaws.com:5432/?Action=connect&DBUser=jane_doe&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=900...
```

**Note**  
You cannot use a custom Route 53 DNS record instead of the DB instance endpoint to generate the authentication token.

## Connecting to an Amazon RDS PostgreSQL instance
<a name="UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.Connect.PostgreSQL"></a>

The general format for using psql to connect is shown following.

```
psql "host=hostName port=portNumber sslmode=verify-full sslrootcert=full_path_to_ssl_certificate dbname=DBName user=userName password=authToken"
```

The parameters are as follows:
+ `host` – The host name of the DB instance that you want to access
+ `port` – The port number used for connecting to your DB instance
+ `sslmode` – The SSL mode to use

  When you use `sslmode=verify-full`, the SSL connection verifies the DB instance endpoint against the endpoint in the SSL certificate.
+ `sslrootcert` – The full path to the SSL certificate file that contains the public key

  For more information, see [Using SSL with a PostgreSQL DB instance](PostgreSQL.Concepts.General.SSL.md).

  To download an SSL certificate, see [Using SSL/TLS to encrypt a connection to a DB instance or cluster ](UsingWithRDS.SSL.md).
+ `dbname` – The database that you want to access
+ `user` – The database account that you want to access
+ `password` – A signed IAM authentication token

**Note**  
You cannot use a custom Route 53 DNS record instead of the DB instance endpoint to generate the authentication token.

The following example shows using psql to connect. In the example, psql uses the environment variable `RDSHOST` for the host and the environment variable `PGPASSWORD` for the generated token. Also, */sample\_dir/* is the full path to the SSL certificate file that contains the public key.

```
export RDSHOST="rdspostgres.123456789012.us-west-2.rds.amazonaws.com"
export PGPASSWORD="$(aws rds generate-db-auth-token --hostname $RDSHOST --port 5432 --region us-west-2 --username jane_doe )"
                    
psql "host=$RDSHOST port=5432 sslmode=verify-full sslrootcert=/sample_dir/global-bundle.pem dbname=DBName user=jane_doe password=$PGPASSWORD"
```

If you want to connect to a DB instance through a proxy, see [Connecting to a database using IAM authentication](rds-proxy-connecting.md#rds-proxy-connecting-iam).

# Connecting to your DB instance using IAM authentication and the AWS SDK for .NET
<a name="UsingWithRDS.IAMDBAuth.Connecting.NET"></a>

You can connect to an RDS for MariaDB, MySQL, or PostgreSQL DB instance with the AWS SDK for .NET as described following.

**Prerequisites**  
The following are prerequisites for connecting to your DB instance using IAM authentication:
+ [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md)
+ [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md)
+ [Creating a database account using IAM authentication](UsingWithRDS.IAMDBAuth.DBAccounts.md)

**Examples**  
The following code examples show how to generate an authentication token, and then use it to connect to a DB instance.

To run these code examples, you need the [AWS SDK for .NET](http://aws.amazon.com/sdk-for-net/), found on the AWS site. The `AWSSDK.Core` and the `AWSSDK.RDS` packages are required. To connect to a DB instance, use the .NET database connector for your DB engine, such as MySqlConnector for MariaDB or MySQL, or Npgsql for PostgreSQL.

This code connects to a MariaDB or MySQL DB instance. Modify the values of the following variables as needed:
+ `server` – The endpoint of the DB instance that you want to access
+ `user` – The database account that you want to access
+ `database` – The database that you want to access
+ `port` – The port number used for connecting to your DB instance
+ `SslMode` – The SSL mode to use

  When you use `SslMode=Required`, the SSL connection verifies the DB instance endpoint against the endpoint in the SSL certificate.
+ `SslCa` – The full path to the SSL certificate for Amazon RDS

  To download a certificate, see [Using SSL/TLS to encrypt a connection to a DB instance or cluster ](UsingWithRDS.SSL.md).

**Note**  
You cannot use a custom Route 53 DNS record instead of the DB instance endpoint to generate the authentication token.

```
using System;
using System.Data;
using MySql.Data;
using MySql.Data.MySqlClient;
using Amazon;

namespace ubuntu
{
  class Program
  {
    static void Main(string[] args)
    {
      var pwd = Amazon.RDS.Util.RDSAuthTokenGenerator.GenerateAuthToken(RegionEndpoint.USEast1, "mysqldb.123456789012.us-east-1.rds.amazonaws.com", 3306, "jane_doe");
      // for debug only Console.Write("{0}\n", pwd);  //this verifies the token is generated

      MySqlConnection conn = new MySqlConnection($"server=mysqldb.123456789012.us-east-1.rds.amazonaws.com;user=jane_doe;database=mydb;port=3306;password={pwd};SslMode=Required;SslCa=full_path_to_ssl_certificate");
      conn.Open();

      // Define a query
      MySqlCommand sampleCommand = new MySqlCommand("SHOW DATABASES;", conn);

      // Execute a query
      MySqlDataReader mysqlDataRdr = sampleCommand.ExecuteReader();

      // Read all rows and output the first column in each row
      while (mysqlDataRdr.Read())
        Console.WriteLine(mysqlDataRdr[0]);

      mysqlDataRdr.Close();
      // Close connection
      conn.Close();
    }
  }
}
```

This code connects to a PostgreSQL DB instance.

Modify the values of the following variables as needed:
+ `Server` – The endpoint of the DB instance that you want to access
+ `User ID` – The database account that you want to access
+ `Database` – The database that you want to access
+ `Port` – The port number used for connecting to your DB instance
+ `SSL Mode` – The SSL mode to use

  When you use `SSL Mode=Require`, the connection is encrypted using SSL.
+ `Root Certificate` – The full path to the SSL certificate for Amazon RDS

  To download a certificate, see [Using SSL/TLS to encrypt a connection to a DB instance or cluster ](UsingWithRDS.SSL.md).

**Note**  
You cannot use a custom Route 53 DNS record instead of the DB instance endpoint to generate the authentication token.

```
using System;
using Npgsql;
using Amazon.RDS.Util;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            var pwd = RDSAuthTokenGenerator.GenerateAuthToken("postgresmydb.123456789012.us-east-1.rds.amazonaws.com", 5432, "jane_doe");
// for debug only Console.Write("{0}\n", pwd);  //this verifies the token is generated

            NpgsqlConnection conn = new NpgsqlConnection($"Server=postgresmydb.123456789012.us-east-1.rds.amazonaws.com;User Id=jane_doe;Password={pwd};Database=mydb;SSL Mode=Require;Root Certificate=full_path_to_ssl_certificate");
            conn.Open();

            // Define a query
            NpgsqlCommand cmd = new NpgsqlCommand("select count(*) FROM pg_user", conn);

            // Execute a query
            NpgsqlDataReader dr = cmd.ExecuteReader();

            // Read all rows and output the first column in each row
            while (dr.Read())
                Console.Write("{0}\n", dr[0]);

            // Close connection
            conn.Close();
        }
    }
}
```

If you want to connect to a DB instance through a proxy, see [Connecting to a database using IAM authentication](rds-proxy-connecting.md#rds-proxy-connecting-iam).

# Connecting to your DB instance using IAM authentication and the AWS SDK for Go
<a name="UsingWithRDS.IAMDBAuth.Connecting.Go"></a>

You can connect to an RDS for MariaDB, MySQL, or PostgreSQL DB instance with the AWS SDK for Go as described following.

**Prerequisites**  
The following are prerequisites for connecting to your DB instance using IAM authentication:
+ [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md)
+ [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md)
+ [Creating a database account using IAM authentication](UsingWithRDS.IAMDBAuth.DBAccounts.md)

**Examples**  
To run these code examples, you need the [AWS SDK for Go](http://aws.amazon.com/sdk-for-go/), found on the AWS site.

Modify the values of the following variables as needed:
+ `dbName` – The database that you want to access
+ `dbUser` – The database account that you want to access
+ `dbHost` – The endpoint of the DB instance that you want to access
**Note**  
You cannot use a custom Route 53 DNS record instead of the DB instance endpoint to generate the authentication token.
+ `dbPort` – The port number used for connecting to your DB instance
+ `region` – The AWS Region where the DB instance is running

In addition, make sure the imported libraries in the sample code exist on your system.

**Important**  
The examples in this section use the following code to provide credentials that access a database from a local environment:  
`creds := credentials.NewEnvCredentials()`  
If you are accessing a database from an AWS service, such as Amazon EC2 or Amazon ECS, you can replace the code with the following code:  
`sess := session.Must(session.NewSession())`  
`creds := sess.Config.Credentials`  
If you make this change, make sure you add the following import:  
`"github.com/aws/aws-sdk-go/aws/session"`

**Topics**
+ [Connecting using IAM authentication and the AWS SDK for Go V2](#UsingWithRDS.IAMDBAuth.Connecting.GoV2)
+ [Connecting using IAM authentication and the AWS SDK for Go V1](#UsingWithRDS.IAMDBAuth.Connecting.GoV1)

## Connecting using IAM authentication and the AWS SDK for Go V2
<a name="UsingWithRDS.IAMDBAuth.Connecting.GoV2"></a>

You can connect to a DB instance using IAM authentication and the AWS SDK for Go V2.

The following code examples show how to generate an authentication token, and then use it to connect to a DB instance. 

This code connects to a MariaDB or MySQL DB instance.

```
package main
                
import (
     "context"
     "database/sql"
     "fmt"

     "github.com/aws/aws-sdk-go-v2/config"
     "github.com/aws/aws-sdk-go-v2/feature/rds/auth"
     _ "github.com/go-sql-driver/mysql"
)

func main() {

     var dbName string = "DatabaseName"
     var dbUser string = "DatabaseUser"
     var dbHost string = "mysqldb.123456789012.us-east-1.rds.amazonaws.com"
     var dbPort int = 3306
     var dbEndpoint string = fmt.Sprintf("%s:%d", dbHost, dbPort)
     var region string = "us-east-1"

    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
    	panic("configuration error: " + err.Error())
    }

    authenticationToken, err := auth.BuildAuthToken(
    	context.TODO(), dbEndpoint, region, dbUser, cfg.Credentials)
    if err != nil {
	    panic("failed to create authentication token: " + err.Error())
    }

    dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?tls=true&allowCleartextPasswords=true",
        dbUser, authenticationToken, dbEndpoint, dbName,
    )

    db, err := sql.Open("mysql", dsn)
    if err != nil {
        panic(err)
    }

    err = db.Ping()
    if err != nil {
        panic(err)
    }
}
```

This code connects to a PostgreSQL DB instance.

```
package main

import (
     "context"
     "database/sql"
     "fmt"

     "github.com/aws/aws-sdk-go-v2/config"
     "github.com/aws/aws-sdk-go-v2/feature/rds/auth"
     _ "github.com/lib/pq"
)

func main() {

     var dbName string = "DatabaseName"
     var dbUser string = "DatabaseUser"
     var dbHost string = "postgresmydb.123456789012.us-east-1.rds.amazonaws.com"
     var dbPort int = 5432
     var dbEndpoint string = fmt.Sprintf("%s:%d", dbHost, dbPort)
     var region string = "us-east-1"

    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
    	panic("configuration error: " + err.Error())
    }

    authenticationToken, err := auth.BuildAuthToken(
    	context.TODO(), dbEndpoint, region, dbUser, cfg.Credentials)
    if err != nil {
	    panic("failed to create authentication token: " + err.Error())
    }

    dsn := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s",
        dbHost, dbPort, dbUser, authenticationToken, dbName,
    )

    db, err := sql.Open("postgres", dsn)
    if err != nil {
        panic(err)
    }

    err = db.Ping()
    if err != nil {
        panic(err)
    }
}
```

If you want to connect to a DB instance through a proxy, see [Connecting to a database using IAM authentication](rds-proxy-connecting.md#rds-proxy-connecting-iam).

## Connecting using IAM authentication and the AWS SDK for Go V1
<a name="UsingWithRDS.IAMDBAuth.Connecting.GoV1"></a>

You can connect to a DB instance using IAM authentication and the AWS SDK for Go V1.

The following code examples show how to generate an authentication token, and then use it to connect to a DB instance. 

This code connects to a MariaDB or MySQL DB instance.

```
package main
         
import (
    "database/sql"
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/service/rds/rdsutils"
    _ "github.com/go-sql-driver/mysql"
)

func main() {
    dbName := "app"
    dbUser := "jane_doe"
    dbHost := "mysqldb.123456789012.us-east-1.rds.amazonaws.com"
    dbPort := 3306
    dbEndpoint := fmt.Sprintf("%s:%d", dbHost, dbPort)
    region := "us-east-1"

    creds := credentials.NewEnvCredentials()
    authToken, err := rdsutils.BuildAuthToken(dbEndpoint, region, dbUser, creds)
    if err != nil {
        panic(err)
    }

    dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?tls=true&allowCleartextPasswords=true",
        dbUser, authToken, dbEndpoint, dbName,
    )

    db, err := sql.Open("mysql", dsn)
    if err != nil {
        panic(err)
    }

    err = db.Ping()
    if err != nil {
        panic(err)
    }
}
```

This code connects to a PostgreSQL DB instance.

```
package main

import (
	"database/sql"
	"fmt"

	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/service/rds/rdsutils"
	_ "github.com/lib/pq"
)

func main() {
    dbName := "app"
    dbUser := "jane_doe"
    dbHost := "postgresmydb.123456789012.us-east-1.rds.amazonaws.com"
    dbPort := 5432
    dbEndpoint := fmt.Sprintf("%s:%d", dbHost, dbPort)
    region := "us-east-1"

    creds := credentials.NewEnvCredentials()
    authToken, err := rdsutils.BuildAuthToken(dbEndpoint, region, dbUser, creds)
    if err != nil {
        panic(err)
    }

    dsn := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s",
        dbHost, dbPort, dbUser, authToken, dbName,
    )

    db, err := sql.Open("postgres", dsn)
    if err != nil {
        panic(err)
    }

    err = db.Ping()
    if err != nil {
        panic(err)
    }
}
```

If you want to connect to a DB instance through a proxy, see [Connecting to a database using IAM authentication](rds-proxy-connecting.md#rds-proxy-connecting-iam).

# Connecting to your DB instance using IAM authentication and the AWS SDK for Java
<a name="UsingWithRDS.IAMDBAuth.Connecting.Java"></a>

You can connect to an RDS for MariaDB, MySQL, or PostgreSQL DB instance with the AWS SDK for Java as described following.

**Prerequisites**  
The following are prerequisites for connecting to your DB instance using IAM authentication:
+ [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md)
+ [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md)
+ [Creating a database account using IAM authentication](UsingWithRDS.IAMDBAuth.DBAccounts.md)
+ [Set up the AWS SDK for Java](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-install.html)

For examples of how to use the SDK for Java 2.x, see [Amazon RDS examples using SDK for Java 2.x](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/java_rds_code_examples.html). You can also use the AWS Advanced JDBC Wrapper. For more information, see the [AWS Advanced JDBC Wrapper documentation](https://github.com/aws/aws-advanced-jdbc-wrapper/blob/main/docs/Documentation.md).

**Topics**
+ [Generating an IAM authentication token](#UsingWithRDS.IAMDBAuth.Connecting.Java.AuthToken)
+ [Manually constructing an IAM authentication token](#UsingWithRDS.IAMDBAuth.Connecting.Java.AuthToken2)
+ [Connecting to a DB instance](#UsingWithRDS.IAMDBAuth.Connecting.Java.AuthToken.Connect)

## Generating an IAM authentication token
<a name="UsingWithRDS.IAMDBAuth.Connecting.Java.AuthToken"></a>

If you are writing programs using the AWS SDK for Java, you can get a signed authentication token using the `RdsIamAuthTokenGenerator` class. Using this class requires that you provide AWS credentials. To do this, you create an instance of the `DefaultAWSCredentialsProviderChain` class. `DefaultAWSCredentialsProviderChain` uses the first AWS access key and secret key that it finds in the [default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default). For more information about AWS access keys, see [Managing access keys for users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).

**Note**  
You cannot use a custom Route 53 DNS record instead of the DB instance endpoint to generate the authentication token.

After you create an instance of `RdsIamAuthTokenGenerator`, you can call the `getAuthToken` method to obtain a signed token. Provide the AWS Region, host name, port number, and user name. The following code example illustrates how to do this.

```
package com.amazonaws.codesamples;

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.rds.auth.GetIamAuthTokenRequest;
import com.amazonaws.services.rds.auth.RdsIamAuthTokenGenerator;

public class GenerateRDSAuthToken {

    public static void main(String[] args) {

	    String region = "us-west-2";
	    String hostname = "rdsmysql.123456789012.us-west-2.rds.amazonaws.com";
	    String port = "3306";
	    String username = "jane_doe";
	
	    System.out.println(generateAuthToken(region, hostname, port, username));
    }

    static String generateAuthToken(String region, String hostName, String port, String username) {

	    RdsIamAuthTokenGenerator generator = RdsIamAuthTokenGenerator.builder()
		    .credentials(new DefaultAWSCredentialsProviderChain())
		    .region(region)
		    .build();

	    String authToken = generator.getAuthToken(
		    GetIamAuthTokenRequest.builder()
		    .hostname(hostName)
		    .port(Integer.parseInt(port))
		    .userName(username)
		    .build());
	    
	    return authToken;
    }

}
```

## Manually constructing an IAM authentication token
<a name="UsingWithRDS.IAMDBAuth.Connecting.Java.AuthToken2"></a>

In Java, the easiest way to generate an authentication token is to use `RdsIamAuthTokenGenerator`. This class creates an authentication token for you, and then signs it using AWS signature version 4. For more information, see [Signature version 4 signing process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) in the *AWS General Reference.*

However, you can also construct and sign an authentication token manually, as shown in the following code example.

```
package com.amazonaws.codesamples;

import com.amazonaws.SdkClientException;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.auth.SigningAlgorithm;
import com.amazonaws.util.BinaryUtils;
import org.apache.commons.lang3.StringUtils;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.Charset;
import java.security.MessageDigest;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.SortedMap;
import java.util.TreeMap;

import static com.amazonaws.auth.internal.SignerConstants.AWS4_TERMINATOR;
import static com.amazonaws.util.StringUtils.UTF8;

public class CreateRDSAuthTokenManually {
    public static String httpMethod = "GET";
    public static String action = "connect";
    public static String canonicalURIParameter = "/";
    public static SortedMap<String, String> canonicalQueryParameters = new TreeMap<>();
    public static String payload = StringUtils.EMPTY;
    public static String signedHeader = "host";
    public static String algorithm = "AWS4-HMAC-SHA256";
    public static String serviceName = "rds-db";
    public static String requestWithoutSignature;

    public static void main(String[] args) throws Exception {

        String region = "us-west-2";
        String instanceName = "rdsmysql.123456789012.us-west-2.rds.amazonaws.com";
        String port = "3306";
        String username = "jane_doe";
	
        Date now = new Date();
        String date = new SimpleDateFormat("yyyyMMdd").format(now);
        String dateTimeStamp = new SimpleDateFormat("yyyyMMdd'T'HHmmss'Z'").format(now);
        DefaultAWSCredentialsProviderChain creds = new DefaultAWSCredentialsProviderChain();
        String awsAccessKey = creds.getCredentials().getAWSAccessKeyId();
        String awsSecretKey = creds.getCredentials().getAWSSecretKey();
        String expirySeconds = "900";

        System.out.println("Step 1:  Create a canonical request:");
        String canonicalString = createCanonicalString(username, awsAccessKey, date, dateTimeStamp, region, expirySeconds, instanceName, port);
        System.out.println(canonicalString);
        System.out.println();

        System.out.println("Step 2:  Create a string to sign:");        
        String stringToSign = createStringToSign(dateTimeStamp, canonicalString, awsAccessKey, date, region);
        System.out.println(stringToSign);
        System.out.println();

        System.out.println("Step 3:  Calculate the signature:");        
        String signature = BinaryUtils.toHex(calculateSignature(stringToSign, newSigningKey(awsSecretKey, date, region, serviceName)));
        System.out.println(signature);
        System.out.println();

        System.out.println("Step 4:  Add the signing info to the request");                
        System.out.println(appendSignature(signature));
        System.out.println();
        
    }

    //Step 1: Create a canonical request. The date should be in the format YYYYMMDD, and dateTime should be in the format YYYYMMDDTHHMMSSZ.
    public static String createCanonicalString(String user, String accessKey, String date, String dateTime, String region, String expiryPeriod, String hostName, String port) throws Exception {
        canonicalQueryParameters.put("Action", action);
        canonicalQueryParameters.put("DBUser", user);
        canonicalQueryParameters.put("X-Amz-Algorithm", "AWS4-HMAC-SHA256");
        canonicalQueryParameters.put("X-Amz-Credential", accessKey + "%2F" + date + "%2F" + region + "%2F" + serviceName + "%2Faws4_request");
        canonicalQueryParameters.put("X-Amz-Date", dateTime);
        canonicalQueryParameters.put("X-Amz-Expires", expiryPeriod);
        canonicalQueryParameters.put("X-Amz-SignedHeaders", signedHeader);
        String canonicalQueryString = "";
        while(!canonicalQueryParameters.isEmpty()) {
            String currentQueryParameter = canonicalQueryParameters.firstKey();
            String currentQueryParameterValue = canonicalQueryParameters.remove(currentQueryParameter);
            canonicalQueryString = canonicalQueryString + currentQueryParameter + "=" + currentQueryParameterValue;
            if (!currentQueryParameter.equals("X-Amz-SignedHeaders")) {
                canonicalQueryString += "&";
            }
        }
        String canonicalHeaders = "host:" + hostName + ":" + port + '\n';
        requestWithoutSignature = hostName + ":" + port + "/?" + canonicalQueryString;

        String hashedPayload = BinaryUtils.toHex(hash(payload));
        return httpMethod + '\n' + canonicalURIParameter + '\n' + canonicalQueryString + '\n' + canonicalHeaders + '\n' + signedHeader + '\n' + hashedPayload;

    }

    //Step 2: Create a string to sign using sig v4
    public static String createStringToSign(String dateTime, String canonicalRequest, String accessKey, String date, String region) throws Exception {
        String credentialScope = date + "/" + region + "/" + serviceName + "/aws4_request";
        return algorithm + '\n' + dateTime + '\n' + credentialScope + '\n' + BinaryUtils.toHex(hash(canonicalRequest));

    }

    //Step 3: Calculate signature
    /**
     * Step 3 of the AWS Signature version 4 calculation. It involves deriving
     * the signing key and computing the signature. Refer to
     * http://docs.aws.amazon.com/general/latest/gr/sigv4-calculate-signature.html
     */
    public static byte[] calculateSignature(String stringToSign,
                                            byte[] signingKey) {
        return sign(stringToSign.getBytes(Charset.forName("UTF-8")), signingKey,
                SigningAlgorithm.HmacSHA256);
    }

    public static byte[] sign(byte[] data, byte[] key,
                          SigningAlgorithm algorithm) throws SdkClientException {
        try {
            Mac mac = algorithm.getMac();
            mac.init(new SecretKeySpec(key, algorithm.toString()));
            return mac.doFinal(data);
        } catch (Exception e) {
            throw new SdkClientException(
                    "Unable to calculate a request signature: "
                            + e.getMessage(), e);
        }
    }

    public static byte[] newSigningKey(String secretKey,
                                   String dateStamp, String regionName, String serviceName) {
        byte[] kSecret = ("AWS4" + secretKey).getBytes(Charset.forName("UTF-8"));
        byte[] kDate = sign(dateStamp, kSecret, SigningAlgorithm.HmacSHA256);
        byte[] kRegion = sign(regionName, kDate, SigningAlgorithm.HmacSHA256);
        byte[] kService = sign(serviceName, kRegion,
                SigningAlgorithm.HmacSHA256);
        return sign(AWS4_TERMINATOR, kService, SigningAlgorithm.HmacSHA256);
    }

    public static byte[] sign(String stringData, byte[] key,
                       SigningAlgorithm algorithm) throws SdkClientException {
        try {
            byte[] data = stringData.getBytes(UTF8);
            return sign(data, key, algorithm);
        } catch (Exception e) {
            throw new SdkClientException(
                    "Unable to calculate a request signature: "
                            + e.getMessage(), e);
        }
    }

    //Step 4: append the signature
    public static String appendSignature(String signature) {
        return requestWithoutSignature + "&X-Amz-Signature=" + signature;
    }

    public static byte[] hash(String s) throws Exception {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(s.getBytes(UTF8));
            return md.digest();
        } catch (Exception e) {
            throw new SdkClientException(
                    "Unable to compute hash while signing request: "
                            + e.getMessage(), e);
        }
    }
}
```

## Connecting to a DB instance
<a name="UsingWithRDS.IAMDBAuth.Connecting.Java.AuthToken.Connect"></a>

The following code example shows how to generate an authentication token, and then use it to connect to an instance running MariaDB or MySQL. 

To run this code example, you need the [AWS SDK for Java](http://aws.amazon.com/sdk-for-java/), found on the AWS site. In addition, you need the following:
+ MySQL Connector/J. This code example was tested with `mysql-connector-java-5.1.33-bin.jar`.
+ An intermediate certificate for Amazon RDS that is specific to an AWS Region. (For more information, see [Using SSL/TLS to encrypt a connection to a DB instance or cluster ](UsingWithRDS.SSL.md).) At runtime, the class loader looks for the certificate in the same directory as this Java code example, so that the class loader can find it.
+ Modify the values of the following variables as needed:
  + `RDS_INSTANCE_HOSTNAME` – The host name of the DB instance that you want to access.
  + `RDS_INSTANCE_PORT` – The port number used for connecting to your DB instance.
  + `REGION_NAME` – The AWS Region where the DB instance is running.
  + `DB_USER` – The database account that you want to access.
  + `SSL_CERTIFICATE` – An SSL certificate for Amazon RDS that is specific to an AWS Region.

    To download a certificate for your AWS Region, see [Using SSL/TLS to encrypt a connection to a DB instance or cluster ](UsingWithRDS.SSL.md). Place the SSL certificate in the same directory as this Java program file, so that the class loader can find the certificate at runtime.

This code example obtains AWS credentials from the [default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).

**Note**  
As a security best practice, specify a password for `DEFAULT_KEY_STORE_PASSWORD` other than the default shown here.

```
package com.amazonaws.samples;

import com.amazonaws.services.rds.auth.RdsIamAuthTokenGenerator;
import com.amazonaws.services.rds.auth.GetIamAuthTokenRequest;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.auth.AWSStaticCredentialsProvider;

import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

import java.net.URL;

public class IAMDatabaseAuthenticationTester {
    //AWS credentials of the IAM user with a policy enabling IAM database authenticated access to the DB by the DB user.
    private static final DefaultAWSCredentialsProviderChain creds = new DefaultAWSCredentialsProviderChain();
    private static final String AWS_ACCESS_KEY = creds.getCredentials().getAWSAccessKeyId();
    private static final String AWS_SECRET_KEY = creds.getCredentials().getAWSSecretKey();

    //Configuration parameters for the generation of the IAM Database Authentication token
    private static final String RDS_INSTANCE_HOSTNAME = "rdsmysql.123456789012.us-west-2.rds.amazonaws.com";
    private static final int RDS_INSTANCE_PORT = 3306;
    private static final String REGION_NAME = "us-west-2";
    private static final String DB_USER = "jane_doe";
    private static final String JDBC_URL = "jdbc:mysql://" + RDS_INSTANCE_HOSTNAME + ":" + RDS_INSTANCE_PORT;

    private static final String SSL_CERTIFICATE = "rds-ca-2019-us-west-2.pem";

    private static final String KEY_STORE_TYPE = "JKS";
    private static final String KEY_STORE_PROVIDER = "SUN";
    private static final String KEY_STORE_FILE_PREFIX = "sys-connect-via-ssl-test-cacerts";
    private static final String KEY_STORE_FILE_SUFFIX = ".jks";
    private static final String DEFAULT_KEY_STORE_PASSWORD = "changeit";

    public static void main(String[] args) throws Exception {
        //get the connection
        Connection connection = getDBConnectionUsingIam();

        //verify the connection is successful
        Statement stmt = connection.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT 'Success!' FROM DUAL;");
        while (rs.next()) {
            String id = rs.getString(1);
            System.out.println(id); //Should print "Success!"
        }

        //close the connection
        stmt.close();
        connection.close();
        
        clearSslProperties();
        
    }

    /**
     * This method returns a connection to the db instance authenticated using IAM Database Authentication
     * @return
     * @throws Exception
     */
    private static Connection getDBConnectionUsingIam() throws Exception {
        setSslProperties();
        return DriverManager.getConnection(JDBC_URL, setMySqlConnectionProperties());
    }

    /**
     * This method sets the mysql connection properties which includes the IAM Database Authentication token
     * as the password. It also specifies that SSL verification is required.
     * @return
     */
    private static Properties setMySqlConnectionProperties() {
        Properties mysqlConnectionProperties = new Properties();
        mysqlConnectionProperties.setProperty("verifyServerCertificate","true");
        mysqlConnectionProperties.setProperty("useSSL", "true");
        mysqlConnectionProperties.setProperty("user",DB_USER);
        mysqlConnectionProperties.setProperty("password",generateAuthToken());
        return mysqlConnectionProperties;
    }

    /**
     * This method generates the IAM Auth Token.
     * An example IAM Auth Token would look like follows:
     * btusi123---cmz7kenwo2ye---rds---cn-north-1.amazonaws.com.rproxy.goskope.com.cn:3306/?Action=connect&DBUser=iamtestuser&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20171003T010726Z&X-Amz-SignedHeaders=host&X-Amz-Expires=899&X-Amz-Credential=AKIAPFXHGVDI5RNFO4AQ%2F20171003%2Fcn-north-1%2Frds-db%2Faws4_request&X-Amz-Signature=f9f45ef96c1f770cdad11a53e33ffa4c3730bc03fdee820cfdf1322eed15483b
     * @return
     */
    private static String generateAuthToken() {
        BasicAWSCredentials awsCredentials = new BasicAWSCredentials(AWS_ACCESS_KEY, AWS_SECRET_KEY);

        RdsIamAuthTokenGenerator generator = RdsIamAuthTokenGenerator.builder()
                .credentials(new AWSStaticCredentialsProvider(awsCredentials)).region(REGION_NAME).build();
        return generator.getAuthToken(GetIamAuthTokenRequest.builder()
                .hostname(RDS_INSTANCE_HOSTNAME).port(RDS_INSTANCE_PORT).userName(DB_USER).build());
    }

    /**
     * This method sets the SSL properties which specify the key store file, its type and password:
     * @throws Exception
     */
    private static void setSslProperties() throws Exception {
        System.setProperty("javax.net.ssl.trustStore", createKeyStoreFile());
        System.setProperty("javax.net.ssl.trustStoreType", KEY_STORE_TYPE);
        System.setProperty("javax.net.ssl.trustStorePassword", DEFAULT_KEY_STORE_PASSWORD);
    }

    /**
     * This method returns the path of the Key Store File needed for the SSL verification during the IAM Database Authentication to
     * the db instance.
     * @return
     * @throws Exception
     */
    private static String createKeyStoreFile() throws Exception {
        return createKeyStoreFile(createCertificate()).getPath();
    }

    /**
     *  This method generates the SSL certificate
     * @return
     * @throws Exception
     */
    private static X509Certificate createCertificate() throws Exception {
        CertificateFactory certFactory = CertificateFactory.getInstance("X.509");
        URL url = new File(SSL_CERTIFICATE).toURI().toURL();
        if (url == null) {
            throw new Exception();
        }
        try (InputStream certInputStream = url.openStream()) {
            return (X509Certificate) certFactory.generateCertificate(certInputStream);
        }
    }

    /**
     * This method creates the Key Store File
     * @param rootX509Certificate - the SSL certificate to be stored in the KeyStore
     * @return
     * @throws Exception
     */
    private static File createKeyStoreFile(X509Certificate rootX509Certificate) throws Exception {
        File keyStoreFile = File.createTempFile(KEY_STORE_FILE_PREFIX, KEY_STORE_FILE_SUFFIX);
        try (FileOutputStream fos = new FileOutputStream(keyStoreFile.getPath())) {
            KeyStore ks = KeyStore.getInstance(KEY_STORE_TYPE, KEY_STORE_PROVIDER);
            ks.load(null);
            ks.setCertificateEntry("rootCaCertificate", rootX509Certificate);
            ks.store(fos, DEFAULT_KEY_STORE_PASSWORD.toCharArray());
        }
        return keyStoreFile;
    }
    
    /**
     * This method clears the SSL properties.
     * @throws Exception
     */
    private static void clearSslProperties() throws Exception {
           System.clearProperty("javax.net.ssl.trustStore");
           System.clearProperty("javax.net.ssl.trustStoreType");
           System.clearProperty("javax.net.ssl.trustStorePassword"); 
    }
    
}
```

If you want to connect to a DB instance through a proxy, see [Connecting to a database using IAM authentication](rds-proxy-connecting.md#rds-proxy-connecting-iam).

# Connecting to your DB instance using IAM authentication and the AWS SDK for Python (Boto3)
<a name="UsingWithRDS.IAMDBAuth.Connecting.Python"></a>

You can connect to an RDS for MariaDB, MySQL, or PostgreSQL DB instance with the AWS SDK for Python (Boto3) as described following.

**Prerequisites**  
The following are prerequisites for connecting to your DB instance using IAM authentication:
+ [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md)
+ [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md)
+ [Creating a database account using IAM authentication](UsingWithRDS.IAMDBAuth.DBAccounts.md)

In addition, make sure the imported libraries in the sample code exist on your system.

**Examples**  
The code examples use profiles for shared credentials. For information about specifying credentials, see [Credentials](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html) in the AWS SDK for Python (Boto3) documentation.

The following code examples show how to generate an authentication token, and then use it to connect to a DB instance. 

To run this code example, you need the [AWS SDK for Python (Boto3)](http://aws.amazon.com/sdk-for-python/), found on the AWS site.

Modify the values of the following variables as needed:
+ `ENDPOINT` – The endpoint of the DB instance that you want to access
+ `PORT` – The port number used for connecting to your DB instance
+ `USER` – The database account that you want to access
+ `REGION` – The AWS Region where the DB instance is running
+ `DBNAME` – The database that you want to access
+ `SSLCERTIFICATE` – The full path to the SSL certificate for Amazon RDS

  For `ssl_ca` (MySQL) or `sslrootcert` (PostgreSQL), specify an SSL certificate. To download an SSL certificate, see [Using SSL/TLS to encrypt a connection to a DB instance or cluster ](UsingWithRDS.SSL.md).

**Note**  
You cannot use a custom Route 53 DNS record instead of the DB instance endpoint to generate the authentication token.

This code connects to a MariaDB or MySQL DB instance.

Before running this code, install the PyMySQL driver by following the instructions in the [Python Package Index](https://pypi.org/project/PyMySQL/).

```
import pymysql
import sys
import boto3
import os

ENDPOINT="mysqldb.123456789012.us-east-1.rds.amazonaws.com"
PORT=3306
USER="jane_doe"
REGION="us-east-1"
DBNAME="mydb"
os.environ['LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN'] = '1'

#gets the credentials from .aws/credentials
session = boto3.Session(profile_name='default')
client = session.client('rds')

token = client.generate_db_auth_token(DBHostname=ENDPOINT, Port=PORT, DBUsername=USER, Region=REGION)

try:
    conn =  pymysql.connect(auth_plugin_map={'mysql_clear_password':None},host=ENDPOINT, user=USER, password=token, port=PORT, database=DBNAME, ssl_ca='SSLCERTIFICATE', ssl_verify_identity=True, ssl_verify_cert=True)
    cur = conn.cursor()
    cur.execute("""SELECT now()""")
    query_results = cur.fetchall()
    print(query_results)
except Exception as e:
    print("Database connection failed due to {}".format(e))
```

This code connects to a PostgreSQL DB instance.

Before running this code, install `psycopg2` by following the instructions in [Psycopg documentation](https://pypi.org/project/psycopg2/).

```
import psycopg2
import sys
import boto3
import os

ENDPOINT="postgresmydb.123456789012.us-east-1.rds.amazonaws.com"
PORT=5432
USER="jane_doe"
REGION="us-east-1"
DBNAME="mydb"

#gets the credentials from .aws/credentials
session = boto3.Session(profile_name='RDSCreds')
client = session.client('rds')

token = client.generate_db_auth_token(DBHostname=ENDPOINT, Port=PORT, DBUsername=USER, Region=REGION)

try:
    conn = psycopg2.connect(host=ENDPOINT, port=PORT, database=DBNAME, user=USER, password=token, sslrootcert="SSLCERTIFICATE")
    cur = conn.cursor()
    cur.execute("""SELECT now()""")
    query_results = cur.fetchall()
    print(query_results)
except Exception as e:
    print("Database connection failed due to {}".format(e))
```

If you want to connect to a DB instance through a proxy, see [Connecting to a database using IAM authentication](rds-proxy-connecting.md#rds-proxy-connecting-iam).

# Troubleshooting for IAM DB authentication
<a name="UsingWithRDS.IAMDBAuth.Troubleshooting"></a>

Following, you can find troubleshooting ideas for some common IAM DB authentication issues and information on CloudWatch logs and metrics for IAM DB authentication.

## Exporting IAM DB authentication error logs to CloudWatch Logs
<a name="UsingWithRDS.IAMDBAuth.Troubleshooting.ErrorLogs"></a>

IAM DB authentication error logs are stored on the database host, and you can export these logs to your CloudWatch Logs account. Use the logs and remediation methods on this page to troubleshoot IAM DB authentication issues.

You can enable log exports to CloudWatch Logs from the console, AWS CLI, and RDS API. For console instructions, see [Publishing database logs to Amazon CloudWatch Logs](USER_LogAccess.Procedural.UploadtoCloudWatch.md).

To export your IAM DB authentication error logs to CloudWatch Logs when creating a DB instance from the AWS CLI, use the following command:

```
aws rds create-db-instance --db-instance-identifier mydbinstance \
--region us-east-1 \
--db-instance-class db.t3.large \
--allocated-storage 50 \
--engine postgres \
--engine-version 16 \
--port 5432 \
--master-username master \
--master-user-password password \
--publicly-accessible \
--enable-iam-database-authentication \
--enable-cloudwatch-logs-exports=iam-db-auth-error
```

To export your IAM DB authentication error logs to CloudWatch Logs when modifying a DB instance from the AWS CLI, use the following command:

```
aws rds modify-db-instance --db-instance-identifier mydbinstance \
--region us-east-1 \
--cloudwatch-logs-export-configuration '{"EnableLogTypes":["iam-db-auth-error"]}'
```

To verify that your DB instance is exporting IAM DB authentication logs to CloudWatch Logs, check that the `EnabledCloudwatchLogsExports` parameter includes `iam-db-auth-error` in the output of the `describe-db-instances` command.

```
aws rds describe-db-instances --region us-east-1 --db-instance-identifier mydbinstance
            ...
            
             "EnabledCloudwatchLogsExports": [
                "iam-db-auth-error"
            ],
            ...
```
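You can also perform this check programmatically. The following sketch parses a `describe-db-instances` response; the `describe_response` dict mirrors the output shape shown above, and the commented Boto3 call shows one way to obtain it (the instance identifier and Region are placeholders; adapt them to your environment).

```python
def iam_auth_logs_enabled(describe_response):
    """Return True if the first DB instance in a describe-db-instances
    response exports IAM DB authentication error logs to CloudWatch Logs."""
    instance = describe_response["DBInstances"][0]
    return "iam-db-auth-error" in instance.get("EnabledCloudwatchLogsExports", [])

# Obtaining the response with Boto3 (requires AWS credentials):
# import boto3
# rds = boto3.client("rds", region_name="us-east-1")
# response = rds.describe_db_instances(DBInstanceIdentifier="mydbinstance")
# print(iam_auth_logs_enabled(response))
```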

## IAM DB authentication CloudWatch metrics
<a name="UsingWithRDS.IAMDBAuth.Troubleshooting.CWMetrics"></a>

Amazon RDS delivers near-real-time metrics about IAM DB authentication to your Amazon CloudWatch account. The following table lists the IAM DB authentication metrics available in CloudWatch:


| Metric | Description | 
| --- | --- | 
|  `IamDbAuthConnectionRequests`  |  Total number of connection requests made with IAM DB authentication.  | 
|  `IamDbAuthConnectionSuccess`  |  Total number of successful IAM DB authentication requests.  | 
|  `IamDbAuthConnectionFailure`  |  Total number of failed IAM DB authentication requests.  | 
|  `IamDbAuthConnectionFailureInvalidToken`  | Total number of failed IAM DB authentication requests due to invalid token. | 
|  `IamDbAuthConnectionFailureInsufficientPermissions`  |  Total number of failed IAM DB authentication requests due to incorrect policies or permissions.  | 
|  `IamDbAuthConnectionFailureThrottling`  |  Total number of failed IAM DB authentication requests due to IAM DB authentication throttling.  | 
|  `IamDbAuthConnectionFailureServerError`  |  Total number of failed IAM DB authentication requests due to an internal server error in the IAM DB authentication feature.  | 
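To read these metrics from your own tooling, you can query CloudWatch. The following sketch builds a `get_metric_statistics` request for one of the metrics in the table. It assumes the metrics are published in the `AWS/RDS` namespace with a `DBInstanceIdentifier` dimension; verify the namespace and dimensions for your instance before relying on this.

```python
from datetime import datetime, timedelta, timezone

def iam_auth_metric_query(instance_id, metric_name, minutes=60):
    """Build keyword arguments for CloudWatch get_metric_statistics that
    sum an IAM DB authentication metric over the last `minutes` minutes."""
    end = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/RDS",  # assumed namespace for these metrics
        "MetricName": metric_name,
        "Dimensions": [{"Name": "DBInstanceIdentifier", "Value": instance_id}],
        "StartTime": end - timedelta(minutes=minutes),
        "EndTime": end,
        "Period": 300,           # 5-minute buckets
        "Statistics": ["Sum"],
    }

# Usage (requires AWS credentials):
# import boto3
# cw = boto3.client("cloudwatch", region_name="us-east-1")
# stats = cw.get_metric_statistics(
#     **iam_auth_metric_query("mydbinstance", "IamDbAuthConnectionFailure"))
```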

## Common issues and solutions
<a name="UsingWithRDS.IAMDBAuth.Troubleshooting.IssuesSolutions"></a>

You might encounter the following issues when using IAM DB authentication. Use the remediation steps in the table to resolve them:


| Error | Metric(s) | Cause | Solution | 
| --- | --- | --- | --- | 
|  `[ERROR] Failed to authenticate the connection request for user db_user because the provided token is malformed or otherwise invalid. (Status Code: 400, Error Code: InvalidToken)`  |  `IamDbAuthConnectionFailure` `IamDbAuthConnectionFailureInvalidToken`  |  The IAM DB authentication token in the connection request is either not a valid SigV4 token, or it is not formatted correctly.  |  Check the token generation strategy in your application, and make sure that you pass the token with valid formatting. Truncating the token or using incorrect string formatting makes the token invalid.  | 
|  `[ERROR] Failed to authenticate the connection request for user db_user because the token age is longer than 15 minutes. (Status Code: 400, Error Code:ExpiredToken)`  |  `IamDbAuthConnectionFailure` `IamDbAuthConnectionFailureInvalidToken`  |  The IAM DB authentication token has expired. Tokens are only valid for 15 minutes.  |  Check your token caching and/or token re-use logic in your application. You should not re-use tokens that are older than 15 minutes.  | 
|  `[ERROR] Failed to authorize the connection request for user db_user because the IAM policy assumed by the caller 'arn:aws:sts::123456789012:assumed-role/ <RoleName>/ <RoleSession>' is not authorized to perform `rds-db:connect` on the DB instance. (Status Code: 403, Error Code:NotAuthorized)`  |  `IamDbAuthConnectionFailure` `IamDbAuthConnectionFailureInsufficientPermissions`  |  This error might be due to the following reasons: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Troubleshooting.html)  |  Verify that the IAM role and/or policy you are assuming in your application. Make sure you assume the same policy to generate the token as to connect to the DB.   | 
|  `[ERROR] Failed to authorize the connection request for user db_user due to IAM DB authentication throttling. (Status Code: 429, Error Code: ThrottlingException)`  |  `IamDbAuthConnectionFailure` `IamDbAuthConnectionFailureThrottling`  | You are making too many connection requests to your DB in a short amount of time. IAM DB authentication throttling limit is 200 connections per second. |  Reduce the rate of establishing new connections with IAM authentication. Consider implementing connection pooling using RDS Proxy in order to reuse established connections in your application.  | 
|  `[ERROR] Failed to authorize the connection request for user db_user due to an internal IAM DB authentication error. (Status Code: 500, Error Code: InternalError)`  |  `IamDbAuthConnectionFailure` `IamDbAuthConnectionFailureThrottling` |  There was an internal error while authorizing the DB conneciton with IAM DB authentication.  |  Reach out to https://aws.amazon.com/premiumsupport/ to investigate the issue.  | 
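
Because tokens expire after 15 minutes, a common remediation for both the `ExpiredToken` and throttling errors above is to cache and reuse each token for slightly less than its lifetime. The following Python sketch shows one way to do that; the generator callable (for example, a wrapper around boto3's `generate_db_auth_token`) is supplied by you:

```python
import time

TOKEN_LIFETIME_SECONDS = 15 * 60  # IAM DB auth tokens expire after 15 minutes

class TokenCache:
    """Reuse an IAM DB auth token until shortly before its 15-minute expiry."""

    def __init__(self, generate, margin_seconds=60):
        self._generate = generate      # e.g. lambda: rds.generate_db_auth_token(...)
        self._margin = margin_seconds  # regenerate this long before actual expiry
        self._token = None
        self._issued_at = 0.0

    def _expired(self, now):
        return now - self._issued_at > TOKEN_LIFETIME_SECONDS - self._margin

    def get(self, now=None):
        now = time.time() if now is None else now
        if self._token is None or self._expired(now):
            self._token = self._generate()
            self._issued_at = now
        return self._token
```

A typical generator would be `lambda: rds.generate_db_auth_token(DBHostname=host, Port=3306, DBUsername="db_user")`, where `rds` is a boto3 RDS client; caching this way also reduces the chance of hitting the 200-requests-per-second throttling limit.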

# Troubleshooting Amazon RDS identity and access
<a name="security_iam_troubleshoot"></a>

Use the following information to help you diagnose and fix common issues that you might encounter when working with Amazon RDS and IAM.

**Topics**
+ [I'm not authorized to perform an action in Amazon RDS](#security_iam_troubleshoot-no-permissions)
+ [I'm not authorized to perform iam:PassRole](#security_iam_troubleshoot-passrole)
+ [I want to allow people outside of my AWS account to access my Amazon RDS resources](#security_iam_troubleshoot-cross-account-access)

## I'm not authorized to perform an action in Amazon RDS
<a name="security_iam_troubleshoot-no-permissions"></a>

If the AWS Management Console tells you that you're not authorized to perform an action, then you must contact your administrator for assistance. Your administrator is the person that provided you with your sign-in credentials.

The following example error occurs when the `mateojackson` user tries to use the console to view details about a *widget* but does not have `rds:GetWidget` permissions.

```
User: arn:aws:iam::123456789012:user/mateojackson is not authorized to perform: rds:GetWidget on resource: my-example-widget
```

In this case, Mateo asks his administrator to update his policies to allow him to access the `my-example-widget` resource using the `rds:GetWidget` action.

## I'm not authorized to perform iam:PassRole
<a name="security_iam_troubleshoot-passrole"></a>

If you receive an error that you're not authorized to perform the `iam:PassRole` action, then you must contact your administrator for assistance. Your administrator is the person that provided you with your sign-in credentials. Ask that person to update your policies to allow you to pass a role to Amazon RDS.

Some AWS services allow you to pass an existing role to that service, instead of creating a new service role or service-linked role. To do this, you must have permissions to pass the role to the service.

The following example error occurs when a user named `marymajor` tries to use the console to perform an action in Amazon RDS. However, the action requires the service to have permissions granted by a service role. Mary does not have permissions to pass the role to the service.

```
User: arn:aws:iam::123456789012:user/marymajor is not authorized to perform: iam:PassRole
```

In this case, Mary asks her administrator to update her policies to allow her to perform the `iam:PassRole` action.
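
For illustration, an administrator could grant this with a policy similar to the following; the account ID and role name are placeholders, and the `iam:PassedToService` condition restricts the grant to roles passed to Amazon RDS:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::123456789012:role/rds-monitoring-role",
      "Condition": {
        "StringEquals": { "iam:PassedToService": "rds.amazonaws.com" }
      }
    }
  ]
}
```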

## I want to allow people outside of my AWS account to access my Amazon RDS resources
<a name="security_iam_troubleshoot-cross-account-access"></a>

You can create a role that users in other accounts or people outside of your organization can use to access your resources. You can specify who is trusted to assume the role. For services that support resource-based policies or access control lists (ACLs), you can use those policies to grant people access to your resources.

To learn more, consult the following:
+ To learn whether Amazon RDS supports these features, see [How Amazon RDS works with IAM](security_iam_service-with-iam.md).
+ To learn how to provide access to your resources across AWS accounts that you own, see [Providing access to an IAM user in another AWS account that you own](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_aws-accounts.html) in the *IAM User Guide*.
+ To learn how to provide access to your resources to third-party AWS accounts, see [Providing access to AWS accounts owned by third parties](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html) in the *IAM User Guide*.
+ To learn how to provide access through identity federation, see [Providing access to externally authenticated users (identity federation)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_federated-users.html) in the *IAM User Guide*.
+ To learn the difference between using roles and resource-based policies for cross-account access, see [How IAM roles differ from resource-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_compare-resource-policies.html) in the *IAM User Guide*.

# Logging and monitoring in Amazon RDS
<a name="Overview.LoggingAndMonitoring"></a>

Monitoring is an important part of maintaining the reliability, availability, and performance of Amazon RDS and your AWS solutions. You should collect monitoring data from all of the parts of your AWS solution so that you can more easily debug a multi-point failure if one occurs. AWS provides several tools for monitoring your Amazon RDS resources and responding to potential incidents:

**Amazon CloudWatch Alarms**  
Using Amazon CloudWatch alarms, you watch a single metric over a time period that you specify. If the metric exceeds a given threshold, a notification is sent to an Amazon SNS topic or AWS Auto Scaling policy. CloudWatch alarms do not invoke actions simply because they are in a particular state. Rather, the state must have changed and been maintained for a specified number of periods.
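
As an illustrative sketch (not an official recommendation), the following Python code builds a `PutMetricAlarm` request for a hypothetical high-CPU alarm on a DB instance. The SNS topic ARN and identifier are placeholders, and `cloudwatch` is a boto3 CloudWatch client that you supply:

```python
def build_cpu_alarm(db_instance_id, topic_arn, threshold=80.0):
    """CloudWatch PutMetricAlarm request: the alarm fires only after the state
    is sustained for three consecutive 5-minute periods over the threshold."""
    return {
        "AlarmName": f"{db_instance_id}-high-cpu",
        "Namespace": "AWS/RDS",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "DBInstanceIdentifier", "Value": db_instance_id}],
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 3,  # state must persist before the alarm invokes actions
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],
    }

def create_cpu_alarm(cloudwatch, db_instance_id, topic_arn):
    # cloudwatch is a boto3 CloudWatch client, e.g. boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(**build_cpu_alarm(db_instance_id, topic_arn))
```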

**AWS CloudTrail Logs**  
CloudTrail provides a record of actions taken by a user, role, or an AWS service in Amazon RDS . CloudTrail captures all API calls for Amazon RDS as events, including calls from the console and from code calls to Amazon RDS API operations. Using the information collected by CloudTrail, you can determine the request that was made to Amazon RDS , the IP address from which the request was made, who made the request, when it was made, and additional details. For more information, see [Monitoring Amazon RDS API calls in AWS CloudTrail](logging-using-cloudtrail.md) .

**Enhanced Monitoring**  
 Amazon RDS provides metrics in real time for the operating system (OS) that your DB instance runs on. You can view the metrics for your DB instance using the console, or consume the Enhanced Monitoring JSON output from Amazon CloudWatch Logs in a monitoring system of your choice. For more information, see [Monitoring OS metrics with Enhanced Monitoring](USER_Monitoring.OS.md) .

**Amazon RDS Performance Insights**  
Performance Insights expands on existing Amazon RDS monitoring features to illustrate your database's performance and help you analyze any issues that affect it. With the Performance Insights dashboard, you can visualize the database load and filter the load by waits, SQL statements, hosts, or users. For more information, see [Monitoring DB load with Performance Insights on Amazon RDS](USER_PerfInsights.md) .

**Database Logs**  
You can view, download, and watch database logs using the AWS Management Console, AWS CLI, or RDS API. For more information, see [Monitoring Amazon RDS log files](USER_LogAccess.md) .

**Amazon RDS Recommendations**  
Amazon RDS provides automated recommendations for database resources. These recommendations provide best practice guidance by analyzing DB instance configuration, usage, and performance data. For more information, see [Recommendations from Amazon RDS](monitoring-recommendations.md).

**Amazon RDS Event Notification**  
Amazon RDS uses the Amazon Simple Notification Service (Amazon SNS) to provide notification when an Amazon RDS event occurs. These notifications can be in any notification form supported by Amazon SNS for an AWS Region, such as an email, a text message, or a call to an HTTP endpoint. For more information, see [Working with Amazon RDS event notification](USER_Events.md).

**AWS Trusted Advisor**  
Trusted Advisor draws upon best practices learned from serving hundreds of thousands of AWS customers. Trusted Advisor inspects your AWS environment and then makes recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. All AWS customers have access to five Trusted Advisor checks. Customers with a Business or Enterprise support plan can view all Trusted Advisor checks.   
Trusted Advisor has the following Amazon RDS -related checks:  
+  Amazon RDS Idle DB Instances
+  Amazon RDS Security Group Access Risk
+  Amazon RDS Backups
+  Amazon RDS Multi-AZ
For more information on these checks, see [Trusted Advisor best practices (checks)](https://aws.amazon.com/premiumsupport/trustedadvisor/best-practices/). 

For more information about monitoring Amazon RDS , see [Monitoring metrics in an Amazon RDS instance](CHAP_Monitoring.md) .

# Compliance validation for Amazon RDS
<a name="RDS-compliance"></a>

Third-party auditors assess the security and compliance of Amazon RDS as part of multiple AWS compliance programs. These include SOC, PCI, FedRAMP, HIPAA, and others. 

For a list of AWS services in scope of specific compliance programs, see [AWS services in scope by compliance program](https://aws.amazon.com/compliance/services-in-scope/). For general information, see [AWS compliance programs](https://aws.amazon.com/compliance/programs/).

You can download third-party audit reports using AWS Artifact. For more information, see [Downloading reports in AWS Artifact](https://docs.aws.amazon.com/artifact/latest/ug/downloading-documents.html). 

Your compliance responsibility when using Amazon RDS is determined by the sensitivity of your data, your organization's compliance objectives, and applicable laws and regulations. AWS provides the following resources to help with compliance: 
+ [Security and compliance quick start guides](https://aws.amazon.com/quickstart/?awsf.quickstart-homepage-filter=categories%23security-identity-compliance) – These deployment guides discuss architectural considerations and provide steps for deploying security- and compliance-focused baseline environments on AWS.
+ [Architecting for HIPAA Security and Compliance on Amazon Web Services ](https://docs.aws.amazon.com/pdfs/whitepapers/latest/architecting-hipaa-security-and-compliance-on-aws/architecting-hipaa-security-and-compliance-on-aws.pdf) – This whitepaper describes how companies can use AWS to create HIPAA-compliant applications.
+ [AWS compliance resources](https://aws.amazon.com/compliance/resources/) – This collection of workbooks and guides might apply to your industry and location.
+ [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html) – This AWS service assesses how well your resource configurations comply with internal practices, industry guidelines, and regulations.
+ [AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) – This AWS service provides a comprehensive view of your security state within AWS. Security Hub CSPM uses security controls to evaluate your AWS resources and to check your compliance against security industry standards and best practices. For a list of supported services and controls, see [Security Hub CSPM controls reference](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-controls-reference.html).

# Resilience in Amazon RDS
<a name="disaster-recovery-resiliency"></a>

The AWS global infrastructure is built around AWS Regions and Availability Zones. AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low-latency, high-throughput, and highly redundant networking. With Availability Zones, you can design and operate applications and databases that automatically fail over between Availability Zones without interruption. Availability Zones are more highly available, fault tolerant, and scalable than traditional single or multiple data center infrastructures. 

For more information about AWS Regions and Availability Zones, see [AWS global infrastructure](https://aws.amazon.com/about-aws/global-infrastructure/).

In addition to the AWS global infrastructure, Amazon RDS offers features to help support your data resiliency and backup needs.

## Backup and restore
<a name="disaster-recovery-resiliency.backup-restore"></a>

Amazon RDS creates and saves automated backups of your DB instance. Amazon RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases.

Amazon RDS creates automated backups of your DB instance during the backup window of your DB instance. Amazon RDS saves the automated backups of your DB instance according to the backup retention period that you specify. If necessary, you can recover your database to any point in time during the backup retention period. You can also back up your DB instance manually, by manually creating a DB snapshot.

You can create a DB instance by restoring from this DB snapshot as a disaster recovery solution if the source DB instance fails.
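
Assuming a boto3 RDS client, the manual snapshot-then-restore path might be sketched as follows; identifiers are placeholders, and the snapshot must be available before the restore:

```python
def create_snapshot_request(db_instance_id, snapshot_id):
    """RDS CreateDBSnapshot request for a manual, instance-level snapshot."""
    return {"DBInstanceIdentifier": db_instance_id, "DBSnapshotIdentifier": snapshot_id}

def restore_request(snapshot_id, new_instance_id):
    """RDS RestoreDBInstanceFromDBSnapshot request; restores into a new DB instance."""
    return {"DBSnapshotIdentifier": snapshot_id, "DBInstanceIdentifier": new_instance_id}

def snapshot_and_restore(rds, source_id, snapshot_id, new_id):
    # rds is a boto3 RDS client, e.g. boto3.client("rds")
    rds.create_db_snapshot(**create_snapshot_request(source_id, snapshot_id))
    # wait until the snapshot is "available" before restoring from it
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=snapshot_id)
    rds.restore_db_instance_from_db_snapshot(**restore_request(snapshot_id, new_id))
```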

For more information, see [Backing up, restoring, and exporting data](CHAP_CommonTasks.BackupRestore.md).

## Replication
<a name="disaster-recovery-resiliency.replication"></a>

Amazon RDS uses the MariaDB, MySQL, Oracle, and PostgreSQL DB engines' built-in replication functionality to create a special type of DB instance called a read replica from a source DB instance. Updates made to the source DB instance are asynchronously copied to the read replica. You can reduce the load on your source DB instance by routing read queries from your applications to the read replica. Using read replicas, you can elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can promote a read replica to a standalone instance as a disaster recovery solution if the source DB instance fails. For some DB engines, Amazon RDS also supports other replication options.
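
For example, with a boto3 RDS client, creating a read replica and later promoting it for disaster recovery could be sketched as follows (identifiers are placeholders):

```python
def build_read_replica_request(source_id, replica_id, availability_zone=None):
    """RDS CreateDBInstanceReadReplica request; the Availability Zone is optional."""
    req = {"SourceDBInstanceIdentifier": source_id, "DBInstanceIdentifier": replica_id}
    if availability_zone:
        req["AvailabilityZone"] = availability_zone
    return req

def create_read_replica(rds, source_id, replica_id):
    # rds is a boto3 RDS client; updates flow to the replica asynchronously
    rds.create_db_instance_read_replica(**build_read_replica_request(source_id, replica_id))
    # Later, if the source fails, promote the replica to a standalone instance:
    # rds.promote_read_replica(DBInstanceIdentifier=replica_id)
```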

For more information, see [Working with DB instance read replicas](USER_ReadRepl.md).

## Failover
<a name="disaster-recovery-resiliency.failover"></a>

Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. Amazon RDS uses several different technologies to provide failover support. Multi-AZ deployments for Oracle, PostgreSQL, MySQL, and MariaDB DB instances use Amazon's failover technology. SQL Server DB instances use SQL Server Database Mirroring (DBM).

For more information, see [Configuring and managing a Multi-AZ deployment for Amazon RDS](Concepts.MultiAZ.md).

# Infrastructure security in Amazon RDS
<a name="infrastructure-security"></a>

As a managed service, Amazon Relational Database Service is protected by AWS global network security. For information about AWS security services and how AWS protects infrastructure, see [AWS Cloud Security](https://aws.amazon.com/security/). To design your AWS environment using the best practices for infrastructure security, see [Infrastructure Protection](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/infrastructure-protection.html) in *Security Pillar AWS Well‐Architected Framework*.

You use AWS published API calls to access Amazon RDS through the network. Clients must support the following:
+ Transport Layer Security (TLS). We require TLS 1.2 and recommend TLS 1.3.
+ Cipher suites with perfect forward secrecy (PFS) such as DHE (Ephemeral Diffie-Hellman) or ECDHE (Elliptic Curve Ephemeral Diffie-Hellman). Most modern systems such as Java 7 and later support these modes.

In addition, Amazon RDS offers features to help support infrastructure security.

## Security groups
<a name="infrastructure-security.security-groups"></a>

Security groups control the access that traffic has in and out of a DB instance. By default, network access is turned off to a DB instance. You can specify rules in a security group that allow access from an IP address range, port, or security group. After ingress rules are configured, the same rules apply to all DB instances that are associated with that security group.

For more information, see [Controlling access with security groups](Overview.RDSSecurityGroups.md).

## Public accessibility
<a name="infrastructure-security.publicly-accessible"></a>

When you launch a DB instance inside a virtual private cloud (VPC) based on the Amazon VPC service, you can turn public accessibility on or off for that DB instance. The *Public accessibility* parameter designates whether the DB instance has a DNS name that resolves to a public IP address, and therefore whether there is public access to the DB instance. You can turn public accessibility on or off at any time by modifying this parameter.
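
Assuming a boto3 RDS client, turning public accessibility off might look like the following sketch (the identifier is a placeholder):

```python
def build_public_access_update(db_instance_id, publicly_accessible):
    """RDS ModifyDBInstance request that changes only the Public accessibility setting."""
    return {
        "DBInstanceIdentifier": db_instance_id,
        "PubliclyAccessible": publicly_accessible,
        "ApplyImmediately": True,  # apply now instead of waiting for the maintenance window
    }

def set_public_accessibility(rds, db_instance_id, publicly_accessible):
    # rds is a boto3 RDS client, e.g. boto3.client("rds")
    rds.modify_db_instance(**build_public_access_update(db_instance_id, publicly_accessible))
```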

For more information, see [Hiding a DB instance in a VPC from the internet](USER_VPC.WorkingWithRDSInstanceinaVPC.md#USER_VPC.Hiding).

**Note**  
If your DB instance is in a VPC but isn't publicly accessible, you can also use an AWS Site-to-Site VPN connection or an AWS Direct Connect connection to access it from a private network. For more information, see [Internetwork traffic privacy](inter-network-traffic-privacy.md).

# Amazon RDS API and interface VPC endpoints (AWS PrivateLink)
<a name="vpc-interface-endpoints"></a>

You can establish a private connection between your VPC and Amazon RDS API endpoints by creating an *interface VPC endpoint*. Interface endpoints are powered by [AWS PrivateLink](https://aws.amazon.com/privatelink). 

AWS PrivateLink enables you to privately access Amazon RDS API operations without an internet gateway, NAT device, VPN connection, or Direct Connect connection. DB instances in your VPC don't need public IP addresses to communicate with Amazon RDS API endpoints to launch, modify, or terminate DB instances. Your DB instances also don't need public IP addresses to use any of the available RDS API operations. Traffic between your VPC and Amazon RDS doesn't leave the Amazon network. 

Each interface endpoint is represented by one or more elastic network interfaces in your subnets. For more information on elastic network interfaces, see [Elastic network interfaces](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html) in the *Amazon EC2 User Guide.* 

For more information about VPC endpoints, see [Interface VPC endpoints (AWS PrivateLink)](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html) in the *Amazon VPC User Guide*. For more information about RDS API operations, see [Amazon RDS API Reference](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/).

You don't need an interface VPC endpoint to connect to a DB instance. For more information, see [Scenarios for accessing a DB instance in a VPC](USER_VPC.Scenarios.md).

## Considerations for VPC endpoints
<a name="vpc-endpoint-considerations"></a>

Before you set up an interface VPC endpoint for Amazon RDS API endpoints, ensure that you review [Interface endpoint properties and limitations](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#vpce-interface-limitations) in the *Amazon VPC User Guide*. 

All RDS API operations relevant to managing Amazon RDS resources are available from your VPC using AWS PrivateLink.

VPC endpoint policies are supported for RDS API endpoints. By default, full access to RDS API operations is allowed through the endpoint. For more information, see [Controlling access to services with VPC endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-access.html) in the *Amazon VPC User Guide*.

## Availability
<a name="rds-and-vpc-interface-endpoints-availability"></a>

Amazon RDS API currently supports VPC endpoints in the following AWS Regions:
+ US East (Ohio)
+ US East (N. Virginia)
+ US West (N. California)
+ US West (Oregon)
+ Africa (Cape Town)
+ Asia Pacific (Hong Kong)
+ Asia Pacific (Mumbai)
+ Asia Pacific (New Zealand)
+ Asia Pacific (Osaka)
+ Asia Pacific (Seoul)
+ Asia Pacific (Singapore)
+ Asia Pacific (Sydney)
+ Asia Pacific (Taipei)
+ Asia Pacific (Thailand)
+ Asia Pacific (Tokyo)
+ Canada (Central)
+ Canada West (Calgary)
+ China (Beijing)
+ China (Ningxia)
+ Europe (Frankfurt)
+ Europe (Zurich)
+ Europe (Ireland)
+ Europe (London)
+ Europe (Paris)
+ Europe (Stockholm)
+ Europe (Milan)
+ Israel (Tel Aviv)
+ Mexico (Central)
+ Middle East (Bahrain)
+ South America (São Paulo)
+ AWS GovCloud (US-East)
+ AWS GovCloud (US-West)

## Creating an interface VPC endpoint for Amazon RDS API
<a name="vpc-endpoint-create"></a>

You can create a VPC endpoint for the Amazon RDS API using either the Amazon VPC console or the AWS Command Line Interface (AWS CLI). For more information, see [Creating an interface endpoint](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#create-interface-endpoint) in the *Amazon VPC User Guide*.

Create a VPC endpoint for Amazon RDS API using the service name `com.amazonaws.region.rds`.
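
For example, with a boto3 EC2 client, creating the interface endpoint might look like the following sketch; the VPC, subnet, and security group IDs are placeholders:

```python
def build_rds_endpoint_request(vpc_id, subnet_ids, security_group_ids, region="us-east-1"):
    """EC2 CreateVpcEndpoint request for the Amazon RDS API interface endpoint."""
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.rds",
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": security_group_ids,
        "PrivateDnsEnabled": True,  # lets the default regional RDS DNS name resolve privately
    }

def create_rds_endpoint(ec2, vpc_id, subnet_ids, security_group_ids, region):
    # ec2 is a boto3 EC2 client, e.g. boto3.client("ec2")
    return ec2.create_vpc_endpoint(
        **build_rds_endpoint_request(vpc_id, subnet_ids, security_group_ids, region))
```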

Excluding AWS Regions in China, if you enable private DNS for the endpoint, you can make API requests to Amazon RDS with the VPC endpoint using its default DNS name for the AWS Region, for example `rds.us-east-1.amazonaws.com`. For the China (Beijing) and China (Ningxia) AWS Regions, you can make API requests with the VPC endpoint using `rds-api---cn-north-1.amazonaws.com.rproxy.goskope.com.cn` and `rds-api---cn-northwest-1.amazonaws.com.rproxy.goskope.com.cn`, respectively. 

For more information, see [Accessing a service through an interface endpoint](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#access-service-though-endpoint) in the *Amazon VPC User Guide*.

## Creating a VPC endpoint policy for Amazon RDS API
<a name="vpc-endpoint-policy"></a>

You can attach an endpoint policy to your VPC endpoint that controls access to Amazon RDS API. The policy specifies the following information:
+ The principal that can perform actions.
+ The actions that can be performed.
+ The resources on which actions can be performed.

For more information, see [Controlling access to services with VPC endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-access.html) in the *Amazon VPC User Guide*. 

**Example: VPC endpoint policy for Amazon RDS API actions**  
The following is an example of an endpoint policy for Amazon RDS API. When attached to an endpoint, this policy grants access to the listed Amazon RDS API actions for all principals on all resources.

```
{
   "Statement":[
      {
         "Principal":"*",
         "Effect":"Allow",
         "Action":[
            "rds:CreateDBInstance",
            "rds:ModifyDBInstance",
            "rds:CreateDBSnapshot"
         ],
         "Resource":"*"
      }
   ]
}
```

**Example: VPC endpoint policy that denies all access from a specified AWS account**  
The following VPC endpoint policy denies AWS account `123456789012` all access to resources using the endpoint. The policy allows all actions from other accounts.

```
{
  "Statement": [
    {
      "Action": "*",
      "Effect": "Allow",
      "Resource": "*",
      "Principal": "*"
    },
    {
      "Action": "*",
      "Effect": "Deny",
      "Resource": "*",
      "Principal": { "AWS": [ "123456789012" ] }
     }
   ]
}
```

# Security best practices for Amazon RDS
<a name="CHAP_BestPractices.Security"></a>

Use AWS Identity and Access Management (IAM) accounts to control access to Amazon RDS API operations, especially operations that create, modify, or delete Amazon RDS resources. Such resources include DB instances, security groups, and parameter groups. Also use IAM to control actions that perform common administrative tasks such as backing up and restoring DB instances. 
+ Create an individual user for each person who manages Amazon RDS resources, including yourself. Don't use AWS root credentials to manage Amazon RDS resources.
+ Grant each user the minimum set of permissions required to perform his or her duties.
+ Use IAM groups to effectively manage permissions for multiple users.
+ Rotate your IAM credentials regularly.
+ Configure AWS Secrets Manager to automatically rotate the secrets for Amazon RDS . For more information, see [Rotating your AWS Secrets Manager secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html) in the *AWS Secrets Manager User Guide*. You can also retrieve the credential from AWS Secrets Manager programmatically. For more information, see [Retrieving the secret value](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_retrieve-secret.html) in the *AWS Secrets Manager User Guide*. 

For more information about Amazon RDS security, see [Security in Amazon RDS ](UsingWithRDS.md) . For more information about IAM, see [AWS Identity and Access Management](https://docs.aws.amazon.com/IAM/latest/UserGuide/Welcome.html). For information on IAM best practices, see [IAM best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/IAMBestPractices.html). 

AWS Security Hub CSPM uses security controls to evaluate resource configurations and security standards to help you comply with various compliance frameworks. For more information about using Security Hub CSPM to evaluate RDS resources, see [Amazon Relational Database Service controls](https://docs.aws.amazon.com/securityhub/latest/userguide/rds-controls.html) in the AWS Security Hub User Guide.

You can monitor your usage of RDS as it relates to security best practices by using Security Hub CSPM. For more information, see [What is AWS Security Hub CSPM?](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html). 

Use the AWS Management Console, the AWS CLI, or the RDS API to change the password for your master user. If you use another tool, such as a SQL client, to change the master user password, it might result in privileges being revoked for the user unintentionally.
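
Assuming a boto3 RDS client, a master password change through the RDS API might look like the following sketch (identifiers and the password are placeholders):

```python
def build_master_password_update(db_instance_id, new_password):
    """RDS ModifyDBInstance request that changes only the master user password."""
    return {
        "DBInstanceIdentifier": db_instance_id,
        "MasterUserPassword": new_password,
        "ApplyImmediately": True,  # apply now instead of waiting for the maintenance window
    }

def change_master_password(rds, db_instance_id, new_password):
    # rds is a boto3 RDS client; this path keeps the master user's privileges intact
    rds.modify_db_instance(**build_master_password_update(db_instance_id, new_password))
```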

# Controlling access with security groups
<a name="Overview.RDSSecurityGroups"></a>

VPC security groups control the access that traffic has in and out of a DB instance . By default, network access is turned off for a DB instance . You can specify rules in a security group that allow access from an IP address range, port, or security group. After ingress rules are configured, the same rules apply to all DB instances that are associated with that security group. You can specify up to 20 rules in a security group.

## Overview of VPC security groups
<a name="Overview.RDSSecurityGroups.VPCSec"></a>

Each VPC security group rule makes it possible for a specific source to access a DB instance in a VPC that is associated with that VPC security group. The source can be a range of addresses (for example, 203.0.113.0/24), or another VPC security group. By specifying a VPC security group as the source, you allow incoming traffic from all instances (typically application servers) that use the source VPC security group.

VPC security groups can have rules that govern both inbound and outbound traffic. However, the outbound traffic rules typically don't apply to DB instances. Outbound traffic rules apply only if the DB instance acts as a client. For example, outbound traffic rules apply to an Oracle DB instance with outbound database links. You must use the [Amazon EC2 API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/Welcome.html) or the **Security Group** option on the VPC console to create VPC security groups. 

When you create rules for your VPC security group that allow access to the instances in your VPC, you must specify a port for each range of addresses that the rule allows access for. For example, if you want to turn on Secure Shell (SSH) access for instances in the VPC, create a rule allowing access to TCP port 22 for the specified range of addresses.

You can configure multiple VPC security groups that allow access to different ports for different instances in your VPC. For example, you can create a VPC security group that allows access to TCP port 80 for web servers in your VPC. You can then create another VPC security group that allows access to TCP port 3306 for RDS for MySQL DB instances in your VPC.

For more information on VPC security groups, see [Security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) in the *Amazon Virtual Private Cloud User Guide*. 

**Note**  
If your DB instance is in a VPC but isn't publicly accessible, you can also use an AWS Site-to-Site VPN connection or an AWS Direct Connect connection to access it from a private network. For more information, see [Internetwork traffic privacy](inter-network-traffic-privacy.md).

## Security group scenario
<a name="Overview.RDSSecurityGroups.Scenarios"></a>

A common use of a DB instance in a VPC is to share data with an application server running in an Amazon EC2 instance in the same VPC, which is accessed by a client application outside the VPC. For this scenario, you use the RDS and VPC pages on the AWS Management Console or the RDS and EC2 API operations to create the necessary instances and security groups: 

1. Create a VPC security group (for example, `sg-0123ec2example`) and define inbound rules that use the IP addresses of the client application as the source. This security group allows your client application to connect to EC2 instances in a VPC that uses this security group.

1. Create an EC2 instance for the application and add the EC2 instance to the VPC security group (`sg-0123ec2example`) that you created in the previous step.

1. Create a second VPC security group (for example, `sg-6789rdsexample`) and create a new rule by specifying the VPC security group that you created in step 1 (`sg-0123ec2example`) as the source.

1. Create a new DB instance and add the DB instance to the VPC security group (`sg-6789rdsexample`) that you created in the previous step. When you create the DB instance, use the same port number as the one specified for the VPC security group (`sg-6789rdsexample`) rule that you created in step 3.
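The steps above can be sketched with the AWS CLI. The VPC ID, security group IDs, client address range, port 443 for the client application, and all instance settings below are placeholder assumptions, and in practice you would substitute the IDs returned by each `create-security-group` call:

```shell
# Step 1: security group for the application tier, allowing the client
# application's address range inbound (203.0.113.0/24 is a placeholder)
aws ec2 create-security-group \
    --group-name app-sg --description "App servers" --vpc-id vpc-1234example
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123ec2example \
    --protocol tcp --port 443 \
    --cidr 203.0.113.0/24

# Step 3: security group for the DB tier; the source is the
# application security group, not an address range
aws ec2 create-security-group \
    --group-name db-sg --description "RDS DB instances" --vpc-id vpc-1234example
aws ec2 authorize-security-group-ingress \
    --group-id sg-6789rdsexample \
    --protocol tcp --port 3306 \
    --source-group sg-0123ec2example

# Step 4: create the DB instance in the DB security group,
# using the same port as the rule created in step 3
aws rds create-db-instance \
    --db-instance-identifier mydb \
    --db-instance-class db.t3.micro \
    --engine mysql \
    --master-username admin --master-user-password change-me-example \
    --allocated-storage 20 \
    --port 3306 \
    --vpc-security-group-ids sg-6789rdsexample
```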

The following diagram shows this scenario.

![\[DB instance and EC2 instance in a VPC\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/con-VPC-sec-grp.png)


For detailed instructions about configuring a VPC for this scenario, see [Tutorial: Create a VPC for use with a DB instance (IPv4 only)](CHAP_Tutorials.WebServerDB.CreateVPC.md). For more information about using a VPC, see [Amazon VPC and Amazon RDS](USER_VPC.md).

## Creating a VPC security group
<a name="Overview.RDSSecurityGroups.Create"></a>

You can create a VPC security group for a DB instance by using the VPC console. For information about creating a security group, see [Provide access to your DB instance in your VPC by creating a security group](CHAP_SettingUp.md#CHAP_SettingUp.SecurityGroup) and [Security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) in the *Amazon Virtual Private Cloud User Guide*. 

## Associating a security group with a DB instance
<a name="Overview.RDSSecurityGroups.Associate"></a>

You can associate a security group with a DB instance by using **Modify** on the RDS console, the `ModifyDBInstance` Amazon RDS API, or the `modify-db-instance` AWS CLI command.

The following CLI example associates a specific VPC security group with a DB instance and removes any DB security groups from it:

```
aws rds modify-db-instance --db-instance-identifier dbName --vpc-security-group-ids sg-ID
```

For information about modifying a DB instance, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md). For security group considerations when you restore a DB instance from a DB snapshot, see [Security group considerations](USER_RestoreFromSnapshot.md#USER_RestoreFromSnapshot.Security).
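To confirm which VPC security groups are associated after such a modification, you might query the instance. This is a sketch; `dbName` is the same placeholder identifier used in the example above:

```shell
# List the VPC security groups currently attached to the DB instance
aws rds describe-db-instances \
    --db-instance-identifier dbName \
    --query 'DBInstances[0].VpcSecurityGroups'
```

Each entry in the output includes a `VpcSecurityGroupId` and a `Status` (for example, `active`).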

**Note**  
The RDS console displays different security group rule names for your database if the **Port** value is set to a non-default value.

For RDS for Oracle DB instances, you can associate additional security groups by populating the security group option setting for the Oracle Enterprise Manager Database Express (OEM), Oracle Management Agent for Enterprise Manager Cloud Control (OEM Agent), and Oracle Secure Sockets Layer options. In this case, both the security groups associated with the DB instance and the option settings apply to the DB instance. For more information about these options, see [Oracle Enterprise Manager](Oracle.Options.OEM.md), [Oracle Management Agent for Enterprise Manager Cloud Control](Oracle.Options.OEMAgent.md), and [Oracle Secure Sockets Layer](Appendix.Oracle.Options.SSL.md).

# Master user account privileges
<a name="UsingWithRDS.MasterAccounts"></a>

When you create a new DB instance, the default master user that you use gets certain privileges for that DB instance. You can't change the master user name after the DB instance is created.

**Important**  
We strongly recommend that you do not use the master user directly in your applications. Instead, adhere to the best practice of using a database user created with the minimal privileges required for your application.

**Note**  
If you accidentally delete the permissions for the master user, you can restore them by modifying the DB instance and setting a new master user password. For more information about modifying a DB instance, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).
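Restoring the master user's privileges this way comes down to a single modification call. In this sketch, the instance identifier and password are placeholders:

```shell
# Setting a new master user password restores the master user's privileges
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --master-user-password new-password-example \
    --apply-immediately
```

Without `--apply-immediately`, the password change is applied during the next maintenance window.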

The following table shows the privileges and database roles the master user gets for each of the database engines. 


|  Database engine  |  System privilege  |  Database role  | 
| --- | --- | --- | 
|  RDS for Db2  |  The master user is assigned to the `masterdba` group and assigned the `master_user_role`. `SYSMON`, `DBADM` with `DATAACCESS` and `ACCESSCTRL`, `BINDADD`, `CONNECT`, `CREATETAB`, `CREATE_SECURE_OBJECT`, `EXPLAIN`, `IMPLICIT_SCHEMA`, `LOAD`, `SQLADM`, `WLMADM`  |  `DBA`, `DBA_RESTRICTED`, `DEVELOPER`, `ROLE_NULLID_PACKAGES`, `ROLE_PROCEDURES`, `ROLE_TABLESPACES` For more information, see [Amazon RDS for Db2 default roles](db2-default-roles.md).  | 
|  RDS for MariaDB  |  `SELECT`, `INSERT`, `UPDATE`, `DELETE`, `CREATE`, `DROP`, `RELOAD`, `PROCESS`, `REFERENCES`, `INDEX`, `ALTER`, `SHOW DATABASES`, `CREATE TEMPORARY TABLES`, `LOCK TABLES`, `EXECUTE`, `REPLICATION CLIENT`, `CREATE VIEW`, `SHOW VIEW`, `CREATE ROUTINE`, `ALTER ROUTINE`, `CREATE USER`, `EVENT`, `TRIGGER`, `REPLICATION SLAVE` Starting with RDS for MariaDB version 11.4, the master user also gets the `SHOW CREATE ROUTINE` privilege.  |  —  | 
|  RDS for MySQL 8.0.36 and higher  |  `SELECT`, `INSERT`, `UPDATE`, `DELETE`, `CREATE`, `DROP`, `RELOAD`, `PROCESS`, `REFERENCES`, `INDEX`, `ALTER`, `SHOW DATABASES`, `CREATE TEMPORARY TABLES`, `LOCK TABLES`, `EXECUTE`, `REPLICATION SLAVE`, `REPLICATION CLIENT`, `CREATE VIEW`, `SHOW VIEW`, `CREATE ROUTINE`, `ALTER ROUTINE`, `CREATE USER`, `EVENT`, `TRIGGER`, `CREATE ROLE`, `DROP ROLE`, `APPLICATION_PASSWORD_ADMIN`, `ROLE_ADMIN`, `SET_USER_ID`, `XA_RECOVER_ADMIN`  |  `rds_superuser_role` For more information about `rds_superuser_role`, see [Role-based privilege model for RDS for MySQL](Appendix.MySQL.CommonDBATasks.privilege-model.md).  | 
|  RDS for MySQL versions lower than 8.0.36  |  `SELECT`, `INSERT`, `UPDATE`, `DELETE`, `CREATE`, `DROP`, `RELOAD`, `PROCESS`, `REFERENCES`, `INDEX`, `ALTER`, `SHOW DATABASES`, `CREATE TEMPORARY TABLES`, `LOCK TABLES`, `EXECUTE`, `REPLICATION CLIENT`, `CREATE VIEW`, `SHOW VIEW`, `CREATE ROUTINE`, `ALTER ROUTINE`, `CREATE USER`, `EVENT`, `TRIGGER`, `REPLICATION SLAVE`  |  —  | 
|  RDS for PostgreSQL  |  `CREATE ROLE`, `CREATE DB`, `PASSWORD VALID UNTIL INFINITY`, `CREATE EXTENSION`, `ALTER EXTENSION`, `DROP EXTENSION`, `CREATE TABLESPACE`, `ALTER <OBJECT> OWNER`, `CHECKPOINT`, `PG_CANCEL_BACKEND()`, `PG_TERMINATE_BACKEND()`, `SELECT PG_STAT_REPLICATION`, `EXECUTE PG_STAT_STATEMENTS_RESET()`, `OWN POSTGRES_FDW_HANDLER()`, `OWN POSTGRES_FDW_VALIDATOR()`, `OWN POSTGRES_FDW`, `EXECUTE PG_BUFFERCACHE_PAGES()`, `SELECT PG_BUFFERCACHE`  |  `RDS_SUPERUSER` For more information about `RDS_SUPERUSER`, see [Understanding PostgreSQL roles and permissions](Appendix.PostgreSQL.CommonDBATasks.Roles.md).  | 
|  RDS for Oracle  |  `ADMINISTER DATABASE TRIGGER`, `ALTER DATABASE LINK`, `ALTER PUBLIC DATABASE LINK`, `AUDIT SYSTEM`, `CHANGE NOTIFICATION`, `DROP ANY DIRECTORY`, `EXEMPT ACCESS POLICY`, `EXEMPT IDENTITY POLICY`, `EXEMPT REDACTION POLICY`, `FLASHBACK ANY TABLE`, `GRANT ANY OBJECT PRIVILEGE`, `RESTRICTED SESSION`, `SELECT ANY TABLE`, `UNLIMITED TABLESPACE`  |  `DBA` The `DBA` role is exempt from the following privileges: `ALTER DATABASE`, `ALTER SYSTEM`, `CREATE ANY DIRECTORY`, `CREATE EXTERNAL JOB`, `CREATE PLUGGABLE DATABASE`, `GRANT ANY PRIVILEGE`, `GRANT ANY ROLE`, `READ ANY FILE GROUP`  | 
|  Amazon RDS for Microsoft SQL Server  |  `ADMINISTER BULK OPERATIONS`, `ALTER ANY CONNECTION`, `ALTER ANY CREDENTIAL`, `ALTER ANY EVENT SESSION`, `ALTER ANY LINKED SERVER`, `ALTER ANY LOGIN`, `ALTER ANY SERVER AUDIT`, `ALTER ANY SERVER ROLE`, `ALTER SERVER STATE`, `ALTER TRACE`, `CONNECT SQL`, `CREATE ANY DATABASE`, `VIEW ANY DATABASE`, `VIEW ANY DEFINITION`, `VIEW SERVER STATE`, `ALTER ON ROLE SQLAgentOperatorRole`  |  `DB_OWNER` (database-level role), `PROCESSADMIN` (server-level role), `SETUPADMIN` (server-level role), `SQLAgentUserRole` (database-level role), `SQLAgentReaderRole` (database-level role), and `SQLAgentOperatorRole` (database-level role)  | 

# Using service-linked roles for Amazon RDS
<a name="UsingWithRDS.IAM.ServiceLinkedRoles"></a>

Amazon RDS uses AWS Identity and Access Management (IAM) [service-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-service-linked-role). A service-linked role is a unique type of IAM role that is linked directly to Amazon RDS. Service-linked roles are predefined by Amazon RDS and include all the permissions that the service requires to call other AWS services on your behalf. 

A service-linked role makes using Amazon RDS easier because you don't have to manually add the necessary permissions. Amazon RDS defines the permissions of its service-linked roles, and unless defined otherwise, only Amazon RDS can assume its roles. The defined permissions include the trust policy and the permissions policy, and that permissions policy cannot be attached to any other IAM entity.

You can delete the roles only after first deleting their related resources. This protects your Amazon RDS resources because you can't inadvertently remove permission to access the resources.

For information about other services that support service-linked roles, see [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) and look for the services that have **Yes** in the **Service-Linked Role** column. Choose a **Yes** with a link to view the service-linked role documentation for that service.

## Service-linked role permissions for Amazon RDS
<a name="service-linked-role-permissions"></a>

Amazon RDS uses the service-linked role named AWSServiceRoleForRDS to allow Amazon RDS to call AWS services on behalf of your DB instances.

The AWSServiceRoleForRDS service-linked role trusts the following services to assume the role:
+ `rds.amazonaws.com`

This service-linked role has a permissions policy attached to it called `AmazonRDSServiceRolePolicy` that grants it permissions to operate in your account.

For more information about this policy, including the JSON policy document, see [AmazonRDSServiceRolePolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSServiceRolePolicy.html) in the *AWS Managed Policy Reference Guide*.

**Note**  
You must configure permissions to allow an IAM entity (such as a user, group, or role) to create, edit, or delete a service-linked role. If you encounter the following error message:  
**Unable to create the resource. Verify that you have permission to create service linked role. Otherwise wait and try again later.**  
 Make sure you have the following permissions enabled:   

```
{
    "Action": "iam:CreateServiceLinkedRole",
    "Effect": "Allow",
    "Resource": "arn:aws:iam::*:role/aws-service-role/rds.amazonaws.com/AWSServiceRoleForRDS",
    "Condition": {
        "StringLike": {
            "iam:AWSServiceName":"rds.amazonaws.com"
        }
    }
}
```
 For more information, see [Service-linked role permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#service-linked-role-permissions) in the *IAM User Guide*.

### Creating a service-linked role for Amazon RDS
<a name="create-service-linked-role"></a>

You don't need to manually create a service-linked role. When you create a DB instance, Amazon RDS creates the service-linked role for you. 

**Important**  
If you were using the Amazon RDS service before December 1, 2017, when it began supporting service-linked roles, then Amazon RDS created the AWSServiceRoleForRDS role in your account. To learn more, see [A new role appeared in my AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_roles.html#troubleshoot_roles_new-role-appeared).

If you delete this service-linked role, and then need to create it again, you can use the same process to recreate the role in your account. When you create a DB instance, Amazon RDS creates the service-linked role for you again.
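Creating a DB instance is the usual trigger, but you can also recreate the role directly with the IAM CLI if you need it before provisioning anything:

```shell
# Manually recreate the Amazon RDS service-linked role.
# Normally unnecessary: RDS creates it when you create a DB instance.
aws iam create-service-linked-role \
    --aws-service-name rds.amazonaws.com
```

If the role already exists, this call fails with an `InvalidInput` error stating that the role has been taken, which is harmless.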

### Editing a service-linked role for Amazon RDS
<a name="edit-service-linked-role"></a>

Amazon RDS does not allow you to edit the AWSServiceRoleForRDS service-linked role. After you create a service-linked role, you cannot change the name of the role because various entities might reference the role. However, you can edit the description of the role using IAM. For more information, see [Editing a service-linked role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#edit-service-linked-role) in the *IAM User Guide*.

### Deleting a service-linked role for Amazon RDS
<a name="delete-service-linked-role"></a>

If you no longer need to use a feature or service that requires a service-linked role, we recommend that you delete that role. That way you don't have an unused entity that is not actively monitored or maintained. However, you must delete all of your DB instances before you can delete the service-linked role.

#### Cleaning up a service-linked role
<a name="service-linked-role-review-before-delete"></a>

Before you can use IAM to delete a service-linked role, you must first confirm that the role has no active sessions and remove any resources used by the role.

**To check whether the service-linked role has an active session in the IAM console**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane of the IAM console, choose **Roles**. Then choose the name (not the check box) of the AWSServiceRoleForRDS role.

1. On the **Summary** page for the chosen role, choose the **Last Accessed** tab.

1. On the **Last Accessed** tab, review recent activity for the service-linked role.
**Note**  
If you are unsure whether Amazon RDS is using the AWSServiceRoleForRDS role, you can try to delete the role. If the service is using the role, then the deletion fails and you can view the AWS Regions where the role is being used. If the role is being used, then you must wait for the session to end before you can delete the role. You cannot revoke the session for a service-linked role. 

If you want to remove the AWSServiceRoleForRDS role, you must first delete *all* of your DB instances.

##### Deleting all of your instances
<a name="delete-service-linked-role.delete-rds-instances"></a>

Use one of these procedures to delete each of your instances.

**To delete an instance (console)**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the instance that you want to delete.

1. For **Actions**, choose **Delete**.

1. If you are prompted for **Create final Snapshot?**, choose **Yes** or **No**.

1. If you chose **Yes** in the previous step, for **Final snapshot name** enter the name of your final snapshot.

1. Choose **Delete**.

**To delete an instance (CLI)**  
See `[delete-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/delete-db-instance.html)` in the *AWS CLI Command Reference*.

**To delete an instance (API)**  
See `[DeleteDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DeleteDBInstance.html)` in the *Amazon RDS API Reference*.
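For reference, a CLI deletion mirrors the console prompts about the final snapshot. The instance identifier and snapshot name here are placeholders:

```shell
# Delete a DB instance and keep a final snapshot
aws rds delete-db-instance \
    --db-instance-identifier mydbinstance \
    --final-db-snapshot-identifier mydbinstance-final-snapshot

# Or delete without a final snapshot (data is not recoverable afterward)
aws rds delete-db-instance \
    --db-instance-identifier mydbinstance \
    --skip-final-snapshot
```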

You can use the IAM console, the IAM CLI, or the IAM API to delete the AWSServiceRoleForRDS service-linked role. For more information, see [Deleting a service-linked role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#delete-service-linked-role) in the *IAM User Guide*.

## Service-linked role permissions for Amazon RDS Custom
<a name="slr-permissions-custom"></a>

Amazon RDS Custom uses the service-linked role named `AWSServiceRoleForRDSCustom` to allow RDS Custom to call AWS services on behalf of your RDS DB resources.

The AWSServiceRoleForRDSCustom service-linked role trusts the following services to assume the role:
+ `custom.rds.amazonaws.com`

This service-linked role has a permissions policy attached to it called `AmazonRDSCustomServiceRolePolicy` that grants it permissions to operate in your account.

Creating, editing, or deleting the service-linked role for RDS Custom works the same as for Amazon RDS. For more information, see [AWS managed policy: AmazonRDSCustomServiceRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSCustomServiceRolePolicy).

**Note**  
You must configure permissions to allow an IAM entity (such as a user, group, or role) to create, edit, or delete a service-linked role. If you encounter the following error message:  
**Unable to create the resource. Verify that you have permission to create service linked role. Otherwise wait and try again later.**  
 Make sure you have the following permissions enabled:   

```
{
    "Action": "iam:CreateServiceLinkedRole",
    "Effect": "Allow",
    "Resource": "arn:aws:iam::*:role/aws-service-role/custom.rds.amazonaws.com/AmazonRDSCustomServiceRolePolicy",
    "Condition": {
        "StringLike": {
            "iam:AWSServiceName":"custom.rds.amazonaws.com"
        }
    }
}
```
 For more information, see [Service-linked role permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#service-linked-role-permissions) in the *IAM User Guide*.

## Service-linked role permissions for Amazon RDS Beta
<a name="slr-permissions-rdsbeta"></a>

Amazon RDS uses the service-linked role named `AWSServiceRoleForRDSBeta` to allow Amazon RDS to call AWS services on behalf of your RDS DB resources.

The AWSServiceRoleForRDSBeta service-linked role trusts the following services to assume the role:
+ `rds.amazonaws.com`

This service-linked role has a permissions policy attached to it called `AmazonRDSBetaServiceRolePolicy` that grants it permissions to operate in your account. For more information, see [AWS managed policy: AmazonRDSBetaServiceRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSBetaServiceRolePolicy).

**Note**  
You must configure permissions to allow an IAM entity (such as a user, group, or role) to create, edit, or delete a service-linked role. If you encounter the following error message:  
**Unable to create the resource. Verify that you have permission to create service linked role. Otherwise wait and try again later.**  
 Make sure you have the following permissions enabled:   

```
{
    "Action": "iam:CreateServiceLinkedRole",
    "Effect": "Allow",
    "Resource": "arn:aws:iam::*:role/aws-service-role/custom.rds.amazonaws.com/AmazonRDSBetaServiceRolePolicy",
    "Condition": {
        "StringLike": {
            "iam:AWSServiceName":"custom.rds.amazonaws.com"
        }
    }
}
```
 For more information, see [Service-linked role permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#service-linked-role-permissions) in the *IAM User Guide*.

## Service-linked role for Amazon RDS Preview
<a name="slr-permissions-rdspreview"></a>

Amazon RDS uses the service-linked role named `AWSServiceRoleForRDSPreview` to allow Amazon RDS to call AWS services on behalf of your RDS DB resources.

The AWSServiceRoleForRDSPreview service-linked role trusts the following services to assume the role:
+ `rds.amazonaws.com`

This service-linked role has a permissions policy attached to it called `AmazonRDSPreviewServiceRolePolicy` that grants it permissions to operate in your account. For more information, see [AWS managed policy: AmazonRDSPreviewServiceRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSPreviewServiceRolePolicy).

**Note**  
You must configure permissions to allow an IAM entity (such as a user, group, or role) to create, edit, or delete a service-linked role. If you encounter the following error message:  
**Unable to create the resource. Verify that you have permission to create service linked role. Otherwise wait and try again later.**  
 Make sure you have the following permissions enabled:   

```
{
    "Action": "iam:CreateServiceLinkedRole",
    "Effect": "Allow",
    "Resource": "arn:aws:iam::*:role/aws-service-role/custom.rds.amazonaws.com/AmazonRDSPreviewServiceRolePolicy",
    "Condition": {
        "StringLike": {
            "iam:AWSServiceName":"custom.rds.amazonaws.com"
        }
    }
}
```
 For more information, see [Service-linked role permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#service-linked-role-permissions) in the *IAM User Guide*.

# Amazon VPC and Amazon RDS
<a name="USER_VPC"></a>

Amazon Virtual Private Cloud (Amazon VPC) makes it possible for you to launch AWS resources, such as Amazon RDS DB instances, into a virtual private cloud (VPC). 

When you use a VPC, you have control over your virtual networking environment. You can choose your own IP address range, create subnets, and configure routing and access control lists. There is no additional cost to run your DB instance in a VPC. 

Accounts have a default VPC. All new DB instances are created in the default VPC unless you specify otherwise.

**Topics**
+ [Working with a DB instance in a VPC](USER_VPC.WorkingWithRDSInstanceinaVPC.md)
+ [Updating the VPC for a DB instance](USER_VPC.VPC2VPC.md)
+ [Scenarios for accessing a DB instance in a VPC](USER_VPC.Scenarios.md)
+ [Tutorial: Create a VPC for use with a DB instance (IPv4 only)](CHAP_Tutorials.WebServerDB.CreateVPC.md)
+ [Tutorial: Create a VPC for use with a DB instance (dual-stack mode)](CHAP_Tutorials.CreateVPCDualStack.md)
+ [Moving a DB instance not in a VPC into a VPC](USER_VPC.Non-VPC2VPC.md)

Following, you can find a discussion about VPC functionality relevant to Amazon RDS DB instances. For more information about Amazon VPC, see [Amazon VPC Getting Started Guide](https://docs.aws.amazon.com/AmazonVPC/latest/GettingStartedGuide/) and [Amazon VPC User Guide](https://docs.aws.amazon.com/vpc/latest/userguide/).

# Working with a DB instance in a VPC
<a name="USER_VPC.WorkingWithRDSInstanceinaVPC"></a>

Your DB instance is in a virtual private cloud (VPC). A VPC is a virtual network that is logically isolated from other virtual networks in the AWS Cloud. Amazon VPC makes it possible for you to launch AWS resources, such as an Amazon RDS DB instance or Amazon EC2 instance, into a VPC. The VPC can either be a default VPC that comes with your account or one that you create. All VPCs are associated with your AWS account. 

Your default VPC has three subnets that you can use to isolate resources inside the VPC. The default VPC also has an internet gateway that can be used to provide access to resources inside the VPC from outside the VPC. 

For a list of scenarios involving Amazon RDS DB instances in a VPC and outside of a VPC, see [Scenarios for accessing a DB instance in a VPC](USER_VPC.Scenarios.md). 

**Topics**
+ [Working with a DB instance in a VPC](#Overview.RDSVPC.Create)
+ [VPC encryption control](#USER_VPC.EncryptionControl)
+ [Working with DB subnet groups](#USER_VPC.Subnets)
+ [Shared subnets](#USER_VPC.Shared_subnets)
+ [Amazon RDS IP addressing](#USER_VPC.IP_addressing)
+ [Hiding a DB instance in a VPC from the internet](#USER_VPC.Hiding)
+ [Creating a DB instance in a VPC](#USER_VPC.InstanceInVPC)

In the following tutorials, you can learn to create a VPC that you can use for a common Amazon RDS scenario:
+ [Tutorial: Create a VPC for use with a DB instance (IPv4 only)](CHAP_Tutorials.WebServerDB.CreateVPC.md)
+ [Tutorial: Create a VPC for use with a DB instance (dual-stack mode)](CHAP_Tutorials.CreateVPCDualStack.md)

## Working with a DB instance in a VPC
<a name="Overview.RDSVPC.Create"></a>

Here are some tips on working with a DB instance in a VPC:
+ Your VPC must have at least two subnets. These subnets must be in two different Availability Zones in the AWS Region where you want to deploy your DB instance. A *subnet* is a segment of a VPC's IP address range that you can specify and that you can use to group DB instances based on your security and operational needs. 

  For Multi-AZ deployments, defining subnets for two or more Availability Zones in an AWS Region allows Amazon RDS to create a new standby in another Availability Zone as needed. Do this even for Single-AZ deployments, in case you want to convert them to Multi-AZ deployments later.
**Note**  
The DB subnet group for a Local Zone can have only one subnet.
+ If you want your DB instance in the VPC to be publicly accessible, make sure to turn on the VPC attributes *DNS hostnames* and *DNS resolution*. 
+ Your VPC must have a DB subnet group that you create. You create a DB subnet group by specifying the subnets you created. Amazon RDS chooses a subnet and an IP address within that subnet group to associate with your DB instance. The DB instance uses the Availability Zone that contains the subnet.
+ Your VPC must have a VPC security group that allows access to the DB instance.

  For more information, see [Scenarios for accessing a DB instance in a VPC](USER_VPC.Scenarios.md).
+ The CIDR blocks in each of your subnets must be large enough to accommodate spare IP addresses for Amazon RDS to use during maintenance activities, including failover and compute scaling. For example, ranges such as 10.0.0.0/24 and 10.0.1.0/24 are typically large enough.
+ A VPC can have an *instance tenancy* attribute of either *default* or *dedicated*. All default VPCs have the instance tenancy attribute set to default, and a default VPC can support any DB instance class.

  If you choose to have your DB instance in a dedicated VPC where the instance tenancy attribute is set to dedicated, the DB instance class of your DB instance must be one of the approved Amazon EC2 dedicated instance types. For example, the r5.large EC2 dedicated instance corresponds to the db.r5.large DB instance class. For information about instance tenancy in a VPC, see [Dedicated instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-instance.html) in the *Amazon Elastic Compute Cloud User Guide*.

  For more information about the instance types that can be in a dedicated instance, see [Amazon EC2 dedicated instances](https://aws.amazon.com/ec2/purchasing-options/dedicated-instances/) on the Amazon EC2 pricing page. 
**Note**  
When you set the instance tenancy attribute to dedicated for a DB instance, it doesn't guarantee that the DB instance will run on a dedicated host.
+ When an option group is assigned to a DB instance, it's associated with the DB instance's VPC. This linkage means that you can't use the option group assigned to a DB instance if you attempt to restore the DB instance into a different VPC.
+ If you restore a DB instance into a different VPC, make sure to either assign the default option group to the DB instance, assign an option group that is linked to that VPC, or create a new option group and assign it to the DB instance. With persistent or permanent options, such as Oracle TDE, you must create a new option group that includes the persistent or permanent option when restoring a DB instance into a different VPC.
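The networking prerequisites in the list above (a VPC, at least two subnets in different Availability Zones, and a DB subnet group) can be sketched with the AWS CLI. The IDs, CIDR ranges, Availability Zones, and names below are placeholder assumptions; in practice you would substitute the IDs returned by each `create-*` call:

```shell
# Create a VPC with an IPv4 CIDR block
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Create two subnets in different Availability Zones
aws ec2 create-subnet --vpc-id vpc-1234example \
    --cidr-block 10.0.0.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-1234example \
    --cidr-block 10.0.1.0/24 --availability-zone us-east-1b

# Group the subnets into a DB subnet group for Amazon RDS
aws rds create-db-subnet-group \
    --db-subnet-group-name mydbsubnetgroup \
    --db-subnet-group-description "Subnets for my DB instance" \
    --subnet-ids subnet-aaaaexample subnet-bbbbexample
```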

## VPC encryption control
<a name="USER_VPC.EncryptionControl"></a>

VPC encryption controls allow you to enforce encryption-in-transit for all network traffic within your VPCs. Use encryption control to meet regulatory compliance requirements by ensuring that only encryption-capable Nitro-based hardware can be provisioned in designated VPCs. Encryption control also catches compatibility issues at API request time rather than during provisioning. Your existing workloads continue operating and only new incompatible requests are blocked.

Set your VPC encryption controls by setting the VPC control mode to one of the following:
+ *disabled* (default)
+ *monitor*
+ *enforced*

To check the current control mode for your VPC, use the AWS Management Console or the [DescribeVpcs](https://docs.aws.amazon.com//AWSEC2/latest/APIReference/API_DescribeVpcs.html) API operation (or the equivalent `describe-vpcs` AWS CLI command).

If your VPC enforces encryption, you can only provision Nitro-based DB instances that support encryption in transit in that VPC. For more information, see [DB instance class types](Concepts.DBInstanceClass.Types.md). For information about Nitro instances, see [Instances built on the AWS Nitro System](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html) in the *Amazon EC2 User Guide*.

**Note**  
If you try to provision incompatible DB instances in an encryption-enforced VPC, Amazon RDS returns a `VpcEncryptionControlViolationException` exception.

## Working with DB subnet groups
<a name="USER_VPC.Subnets"></a>

*Subnets* are segments of a VPC's IP address range that you designate to group your resources based on security and operational needs. A *DB subnet group* is a collection of subnets (typically private) that you create in a VPC and that you then designate for your DB instances. By using a DB subnet group, you can specify a particular VPC when creating DB instances using the AWS CLI or RDS API. If you use the console, you can choose the VPC and subnet groups you want to use.

Each DB subnet group should have subnets in at least two Availability Zones in a given AWS Region. When creating a DB instance in a VPC, you choose a DB subnet group for it. From the DB subnet group, Amazon RDS chooses a subnet and an IP address within that subnet to associate with the DB instance. The DB instance uses the Availability Zone that contains the subnet. Amazon RDS always assigns an IP address from a subnet that has free IP address space.

If the primary DB instance of a Multi-AZ deployment fails, Amazon RDS can promote the corresponding standby and later create a new standby using an IP address of the subnet in one of the other Availability Zones.

The subnets in a DB subnet group are either public or private, depending on the configuration that you set for their network access control lists (network ACLs) and route tables. For a DB instance to be publicly accessible, all of the subnets in its DB subnet group must be public. If a subnet that's associated with a publicly accessible DB instance changes from public to private, it can affect DB instance availability.

To create a DB subnet group that supports dual-stack mode, make sure that each subnet that you add to the DB subnet group has an Internet Protocol version 6 (IPv6) CIDR block associated with it. For more information, see [Amazon RDS IP addressing](#USER_VPC.IP_addressing) and [Migrating to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) in the *Amazon VPC User Guide.*
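Associating IPv6 CIDR blocks for dual-stack mode might look like the following sketch. The VPC ID, subnet ID, and the /64 range are placeholders; with an Amazon-provided block, you would carve the subnet's /64 out of the range that AWS assigns to the VPC:

```shell
# Add an Amazon-provided IPv6 CIDR block to the VPC
aws ec2 associate-vpc-cidr-block \
    --vpc-id vpc-1234example \
    --amazon-provided-ipv6-cidr-block

# Assign an IPv6 /64 range to each subnet in the DB subnet group
aws ec2 associate-subnet-cidr-block \
    --subnet-id subnet-aaaaexample \
    --ipv6-cidr-block 2600:1f13:4d9:e600::/64
```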

**Note**  
The DB subnet group for a Local Zone can have only one subnet.

When Amazon RDS creates a DB instance in a VPC, it assigns a network interface to your DB instance by using an IP address from your DB subnet group. However, we strongly recommend that you use the Domain Name System (DNS) name to connect to your DB instance. We recommend this because the underlying IP address changes during failover. 

**Note**  
For each DB instance that you run in a VPC, make sure to reserve at least one address in each subnet in the DB subnet group for use by Amazon RDS for recovery actions. 
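Because the underlying IP address can change during failover, applications should connect to the DNS endpoint, which you can look up with the AWS CLI (the instance identifier is a placeholder):

```shell
# Retrieve the DNS endpoint of a DB instance for client connections
aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query 'DBInstances[0].Endpoint.Address' \
    --output text
```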

## Shared subnets
<a name="USER_VPC.Shared_subnets"></a>

You can create a DB instance in a shared VPC.

Keep in mind the following considerations when using shared VPCs:
+ You can move a DB instance from a shared VPC subnet to a non-shared VPC subnet and vice versa.
+ Participants in a shared VPC must create a security group in the VPC before they can create a DB instance.
+ Owners and participants in a shared VPC can access the database by using SQL queries. However, only the creator of a resource can make any API calls on the resource.



## Amazon RDS IP addressing
<a name="USER_VPC.IP_addressing"></a>

IP addresses enable resources in your VPC to communicate with each other, and with resources over the internet. Amazon RDS supports both IPv4 and IPv6 addressing protocols. By default, Amazon RDS and Amazon VPC use the IPv4 addressing protocol. You can't turn off this behavior. When you create a VPC, make sure to specify an IPv4 CIDR block (a range of private IPv4 addresses). You can optionally assign an IPv6 CIDR block to your VPC and subnets, and assign IPv6 addresses from that block to DB instances in your subnet.

Support for the IPv6 protocol expands the number of supported IP addresses. By using the IPv6 protocol, you ensure that you have sufficient available addresses for the future growth of the internet. New and existing RDS resources can use IPv4 and IPv6 addresses within your VPC. Configuring, securing, and translating network traffic between the two protocols used in different parts of an application can cause operational overhead. You can standardize on the IPv6 protocol for Amazon RDS resources to simplify your network configuration.

**Topics**
+ [IPv4 addresses](#USER_VPC.IP_addressing.IPv4)
+ [IPv6 addresses](#USER_VPC.IP_addressing.IPv6)
+ [Dual-stack mode](#USER_VPC.IP_addressing.dual-stack-mode)

### IPv4 addresses
<a name="USER_VPC.IP_addressing.IPv4"></a>

When you create a VPC, you must specify a range of IPv4 addresses for the VPC in the form of a CIDR block, such as `10.0.0.0/16`. A *DB subnet group* defines the range of IP addresses in this CIDR block that a DB instance can use. These IP addresses can be private or public.
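
To make the CIDR arithmetic concrete, here is a minimal Python sketch (standard library only; the CIDR values are hypothetical) that carves two per-Availability-Zone subnets out of a `10.0.0.0/16` VPC block and counts the usable addresses in each:

```python
import ipaddress

# Hypothetical VPC CIDR block, as in the example above
vpc = ipaddress.ip_network("10.0.0.0/16")

# Two hypothetical subnets, one per Availability Zone
subnet_az1 = ipaddress.ip_network("10.0.0.0/24")
subnet_az2 = ipaddress.ip_network("10.0.1.0/24")

# Every subnet in a DB subnet group must fall within the VPC CIDR block
for subnet in (subnet_az1, subnet_az2):
    assert subnet.subnet_of(vpc)
    # AWS reserves 5 addresses per subnet, so a /24 leaves 251 usable
    print(subnet, "usable addresses:", subnet.num_addresses - 5)
```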

A private IPv4 address is an IP address that's not reachable over the internet. You can use private IPv4 addresses for communication between your DB instance and other resources, such as Amazon EC2 instances, in the same VPC. Each DB instance has a private IP address for communication in the VPC.

A public IP address is an IPv4 address that's reachable from the internet. You can use public addresses for communication between your DB instance and resources on the internet, such as a SQL client. You control whether your DB instance receives a public IP address.
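
The private/public distinction can be checked in code. The following Python sketch, using the standard `ipaddress` module with hypothetical addresses, classifies an in-VPC address and an internet-routable one:

```python
import ipaddress

# Hypothetical private address from the VPC CIDR block (RFC 1918 space)
in_vpc = ipaddress.ip_address("10.0.1.25")
# Hypothetical public address, reachable from the internet
on_internet = ipaddress.ip_address("52.95.110.1")

print(in_vpc, "private:", in_vpc.is_private)            # True: usable only inside the VPC
print(on_internet, "private:", on_internet.is_private)  # False: routable on the internet
```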

Amazon RDS uses public Elastic IP (IPv4) addresses from the Amazon EC2 public IPv4 address pool for publicly accessible DB instances. These IP addresses are visible in your AWS account when you use the `describe-addresses` AWS CLI command or API operation, or when you view the **Elastic IPs** (EIP) section in the AWS Management Console. Each RDS-managed IP address is marked with a `service_managed` attribute set to `"rds"`.

While these IP addresses are visible in your account, they remain fully managed by Amazon RDS and can't be modified or released. Amazon RDS releases them back into the public IPv4 address pool when they are no longer in use.

CloudTrail logs API calls related to these RDS-managed Elastic IP addresses, such as `AllocateAddress`. These API calls are invoked by the service principal `rds.amazonaws.com`.

**Note**  
IPs allocated by Amazon RDS do not count against your account's EIP limits.
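
As a sketch of how you might pick out the RDS-managed addresses, the following Python snippet filters an abbreviated, hypothetical `describe-addresses` response. The exact field names in real output may differ, so treat the `ServiceManaged` key here as an assumption:

```python
import json

# Hypothetical, abbreviated describe-addresses output; field names are
# illustrative and may not match your account's actual JSON output.
sample = json.loads("""
{
  "Addresses": [
    {"PublicIp": "52.95.110.1", "ServiceManaged": "rds"},
    {"PublicIp": "52.95.110.2"}
  ]
}
""")

# Keep only the addresses marked as managed by Amazon RDS
rds_managed = [a["PublicIp"] for a in sample["Addresses"]
               if a.get("ServiceManaged") == "rds"]
print(rds_managed)  # → ['52.95.110.1']
```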

For a tutorial that shows you how to create a VPC with only private IPv4 addresses that you can use for a common Amazon RDS scenario, see [Tutorial: Create a VPC for use with a DB instance (IPv4 only)](CHAP_Tutorials.WebServerDB.CreateVPC.md). 

### IPv6 addresses
<a name="USER_VPC.IP_addressing.IPv6"></a>

You can optionally associate an IPv6 CIDR block with your VPC and subnets, and assign IPv6 addresses from that block to the resources in your VPC. Each IPv6 address is globally unique. 

The IPv6 CIDR block for your VPC is automatically assigned from Amazon's pool of IPv6 addresses. You can't choose the range yourself.

When connecting to an IPv6 address, make sure that the following conditions are met:
+ The client is configured so that client-to-database traffic over IPv6 is allowed.
+ The RDS security groups used by the DB instance are configured correctly so that client-to-database traffic over IPv6 is allowed.
+ The client operating system stack allows traffic on the IPv6 address, and operating system drivers and libraries are configured to choose the correct default DB instance endpoint (either IPv4 or IPv6).

For more information about IPv6, see [IP addressing](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html) in the *Amazon VPC User Guide*.

### Dual-stack mode
<a name="USER_VPC.IP_addressing.dual-stack-mode"></a>

A DB instance runs in dual-stack mode when it can communicate over both IPv4 and IPv6 addressing protocols. Resources can then communicate with the DB instance using either IPv4, IPv6, or both protocols. Private dual-stack mode DB instances have IPv6 endpoints that RDS restricts to VPC access only, ensuring your IPv6 endpoints remain private. Public dual-stack mode DB instances provide both IPv4 and IPv6 endpoints that you can access from the internet.

**Topics**
+ [Dual-stack mode and DB subnet groups](#USER_VPC.IP_addressing.dual-stack-db-subnet-groups)
+ [Working with dual-stack mode DB instances](#USER_VPC.IP_addressing.dual-stack-working-with)
+ [Modifying IPv4-only DB instances to use dual-stack mode](#USER_VPC.IP_addressing.dual-stack-modifying-ipv4)
+ [Region and version availability](#USER_VPC.IP_addressing.RegionVersionAvailability)
+ [Limitations for dual-stack network DB instances](#USER_VPC.IP_addressing.dual-stack-limitations)

For a tutorial that shows you how to create a VPC with both IPv4 and IPv6 addresses that you can use for a common Amazon RDS scenario, see [Tutorial: Create a VPC for use with a DB instance (dual-stack mode)](CHAP_Tutorials.CreateVPCDualStack.md). 

#### Dual-stack mode and DB subnet groups
<a name="USER_VPC.IP_addressing.dual-stack-db-subnet-groups"></a>

To use dual-stack mode, make sure that each subnet in the DB subnet group that you associate with the DB instance has an IPv6 CIDR block associated with it. You can create a new DB subnet group or modify an existing DB subnet group to meet this requirement.

After a DB instance is in dual-stack mode, clients can connect to it normally. Make sure that client-side firewalls and RDS DB instance security groups are configured to allow traffic over IPv6. To connect, clients use the DB instance's endpoint. Client applications can specify which protocol is preferred when connecting to a database. In dual-stack mode, the DB instance detects the client's preferred network protocol, either IPv4 or IPv6, and uses that protocol for the connection.

If a DB subnet group stops supporting dual-stack mode because of subnet deletion or CIDR disassociation, there's a risk of an incompatible network state for DB instances that are associated with the DB subnet group. Also, you can't use the DB subnet group when you create a new dual-stack mode DB instance.

To determine whether a DB subnet group supports dual-stack mode by using the AWS Management Console, view the **Network type** on the details page of the DB subnet group. To determine whether a DB subnet group supports dual-stack mode by using the AWS CLI, run the [describe-db-subnet-groups](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-subnet-groups.html) command and view `SupportedNetworkTypes` in the output.
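
For example, the following Python sketch parses an abbreviated, hypothetical `describe-db-subnet-groups` response and checks `SupportedNetworkTypes` for dual-stack support:

```python
import json

# Abbreviated, hypothetical response from:
#   aws rds describe-db-subnet-groups --db-subnet-group-name my-subnet-group
sample = json.loads("""
{
  "DBSubnetGroups": [
    {"DBSubnetGroupName": "my-subnet-group",
     "SupportedNetworkTypes": ["IPV4", "DUAL"]}
  ]
}
""")

group = sample["DBSubnetGroups"][0]
supports_dual_stack = "DUAL" in group["SupportedNetworkTypes"]
print(group["DBSubnetGroupName"], "dual-stack:", supports_dual_stack)
```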

Read replicas are treated as independent DB instances and can have a network type that's different from the primary DB instance. If you change the network type of a read replica's primary DB instance, the read replica isn't affected. When you are restoring a DB instance, you can restore it to any network type that's supported.

#### Working with dual-stack mode DB instances
<a name="USER_VPC.IP_addressing.dual-stack-working-with"></a>

When you create or modify a DB instance, you can specify dual-stack mode to allow your resources to communicate with your DB instance over IPv4, IPv6, or both.

When you use the AWS Management Console to create or modify a DB instance, you can specify dual-stack mode in the **Network type** section. The following image shows the **Network type** section in the console.

![\[Network type section in the console with Dual-stack mode selected.\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/dual-stack-mode.png)


When you use the AWS CLI to create or modify a DB instance, set the `--network-type` option to `DUAL` to use dual-stack mode. When you use the RDS API to create or modify a DB instance, set the `NetworkType` parameter to `DUAL` to use dual-stack mode. When you are modifying the network type of a DB instance, downtime is possible. If dual-stack mode isn't supported by the specified DB engine version or DB subnet group, the `NetworkTypeNotSupported` error is returned.

For more information about creating a DB instance, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md). For more information about modifying a DB instance, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

To determine whether a DB instance is in dual-stack mode by using the console, view the **Network type** on the **Connectivity & security** tab for the DB instance.

#### Modifying IPv4-only DB instances to use dual-stack mode
<a name="USER_VPC.IP_addressing.dual-stack-modifying-ipv4"></a>

You can modify an IPv4-only DB instance to use dual-stack mode. To do so, change the network type of the DB instance. The modification might result in downtime.

We recommend that you change the network type of your Amazon RDS DB instances during a maintenance window. Currently, setting the network type to dual-stack mode when you create a new DB instance isn't supported. You can set the network type manually by using the `modify-db-instance` command. 

Before modifying a DB instance to use dual-stack mode, make sure that its DB subnet group supports dual-stack mode. If the DB subnet group associated with the DB instance doesn't support dual-stack mode, specify a different DB subnet group that supports it when you modify the DB instance. Modifying the DB subnet group of a DB instance can cause downtime.

If you modify the DB subnet group of a DB instance before you change the DB instance to use dual-stack mode, make sure that the DB subnet group is valid for the DB instance before and after the change. 

For RDS for PostgreSQL, RDS for MySQL, RDS for Oracle, and RDS for MariaDB Single-AZ instances, we recommend that you run the [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) command with only the `--network-type` parameter set to `DUAL` to change the network to dual-stack mode. Adding other parameters along with the `--network-type` parameter in the same API call could result in downtime. To modify multiple parameters, ensure that the network type modification is successfully completed before sending another `modify-db-instance` request with other parameters. 
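
One way to keep the network-type change isolated is to build the two requests separately. The following Python sketch shows hypothetical parameter sets for two sequential `modify-db-instance` calls (for example, passed to boto3's `rds.modify_db_instance(**kwargs)`); send the second only after the first modification completes:

```python
# Sketch: split the modification into two sequential requests so the
# network-type change travels alone. The instance identifier and storage
# value are hypothetical.
first_call = {
    "DBInstanceIdentifier": "my-instance",
    "NetworkType": "DUAL",        # the only change in this request
    "ApplyImmediately": True,
}
second_call = {
    "DBInstanceIdentifier": "my-instance",
    "AllocatedStorage": 200,      # any other changes go in a later request
    "ApplyImmediately": True,
}

# The first request must not carry unrelated modifications
assert set(first_call) == {"DBInstanceIdentifier", "NetworkType", "ApplyImmediately"}
```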

Network type modifications for RDS for PostgreSQL, RDS for MySQL, RDS for Oracle, and RDS for MariaDB Multi-AZ DB instances cause brief downtime and trigger a failover, whether you use only the `--network-type` parameter or combine parameters in a `modify-db-instance` command.

Network type modifications on RDS for SQL Server Single-AZ or Multi-AZ DB instances cause downtime, whether you use only the `--network-type` parameter or combine parameters in a `modify-db-instance` command. Network type modifications also cause a failover in a SQL Server Multi-AZ instance.

If you can't connect to the DB instance after the change, make sure that the client and database security firewalls and route tables are accurately configured to allow traffic to the database on the selected network (either IPv4 or IPv6). You might also need to modify operating system parameters, libraries, or drivers to connect using an IPv6 address.

When you modify a DB instance to use dual-stack mode, there can't be a pending change from a Single-AZ deployment to a Multi-AZ deployment, or from a Multi-AZ deployment to a Single-AZ deployment.

**To modify an IPv4-only DB instance to use dual-stack mode**

1. Modify a DB subnet group to support dual-stack mode, or create a DB subnet group that supports dual-stack mode:

   1. Associate an IPv6 CIDR block with your VPC.

      For instructions, see [ Add an IPv6 CIDR block to your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/modify-vpcs.html#vpc-associate-ipv6-cidr) in the *Amazon VPC User Guide*.

   1. Attach the IPv6 CIDR block to all of the subnets in your DB subnet group.

      For instructions, see [ Add an IPv6 CIDR block to your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/modify-subnets.html#subnet-associate-ipv6-cidr) in the *Amazon VPC User Guide*.

   1. Confirm that the DB subnet group supports dual-stack mode.

      If you are using the AWS Management Console, select the DB subnet group, and make sure that the **Supported network types** value is **Dual, IPv4**.

      If you are using the AWS CLI, run the [describe-db-subnet-groups](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-subnet-groups.html) command, and make sure that the `SupportedNetworkTypes` value for the DB subnet group includes `DUAL`.

1. Modify the security group associated with the DB instance to allow IPv6 connections to the database, or create a new security group that allows IPv6 connections.

   For instructions, see [ Security group rules](https://docs.aws.amazon.com/vpc/latest/userguide/security-group-rules.html) in the *Amazon VPC User Guide*.

1. Modify the DB instance to support dual-stack mode. To do so, set the **Network type** to **Dual-stack mode**.

   If you are using the console, make sure that the following settings are correct:
   + **Network type** – **Dual-stack mode**  
![\[Network type section in the console with Dual-stack mode selected.\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/dual-stack-mode.png)
   + **DB subnet group** – The DB subnet group that you configured in a previous step
   + **Security group** – The security group that you configured in a previous step

   If you are using the AWS CLI, make sure that the following settings are correct:
   + `--network-type` – `dual`
   + `--db-subnet-group-name` – The DB subnet group that you configured in a previous step
   + `--vpc-security-group-ids` – The VPC security group that you configured in a previous step

   For example: 

   ```
   aws rds modify-db-instance --db-instance-identifier my-instance --network-type "DUAL"
   ```

1. Confirm that the DB instance supports dual-stack mode.

   If you are using the console, choose the **Connectivity & security** tab for the DB instance. On that tab, make sure that the **Network type** value is **Dual-stack mode**.

   If you are using the AWS CLI, run the [describe-db-instances](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html) command, and make sure that the `NetworkType` value for the DB instance is `dual`.

   Run the `dig` command on the DB instance endpoint to identify the IPv6 address associated with it.

   ```
   dig db-instance-endpoint AAAA
   ```

   Use the DB instance endpoint, not the IPv6 address, to connect to the DB instance.

#### Region and version availability
<a name="USER_VPC.IP_addressing.RegionVersionAvailability"></a>

Feature availability and support vary across specific versions of each database engine and across AWS Regions. For more information on version and Region availability with dual-stack mode, see [Supported Regions and DB engines for dual-stack mode in Amazon RDS](Concepts.RDS_Fea_Regions_DB-eng.Feature.DualStackMode.md). 

#### Limitations for dual-stack network DB instances
<a name="USER_VPC.IP_addressing.dual-stack-limitations"></a>

The following limitations apply to dual-stack network DB instances:
+ DB instances can't use the IPv6 protocol exclusively. They can use IPv4 exclusively, or they can use both the IPv4 and IPv6 protocols (dual-stack mode).
+ Amazon RDS doesn't support native IPv6 subnets.
+ For RDS for SQL Server, Always On availability group (AG) listener endpoints on dual-stack mode DB instances present only IPv4 addresses.
+ You can't use RDS Proxy with dual-stack mode DB instances.
+ You can't use dual-stack mode with RDS on AWS Outposts DB instances.
+ You can't use dual-stack mode with DB instances in a Local Zone.

## Hiding a DB instance in a VPC from the internet
<a name="USER_VPC.Hiding"></a>

One common Amazon RDS scenario is to have a VPC in which you have an Amazon EC2 instance with a public-facing web application and a DB instance with a database that isn't publicly accessible. For example, you can create a VPC that has a public subnet and a private subnet. EC2 instances that function as web servers can be deployed in the public subnet. The DB instances are deployed in the private subnet. In such a deployment, only the web servers have access to the DB instances. For an illustration of this scenario, see [A DB instance in a VPC accessed by an Amazon EC2 instance in the same VPC](USER_VPC.Scenarios.md#USER_VPC.Scenario1). 

When you launch a DB instance inside a VPC, the DB instance has a private IP address for traffic inside the VPC. This private IP address isn't publicly accessible. You can use the **Public access** option to designate whether the DB instance also has a public IP address in addition to the private IP address. If the DB instance is designated as publicly accessible, its DNS endpoint resolves to the private IP address from within the VPC, and to the public IP address from outside of the VPC. Access to the DB instance is ultimately controlled by the security group it uses. Public access isn't permitted unless the security group assigned to the DB instance includes inbound rules that permit it. In addition, for a DB instance to be publicly accessible, the subnets in its DB subnet group must have an internet gateway. For more information, see [Can't connect to Amazon RDS DB instance](CHAP_Troubleshooting.md#CHAP_Troubleshooting.Connecting).

You can modify a DB instance to turn on or off public accessibility by modifying the **Public access** option. The following illustration shows the **Public access** option in the **Additional connectivity configuration** section. To set the option, open the **Additional connectivity configuration** section in the **Connectivity** section. 

![\[Set your database Public access option in the Additional connectivity configuration section to No.\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/VPC-example4.png)


For information about modifying a DB instance to set the **Public access** option, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

## Creating a DB instance in a VPC
<a name="USER_VPC.InstanceInVPC"></a>

The following procedures help you create a DB instance in a VPC. To use the default VPC, begin with step 2 and use the VPC and DB subnet group that have already been created for you. To use an additional VPC, begin with step 1 and create a new VPC. 

**Note**  
If you want your DB instance in the VPC to be publicly accessible, you must update the DNS information for the VPC by enabling the VPC attributes *DNS hostnames* and *DNS resolution*. For information about updating the DNS information for a VPC instance, see [Updating DNS support for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html). 

Follow these steps to create a DB instance in a VPC:
+ [Step 1: Create a VPC](#USER_VPC.CreatingVPC) 
+  [Step 2: Create a DB subnet group](#USER_VPC.CreateDBSubnetGroup)
+  [Step 3: Create a VPC security group](#USER_VPC.CreateVPCSecurityGroup)
+  [Step 4: Create a DB instance in the VPC](#USER_VPC.CreateDBInstanceInVPC) 

### Step 1: Create a VPC
<a name="USER_VPC.CreatingVPC"></a>

Create a VPC with subnets in at least two Availability Zones. You use these subnets when you create a DB subnet group. If you have a default VPC, a subnet is automatically created for you in each Availability Zone in the AWS Region.

For more information, see [Create a VPC with private and public subnets](CHAP_Tutorials.WebServerDB.CreateVPC.md#CHAP_Tutorials.WebServerDB.CreateVPC.VPCAndSubnets), or see [Create a VPC](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-vpcs.html#Create-VPC) in the *Amazon VPC User Guide*. 

### Step 2: Create a DB subnet group
<a name="USER_VPC.CreateDBSubnetGroup"></a>

A DB subnet group is a collection of subnets (typically private) that you create for a VPC and that you then designate for your DB instances. A DB subnet group allows you to specify a particular VPC when you create DB instances using the AWS CLI or RDS API. If you use the console, you can simply choose the VPC and subnets that you want to use. Each DB subnet group must have subnets in at least two Availability Zones in the AWS Region. As a best practice, each DB subnet group should have at least one subnet for every Availability Zone in the AWS Region.

For Multi-AZ deployments, defining a subnet for all Availability Zones in an AWS Region enables Amazon RDS to create a new standby replica in another Availability Zone if necessary. You can follow this best practice even for Single-AZ deployments, because you might convert them to Multi-AZ deployments in the future.
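
The two-Availability-Zone requirement is easy to validate programmatically. The following Python sketch uses a hypothetical subnet-to-AZ mapping:

```python
# Hypothetical subnet -> Availability Zone mapping for a DB subnet group
subnets = {
    "subnet-0aaa": "us-east-1a",
    "subnet-0bbb": "us-east-1b",
    "subnet-0ccc": "us-east-1b",
}

zones = set(subnets.values())
# A DB subnet group must cover at least two Availability Zones
assert len(zones) >= 2, "add a subnet in another Availability Zone"
print("Availability Zones covered:", sorted(zones))
```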

For a DB instance to be publicly accessible, the subnets in the DB subnet group must have an internet gateway. For more information about internet gateways for subnets, see [Connect to the internet using an internet gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html) in the *Amazon VPC User Guide*. 

**Note**  
The DB subnet group for a Local Zone can have only one subnet.

When you create a DB instance in a VPC, you can choose a DB subnet group. If no DB subnet groups exist, Amazon RDS creates a default subnet group when you create the DB instance. Amazon RDS chooses a subnet and an IP address within that subnet to associate with your DB instance, then creates an elastic network interface with that IP address and attaches it to the DB instance. The DB instance uses the Availability Zone that contains the subnet.

In this step, you create a DB subnet group and add the subnets that you created for your VPC.

**To create a DB subnet group**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Subnet groups**.

1. Choose **Create DB Subnet Group**.

1. For **Name**, type the name of your DB subnet group.

1. For **Description**, type a description for your DB subnet group. 

1. For **VPC**, choose the default VPC or the VPC that you created.

1. In the **Add subnets** section, choose the Availability Zones that include the subnets from **Availability Zones**, and then choose the subnets from **Subnets**.  
![\[Create a DB subnet group.\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/RDSVPC101.png)
**Note**  
If you have enabled a Local Zone, you can choose an Availability Zone group on the **Create DB subnet group** page. In this case, choose the **Availability Zone group**, **Availability Zones**, and **Subnets**.

1. Choose **Create**. 

   Your new DB subnet group appears in the DB subnet groups list on the RDS console. You can choose the DB subnet group to see details, including all of the subnets associated with the group, in the details pane at the bottom of the window. 

### Step 3: Create a VPC security group
<a name="USER_VPC.CreateVPCSecurityGroup"></a>

Before you create your DB instance, you can create a VPC security group to associate with your DB instance. If you don't create a VPC security group, you can use the default security group when you create a DB instance. For instructions on how to create a security group for your DB instance, see [Create a VPC security group for a private DB instance](CHAP_Tutorials.WebServerDB.CreateVPC.md#CHAP_Tutorials.WebServerDB.CreateVPC.SecurityGroupDB), or see [Control traffic to resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) in the *Amazon VPC User Guide*. 

### Step 4: Create a DB instance in the VPC
<a name="USER_VPC.CreateDBInstanceInVPC"></a>

In this step, you create a DB instance and use the VPC name, the DB subnet group, and the VPC security group you created in the previous steps.

**Note**  
If you want your DB instance in the VPC to be publicly accessible, you must enable the VPC attributes *DNS hostnames* and *DNS resolution*. For more information, see [DNS attributes for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html) in the *Amazon VPC User Guide*.

For details on how to create a DB instance, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md).

When prompted in the **Connectivity** section, enter the VPC name, the DB subnet group, and the VPC security group.

# Updating the VPC for a DB instance
<a name="USER_VPC.VPC2VPC"></a>

You can use the AWS Management Console to move your DB instance to a different VPC.

For information about modifying a DB instance, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md). In the **Connectivity** section of the modify page, shown following, enter the new DB subnet group for **DB subnet group**. The new subnet group must be a subnet group in a new VPC.

![\[Modify the DB instance subnet group.\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/EC2-VPC.png)


You can't change the VPC for a DB instance if any of the following conditions apply:
+ The DB instance is in multiple Availability Zones. You can convert the DB instance to a single Availability Zone, move it to a new VPC, and then convert it back to a Multi-AZ DB instance. For more information, see [Configuring and managing a Multi-AZ deployment for Amazon RDS](Concepts.MultiAZ.md).
+ The DB instance has one or more read replicas. You can remove the read replicas, move the DB instance to a new VPC, and then add the read replicas again. For more information, see [Working with DB instance read replicas](USER_ReadRepl.md).
+ The DB instance is a read replica. You can promote the read replica, and then move the standalone DB instance to a new VPC. For more information, see [Promoting a read replica to be a standalone DB instance](USER_ReadRepl.Promote.md).
+ The subnet group in the target VPC doesn't have subnets in the DB instance's Availability Zone. You can add subnets in the DB instance's Availability Zone to the DB subnet group, and then move the DB instance to the new VPC. For more information, see [Working with DB subnet groups](USER_VPC.WorkingWithRDSInstanceinaVPC.md#USER_VPC.Subnets).

# Scenarios for accessing a DB instance in a VPC
<a name="USER_VPC.Scenarios"></a>

Amazon RDS supports the following scenarios for accessing a DB instance in a VPC:
+ [An Amazon EC2 instance in the same VPC](#USER_VPC.Scenario1)
+ [An EC2 instance in a different VPC](#USER_VPC.Scenario3)
+ [A client application through the internet](#USER_VPC.Scenario4)
+ [A private network](#USER_VPC.NotPublic)

## A DB instance in a VPC accessed by an Amazon EC2 instance in the same VPC
<a name="USER_VPC.Scenario1"></a>

A common use of a DB instance in a VPC is to share data with an application server that is running in an Amazon EC2 instance in the same VPC.

The following diagram shows this scenario.

![\[VPC scenario with a public web server and a private database.\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/con-VPC-sec-grp.png)


The simplest way to manage access between EC2 instances and DB instances in the same VPC is to do the following:
+ Create a VPC security group for your DB instances to be in. This security group can be used to restrict access to the DB instances. For example, you can create a custom rule for this security group that allows TCP access using the port that you assigned to the DB instance when you created it, from an IP address that you use to access the DB instance for development or other purposes.
+ Create a VPC security group for your EC2 instances (web servers and clients) to be in. This security group can, if needed, allow access to the EC2 instance from the internet by using the VPC's routing table. For example, you can set rules on this security group to allow TCP access to the EC2 instance over port 22.
+ Create custom rules in the security group for your DB instances that allow connections from the security group you created for your EC2 instances. These rules might allow any member of the security group to access the DB instances.
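
The last rule in this pattern references one security group from another rather than a CIDR range. As a sketch, the following Python dict shows the request shape for EC2's `AuthorizeSecurityGroupIngress` operation (for example, via boto3's `ec2.authorize_security_group_ingress(**params)`); the group IDs and port are hypothetical:

```python
# Hypothetical inbound rule: allow the EC2 instances' security group to
# reach the DB instances' security group on the database port.
params = {
    "GroupId": "sg-0db11111",            # the DB instances' security group
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 3306,                # e.g. the MySQL port
        "ToPort": 3306,
        # Reference the EC2 instances' security group instead of a CIDR,
        # so any member of that group can reach the DB instances
        "UserIdGroupPairs": [{"GroupId": "sg-0ec22222"}],
    }],
}
assert params["IpPermissions"][0]["UserIdGroupPairs"][0]["GroupId"].startswith("sg-")
```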

In this scenario, there is an additional public and private subnet in a separate Availability Zone. An RDS DB subnet group requires subnets in at least two Availability Zones. The additional subnet makes it easy to switch to a Multi-AZ DB instance deployment in the future.

For a tutorial that shows you how to create a VPC with both public and private subnets for this scenario, see [Tutorial: Create a VPC for use with a DB instance (IPv4 only)](CHAP_Tutorials.WebServerDB.CreateVPC.md). 

**Tip**  
You can set up network connectivity between an Amazon EC2 instance and a DB instance automatically when you create the DB instance. For more information, see [Configure automatic network connectivity with an EC2 instance](USER_CreateDBInstance.md#USER_CreateDBInstance.Prerequisites.VPC.Automatic).

**To create a rule in a VPC security group that allows connections from another security group, do the following:**

1.  Sign in to the AWS Management Console and open the Amazon VPC console at [https://console.aws.amazon.com/vpc](https://console.aws.amazon.com/vpc).

1.  In the navigation pane, choose **Security groups**.

1. Choose or create a security group for which you want to allow access to members of another security group. In the preceding scenario, this is the security group that you use for your DB instances. Choose the **Inbound rules** tab, and then choose **Edit inbound rules**.

1. On the **Edit inbound rules** page, choose **Add rule**.

1. For **Type**, choose the entry that corresponds to the port you used when you created your DB instance, such as **MYSQL/Aurora**.

1. In the **Source** box, start typing the ID of the security group to list the matching security groups. Choose the security group whose members you want to have access to the resources protected by this security group. In the preceding scenario, this is the security group that you use for your EC2 instance.

1. If required, repeat the steps for the TCP protocol by creating a rule with **All TCP** as the **Type** and your security group in the **Source** box. If you intend to use the UDP protocol, create a rule with **All UDP** as the **Type** and your security group in **Source**.

1. Choose **Save rules**.
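The console steps above can also be performed with the AWS CLI. The following is a minimal sketch using `aws ec2 authorize-security-group-ingress`; the security group IDs are placeholders that you replace with your own values.

```
# Allow MySQL traffic (TCP 3306) into the DB instance's security group
# from members of the EC2 instance's security group. Both IDs are placeholders.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0db1111111example \
    --protocol tcp \
    --port 3306 \
    --source-group sg-0ec2222222example
```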

The following screen shows an inbound rule with a security group for its source.

![\[Adding a security group to another security group's rules.\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/con-vpc-add-sg-rule.png)


For more information about connecting to the DB instance from your EC2 instance, see [Connecting to an Amazon RDS DB instance](CHAP_CommonTasks.Connect.md) .

## A DB instance in a VPC accessed by an EC2 instance in a different VPC
<a name="USER_VPC.Scenario3"></a>

When your DB instance is in a different VPC from the EC2 instance that you are using to access it, you can use VPC peering to access the DB instance.

The following diagram shows this scenario. 

![\[A DB instance in a VPC accessed by an Amazon EC2 instance in a different VPC.\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/RDSVPC2EC2VPC.png)


A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IP addresses. Resources in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region. To learn more about VPC peering, see [VPC peering](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-peering.html) in the *Amazon Virtual Private Cloud User Guide*.
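As a rough sketch, the peering setup for this scenario can be scripted with the AWS CLI. The VPC, peering connection, and route table IDs and the destination CIDR block below are all placeholders; note also that each VPC needs a route to the other VPC's CIDR range.

```
# Request a peering connection between the two VPCs (IDs are placeholders)
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-1111aaaa --peer-vpc-id vpc-2222bbbb

# Accept the request as the owner of the peer VPC
aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-3333cccc

# Route traffic destined for the peer VPC's CIDR through the peering connection
# (repeat in the peer VPC's route table for the return path)
aws ec2 create-route \
    --route-table-id rtb-4444dddd \
    --destination-cidr-block 10.1.0.0/16 \
    --vpc-peering-connection-id pcx-3333cccc
```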

## A DB instance in a VPC accessed by a client application through the internet
<a name="USER_VPC.Scenario4"></a>

To access a DB instance in a VPC from a client application through the internet, you configure a VPC with a single public subnet and an internet gateway to enable communication over the internet.

The following diagram shows this scenario.

![\[A DB instance in a VPC accessed by a client application through the internet.\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/GS-VPC-network.png)


We recommend the following configuration:

 
+ A VPC of size /16 (for example, CIDR 10.0.0.0/16). This size provides 65,536 private IP addresses.
+ A subnet of size /24 (for example, CIDR 10.0.0.0/24). This size provides 256 private IP addresses.
+ An Amazon RDS DB instance that is associated with the VPC and the subnet. Amazon RDS assigns an IP address within the subnet to your DB instance.
+ An internet gateway that connects the VPC to the internet and to other AWS products.
+ A security group associated with the DB instance. The security group's inbound rules allow your client application to access your DB instance.
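To sanity-check the recommended CIDR sizes, you can use Python's standard `ipaddress` module. This is just a quick verification of the address counts listed above.

```python
import ipaddress

# The recommended VPC and subnet sizes from the list above
vpc = ipaddress.ip_network("10.0.0.0/16")
subnet = ipaddress.ip_network("10.0.0.0/24")

print(vpc.num_addresses)      # 65536 addresses in the /16 VPC
print(subnet.num_addresses)   # 256 addresses in the /24 subnet
print(subnet.subnet_of(vpc))  # True: the subnet lies inside the VPC range
```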

For information about creating a DB instance in a VPC, see [Creating a DB instance in a VPC](USER_VPC.WorkingWithRDSInstanceinaVPC.md#USER_VPC.InstanceInVPC).

## A DB instance in a VPC accessed by a private network
<a name="USER_VPC.NotPublic"></a>

If your DB instance isn't publicly accessible, you have the following options for accessing it from a private network:
+ An AWS Site-to-Site VPN connection. For more information, see [What is AWS Site-to-Site VPN?](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html)
+ An AWS Direct Connect connection. For more information, see [What is AWS Direct Connect?](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html)
+ An AWS Client VPN connection. For more information, see [What is AWS Client VPN?](https://docs.aws.amazon.com//vpn/latest/clientvpn-admin/what-is.html)

The following diagram shows a scenario with an AWS Site-to-Site VPN connection. 

![\[DB instances in a VPC accessed by a private network.\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/site-to-site-vpn-connection.png)


For more information, see [Internetwork traffic privacy](inter-network-traffic-privacy.md).

# Tutorial: Create a VPC for use with a DB instance (IPv4 only)
<a name="CHAP_Tutorials.WebServerDB.CreateVPC"></a>

A common scenario includes a DB instance in a virtual private cloud (VPC) based on the Amazon VPC service. This VPC shares data with a web server that is running in the same VPC. In this tutorial, you create the VPC for this scenario.

The following diagram shows this scenario. For information about other scenarios, see [Scenarios for accessing a DB instance in a VPC](USER_VPC.Scenarios.md). 

![\[Single VPC scenario\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/con-VPC-sec-grp.png)


Your DB instance needs to be available only to your web server, and not to the public internet. Thus, you create a VPC with both public and private subnets. The web server is hosted in the public subnet, so that it can reach the public internet. The DB instance is hosted in a private subnet. The web server can connect to the DB instance because it is hosted within the same VPC. But the DB instance isn't available to the public internet, providing greater security.

This tutorial configures an additional public and private subnet in a separate Availability Zone. These subnets aren't used by the tutorial. An RDS DB subnet group requires a subnet in at least two Availability Zones. The additional subnet makes it easier to switch to a Multi-AZ DB instance deployment in the future. 

This tutorial describes configuring a VPC for Amazon RDS DB instances. For a tutorial that shows you how to create a web server for this VPC scenario, see [Tutorial: Create a web server and an Amazon RDS DB instance](TUT_WebAppWithRDS.md). For more information about Amazon VPC, see [Amazon VPC Getting Started Guide](https://docs.aws.amazon.com/AmazonVPC/latest/GettingStartedGuide/) and [Amazon VPC User Guide](https://docs.aws.amazon.com/vpc/latest/userguide/). 

**Tip**  
You can set up network connectivity between an Amazon EC2 instance and a DB instance automatically when you create the DB instance. The network configuration is similar to the one described in this tutorial. For more information, see [Configure automatic network connectivity with an EC2 instance](USER_CreateDBInstance.md#USER_CreateDBInstance.Prerequisites.VPC.Automatic). 

## Create a VPC with private and public subnets
<a name="CHAP_Tutorials.WebServerDB.CreateVPC.VPCAndSubnets"></a>

Use the following procedure to create a VPC with both public and private subnets. 

**To create a VPC and subnets**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the upper-right corner of the AWS Management Console, choose the Region to create your VPC in. This example uses the US West (Oregon) Region.

1. In the upper-left corner, choose **VPC dashboard**. To begin creating a VPC, choose **Create VPC**.

1. For **Resources to create** under **VPC settings**, choose **VPC and more**.

1. For the **VPC settings**, set these values:
   + **Name tag auto-generation** – **tutorial**
   + **IPv4 CIDR block** – **10.0.0.0/16**
   + **IPv6 CIDR block** – **No IPv6 CIDR block**
   + **Tenancy** – **Default**
   + **Number of Availability Zones (AZs)** – **2**
   + **Customize AZs** – Keep the default values.
   + **Number of public subnets** – **2**
   + **Number of private subnets** – **2**
   + **Customize subnets CIDR blocks** – Keep the default values.
   + **NAT gateways (\$)** – **None**
   + **VPC endpoints** – **None**
   + **DNS options** – Keep the default values.
**Note**  
Amazon RDS requires at least two subnets in two different Availability Zones to support Multi-AZ DB instance deployments. This tutorial creates a Single-AZ deployment, but the requirement makes it easier to convert to a Multi-AZ DB instance deployment in the future.

1. Choose **Create VPC**.

## Create a VPC security group for a public web server
<a name="CHAP_Tutorials.WebServerDB.CreateVPC.SecurityGroupEC2"></a>

Next, you create a security group for public access. To connect to public EC2 instances in your VPC, add inbound rules to your VPC security group that allow traffic to connect from the internet.

**To create a VPC security group**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. Choose **VPC Dashboard**, choose **Security Groups**, and then choose **Create security group**. 

1. On the **Create security group** page, set these values: 
   + **Security group name:** **tutorial-securitygroup**
   + **Description:** **Tutorial Security Group**
   + **VPC:** Choose the VPC that you created earlier, for example: **vpc-*identifier* (tutorial-vpc)** 

1. Add inbound rules to the security group.

   1. Determine the IP address to use to connect to EC2 instances in your VPC using Secure Shell (SSH). To determine your public IP address, in a different browser window or tab, you can use the service at [https://checkip.amazonaws.com](https://checkip.amazonaws.com). An example of an IP address is `203.0.113.25/32`.

      In many cases, you might connect through an internet service provider (ISP) or from behind your firewall without a static IP address. If so, find the range of IP addresses used by client computers.
**Warning**  
If you use `0.0.0.0/0` for SSH access, you make it possible for all IP addresses to access your public instances using SSH. This approach is acceptable for a short time in a test environment, but it's unsafe for production environments. In production, authorize only a specific IP address or range of addresses to access your instances using SSH.

   1. In the **Inbound rules** section, choose **Add rule**.

   1. Set the following values for your new inbound rule to allow SSH access to your Amazon EC2 instance. If you do this, you can connect to your Amazon EC2 instance to install the web server and other utilities. You also connect to your EC2 instance to upload content for your web server. 
      + **Type:** **SSH**
      + **Source:** The IP address or range from Step a, for example: **203.0.113.25/32**.

   1. Choose **Add rule**.

   1. Set the following values for your new inbound rule to allow HTTP access to your web server:
      + **Type:** **HTTP**
      + **Source:** **0.0.0.0/0**

1. Choose **Create security group** to create the security group.

   Note the security group ID because you need it later in this tutorial.
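The warning above about `0.0.0.0/0` can be captured in a small check. The following sketch uses Python's standard `ipaddress` module; the function name is illustrative, not part of any AWS API.

```python
import ipaddress

def too_permissive_for_ssh(cidr: str) -> bool:
    """Return True if the source range admits the whole internet (prefix length 0)."""
    return ipaddress.ip_network(cidr).prefixlen == 0

print(too_permissive_for_ssh("203.0.113.25/32"))  # False: a single address
print(too_permissive_for_ssh("0.0.0.0/0"))        # True: every IPv4 address
```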

## Create a VPC security group for a private DB instance
<a name="CHAP_Tutorials.WebServerDB.CreateVPC.SecurityGroupDB"></a>

To keep your DB instance private, create a second security group for private access. To connect to private DB instances in your VPC, you add inbound rules to your VPC security group that allow traffic from your web server only.

**To create a VPC security group**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. Choose **VPC Dashboard**, choose **Security Groups**, and then choose **Create security group**.

1. On the **Create security group** page, set these values:
   + **Security group name:** **tutorial-db-securitygroup**
   + **Description:** **Tutorial DB Instance Security Group**
   + **VPC:** Choose the VPC that you created earlier, for example: **vpc-*identifier* (tutorial-vpc)**

1. Add inbound rules to the security group.

   1. In the **Inbound rules** section, choose **Add rule**.

   1. Set the following values for your new inbound rule to allow MySQL traffic on port 3306 from your Amazon EC2 instance. If you do this, you can connect from your web server to your DB instance so that your web application can store and retrieve data in your database. 
      + **Type:** **MySQL/Aurora**
      + **Source:** The identifier of the **tutorial-securitygroup** security group that you created previously in this tutorial, for example: **sg-9edd5cfb**.

1. Choose **Create security group** to create the security group.

## Create a DB subnet group
<a name="CHAP_Tutorials.WebServerDB.CreateVPC.DBSubnetGroup"></a>

A *DB subnet group* is a collection of subnets that you create in a VPC and that you then designate for your DB instances. A DB subnet group makes it possible for you to specify a particular VPC when creating DB instances.

**To create a DB subnet group**

1. Identify the private subnets for your database in the VPC.

   1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

   1. Choose **VPC Dashboard**, and then choose **Subnets**.

   1. Note the subnet IDs of the subnets named **tutorial-subnet-private1-us-west-2a** and **tutorial-subnet-private2-us-west-2b**.

      You need the subnet IDs when you create your DB subnet group.

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

   Make sure that you connect to the Amazon RDS console, not to the Amazon VPC console.

1. In the navigation pane, choose **Subnet groups**.

1. Choose **Create DB subnet group**.

1. On the **Create DB subnet group** page, set these values in **Subnet group details**:
   + **Name:** **tutorial-db-subnet-group**
   + **Description:** **Tutorial DB Subnet Group**
   + **VPC:** **tutorial-vpc (vpc-*identifier*)** 

1. In the **Add subnets** section, choose the **Availability Zones** and **Subnets**.

   For this tutorial, choose **us-west-2a** and **us-west-2b** for the **Availability Zones**. For **Subnets**, choose the private subnets you identified in the previous step.

1. Choose **Create**. 

   Your new DB subnet group appears in the DB subnet groups list on the RDS console. You can choose the DB subnet group to see details in the details pane at the bottom of the window. These details include all of the subnets associated with the group.
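The same DB subnet group can be created with the AWS CLI, as a sketch. The two subnet IDs below are placeholders for the private subnet IDs that you noted in step 1.

```
aws rds create-db-subnet-group \
    --db-subnet-group-name tutorial-db-subnet-group \
    --db-subnet-group-description "Tutorial DB Subnet Group" \
    --subnet-ids subnet-1111aaaa subnet-2222bbbb
```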

**Note**  
If you created this VPC to complete [Tutorial: Create a web server and an Amazon RDS DB instance](TUT_WebAppWithRDS.md), create the DB instance by following the instructions in [Create an Amazon RDS DB instance](CHAP_Tutorials.WebServerDB.CreateDBInstance.md).

## Deleting the VPC
<a name="CHAP_Tutorials.WebServerDB.CreateVPC.Delete"></a>

After you create the VPC and other resources for this tutorial, you can delete them if they are no longer needed.

**Note**  
If you added resources in the VPC that you created for this tutorial, you might need to delete these before you can delete the VPC. For example, these resources might include Amazon EC2 instances or Amazon RDS DB instances. For more information, see [Delete your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-vpcs.html#VPC_Deleting) in the *Amazon VPC User Guide*.

**To delete a VPC and related resources**

1. Delete the DB subnet group.

   1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

   1. In the navigation pane, choose **Subnet groups**.

   1. Select the DB subnet group you want to delete, such as **tutorial-db-subnet-group**.

   1. Choose **Delete**, and then choose **Delete** in the confirmation window.

1. Note the VPC ID.

   1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

   1. Choose **VPC Dashboard**, and then choose **VPCs**.

   1. In the list, identify the VPC that you created, such as **tutorial-vpc**.

   1. Note the **VPC ID** of the VPC that you created. You need the VPC ID in later steps.

1. Delete the security groups.

   1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

   1. Choose **VPC Dashboard**, and then choose **Security Groups**.

   1. Select the security group for the Amazon RDS DB instance, such as **tutorial-db-securitygroup**.

   1. For **Actions**, choose **Delete security groups**, and then choose **Delete** on the confirmation page.

   1. On the **Security Groups** page, select the security group for the Amazon EC2 instance, such as **tutorial-securitygroup**.

   1. For **Actions**, choose **Delete security groups**, and then choose **Delete** on the confirmation page.

1. Delete the VPC.

   1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

   1. Choose **VPC Dashboard**, and then choose **VPCs**.

   1. Select the VPC you want to delete, such as **tutorial-vpc**.

   1. For **Actions**, choose **Delete VPC**.

      The confirmation page shows other resources that are associated with the VPC that will also be deleted, including the subnets associated with it.

   1. On the confirmation page, enter **delete**, and then choose **Delete**.

# Tutorial: Create a VPC for use with a DB instance (dual-stack mode)
<a name="CHAP_Tutorials.CreateVPCDualStack"></a>

A common scenario includes a DB instance in a virtual private cloud (VPC) based on the Amazon VPC service. This VPC shares data with a public Amazon EC2 instance that is running in the same VPC.

In this tutorial, you create the VPC for this scenario to work with a database running in dual-stack mode. Dual-stack mode enables connections over the IPv6 addressing protocol. For more information about IP addressing, see [Amazon RDS IP addressing](USER_VPC.WorkingWithRDSInstanceinaVPC.md#USER_VPC.IP_addressing).

Dual-stack network instances are supported in most AWS Regions. For more information, see [Region and version availability](USER_VPC.WorkingWithRDSInstanceinaVPC.md#USER_VPC.IP_addressing.RegionVersionAvailability). To see the limitations of dual-stack mode, see [Limitations for dual-stack network DB instances](USER_VPC.WorkingWithRDSInstanceinaVPC.md#USER_VPC.IP_addressing.dual-stack-limitations).

The following diagram shows this scenario.

 

![\[VPC scenario for dual-stack mode\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/con-VPC-sec-grp-dual-stack.png)


For information about other scenarios, see [Scenarios for accessing a DB instance in a VPC](USER_VPC.Scenarios.md).

Your DB instance needs to be available only to your Amazon EC2 instance, and not to the public internet. Thus, you create a VPC with both public and private subnets. The Amazon EC2 instance is hosted in the public subnet, so that it can reach the public internet. The DB instance is hosted in a private subnet. The Amazon EC2 instance can connect to the DB instance because it's hosted within the same VPC. However, the DB instance is not available to the public internet, providing greater security.

This tutorial configures an additional public and private subnet in a separate Availability Zone. These subnets aren't used by the tutorial. An RDS DB subnet group requires a subnet in at least two Availability Zones. The additional subnet makes it easy to switch to a Multi-AZ DB instance deployment in the future. 

To create a DB instance that uses dual-stack mode, specify **Dual-stack mode** for the **Network type** setting. You can also modify a DB instance with the same setting. For more information, see [Creating an Amazon RDS DB instance](USER_CreateDBInstance.md) and [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md).

This tutorial describes configuring a VPC for Amazon RDS DB instances. For more information about Amazon VPC, see [Amazon VPC User Guide](https://docs.aws.amazon.com/vpc/latest/userguide/). 

## Create a VPC with private and public subnets
<a name="CHAP_Tutorials.CreateVPCDualStack.VPCAndSubnets"></a>

Use the following procedure to create a VPC with both public and private subnets. 

**To create a VPC and subnets**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the upper-right corner of the AWS Management Console, choose the Region to create your VPC in. This example uses the US East (Ohio) Region.

1. In the upper-left corner, choose **VPC dashboard**. To begin creating a VPC, choose **Create VPC**.

1. For **Resources to create** under **VPC settings**, choose **VPC and more**.

1. For the remaining **VPC settings**, set these values:
   + **Name tag auto-generation** – **tutorial-dual-stack**
   + **IPv4 CIDR block** – **10.0.0.0/16**
   + **IPv6 CIDR block** – **Amazon-provided IPv6 CIDR block**
   + **Tenancy** – **Default**
   + **Number of Availability Zones (AZs)** – **2**
   + **Customize AZs** – Keep the default values.
   + **Number of public subnets** – **2**
   + **Number of private subnets** – **2**
   + **Customize subnets CIDR blocks** – Keep the default values.
   + **NAT gateways (\$)** – **None**
   + **Egress only internet gateway** – **No**
   + **VPC endpoints** – **None**
   + **DNS options** – Keep the default values.
**Note**  
Amazon RDS requires at least two subnets in two different Availability Zones to support Multi-AZ DB instance deployments. This tutorial creates a Single-AZ deployment, but the requirement makes it easy to convert to a Multi-AZ DB instance deployment in the future.

1. Choose **Create VPC**.

## Create a VPC security group for a public Amazon EC2 instance
<a name="CHAP_Tutorials.CreateVPCDualStack.SecurityGroupEC2"></a>

Next, you create a security group for public access. To connect to public EC2 instances in your VPC, add inbound rules to your VPC security group that allow traffic to connect from the internet.

**To create a VPC security group**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. Choose **VPC Dashboard**, choose **Security Groups**, and then choose **Create security group**. 

1. On the **Create security group** page, set these values:
   + **Security group name:** **tutorial-dual-stack-securitygroup**
   + **Description:** **Tutorial Dual-Stack Security Group**
   + **VPC:** Choose the VPC that you created earlier, for example: **vpc-*identifier* (tutorial-dual-stack-vpc)** 

1. Add inbound rules to the security group.

   1. Determine the IP address to use to connect to EC2 instances in your VPC using Secure Shell (SSH).

      An example of an Internet Protocol version 4 (IPv4) address is `203.0.113.25/32`. An example of an Internet Protocol version 6 (IPv6) address range is `2001:db8:1234:1a00::/64`.

      In many cases, you might connect through an internet service provider (ISP) or from behind your firewall without a static IP address. If so, find the range of IP addresses used by client computers.
**Warning**  
If you use `0.0.0.0/0` for IPv4 or `::/0` for IPv6, you make it possible for all IP addresses to access your public instances using SSH. This approach is acceptable for a short time in a test environment, but it's unsafe for production environments. In production, authorize only a specific IP address or range of addresses to access your instances.

   1. In the **Inbound rules** section, choose **Add rule**.

   1. Set the following values for your new inbound rule to allow Secure Shell (SSH) access to your Amazon EC2 instance. If you do this, you can connect to your EC2 instance to install SQL clients and other applications. Specify an IP address so you can access your EC2 instance:
      + **Type:** **SSH**
      + **Source:** The IP address or range from step a. An example of an IPv4 address is **203.0.113.25/32**. An example of an IPv6 address range is **2001:DB8::/32**.

1. Choose **Create security group** to create the security group.

   Note the security group ID because you need it later in this tutorial.

## Create a VPC security group for a private DB instance
<a name="CHAP_Tutorials.CreateVPCDualStack.SecurityGroupDB"></a>

To keep your DB instance private, create a second security group for private access. To connect to private DB instances in your VPC, add inbound rules to your VPC security group. These allow traffic from your Amazon EC2 instance only.

**To create a VPC security group**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. Choose **VPC Dashboard**, choose **Security Groups**, and then choose **Create security group**.

1. On the **Create security group** page, set these values:
   + **Security group name:** **tutorial-dual-stack-db-securitygroup**
   + **Description:** **Tutorial Dual-Stack DB Instance Security Group**
   + **VPC:** Choose the VPC that you created earlier, for example: **vpc-*identifier* (tutorial-dual-stack-vpc)**

1. Add inbound rules to the security group:

   1. In the **Inbound rules** section, choose **Add rule**.

   1. Set the following values for your new inbound rule to allow MySQL traffic on port 3306 from your Amazon EC2 instance. If you do this, you can connect from your EC2 instance to your DB instance and send data from your EC2 instance to your database.
      + **Type:** **MySQL/Aurora**
      + **Source:** The identifier of the **tutorial-dual-stack-securitygroup** security group that you created previously in this tutorial, for example **sg-9edd5cfb**.

1. To create the security group, choose **Create security group**.

## Create a DB subnet group
<a name="CHAP_Tutorials.CreateVPCDualStack.DBSubnetGroup"></a>

A *DB subnet group* is a collection of subnets that you create in a VPC and that you then designate for your DB instances. By using a DB subnet group, you can specify a particular VPC when creating DB instances. To create a DB subnet group that is `DUAL` compatible, all subnets must be `DUAL` compatible. To be `DUAL` compatible, a subnet must have an IPv6 CIDR associated with it.
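The `DUAL`-compatibility rule above amounts to a simple check: every subnet in the group must have an IPv6 CIDR. The following sketch models subnets as plain dictionaries; the key name `Ipv6CidrBlock` is illustrative, not the exact shape of any AWS API response.

```python
def dual_compatible(subnets):
    """A DB subnet group is DUAL compatible only if every subnet has an IPv6 CIDR."""
    return all(s.get("Ipv6CidrBlock") for s in subnets)

subnets = [
    {"SubnetId": "subnet-1111aaaa", "Ipv6CidrBlock": "2001:db8:1234:1a00::/64"},
    {"SubnetId": "subnet-2222bbbb", "Ipv6CidrBlock": None},  # IPv4 only
]
print(dual_compatible(subnets))  # False: one subnet lacks an IPv6 CIDR
```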

**To create a DB subnet group**

1. Identify the private subnets for your database in the VPC.

   1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

   1. Choose **VPC Dashboard**, and then choose **Subnets**.

   1. Note the subnet IDs of the subnets named **tutorial-dual-stack-subnet-private1-us-east-2a** and **tutorial-dual-stack-subnet-private2-us-east-2b**.

      You will need the subnet IDs when you create your DB subnet group.

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

   Make sure that you connect to the Amazon RDS console, not to the Amazon VPC console.

1. In the navigation pane, choose **Subnet groups**.

1. Choose **Create DB subnet group**.

1. On the **Create DB subnet group** page, set these values in **Subnet group details**:
   + **Name:** **tutorial-dual-stack-db-subnet-group**
   + **Description:** **Tutorial Dual-Stack DB Subnet Group**
   + **VPC:** **tutorial-dual-stack-vpc (vpc-*identifier*)** 

1. In the **Add subnets** section, choose values for the **Availability Zones** and **Subnets** options.

   For this tutorial, choose **us-east-2a** and **us-east-2b** for the **Availability Zones**. For **Subnets**, choose the private subnets you identified in the previous step.

1. Choose **Create**. 

Your new DB subnet group appears in the DB subnet groups list on the RDS console. You can choose the DB subnet group to see its details, including the network type that the subnet group supports and all of the subnets associated with the group.
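The two-Availability-Zone requirement that this tutorial keeps mentioning is easy to express as a check. This is a sketch of the rule only; the function name is illustrative.

```python
def spans_two_azs(subnet_azs):
    """RDS requires a DB subnet group's subnets to cover at least two AZs."""
    return len(set(subnet_azs)) >= 2

print(spans_two_azs(["us-east-2a", "us-east-2b"]))  # True: two distinct AZs
print(spans_two_azs(["us-east-2a", "us-east-2a"]))  # False: only one AZ covered
```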

## Create an Amazon EC2 instance in dual-stack mode
<a name="CHAP_Tutorials.CreateVPCDualStack.CreateEC2Instance"></a>

To create an Amazon EC2 instance, follow the instructions in [Launch an instance using the new launch instance wizard](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-instance-wizard.html) in the *Amazon EC2 User Guide*.

On the **Configure Instance Details** page, set these values and keep the other values as their defaults:
+ **Network** – Choose an existing VPC with both public and private subnets, such as **tutorial-dual-stack-vpc** (vpc-*identifier*) created in [Create a VPC with private and public subnets](#CHAP_Tutorials.CreateVPCDualStack.VPCAndSubnets).
+ **Subnet** – Choose an existing public subnet, such as **subnet-*identifier* | tutorial-dual-stack-subnet-public1-us-east-2a | us-east-2a** created in [Create a VPC with private and public subnets](#CHAP_Tutorials.CreateVPCDualStack.VPCAndSubnets).
+ **Auto-assign Public IP** – Choose **Enable**.
+ **Auto-assign IPv6 IP** – Choose **Enable**.
+ **Firewall (security groups)** – Choose **Select an existing security group**.
+ **Common security groups** – Choose an existing security group, such as the `tutorial-dual-stack-securitygroup` created in [Create a VPC security group for a public Amazon EC2 instance](#CHAP_Tutorials.CreateVPCDualStack.SecurityGroupEC2). Make sure that the security group that you choose includes an inbound rule for Secure Shell (SSH) access.

## Create a DB instance in dual-stack mode
<a name="CHAP_Tutorials.CreateVPCDualStack.CreateDBInstance"></a>

In this step, you create a DB instance that runs in dual-stack mode.

**To create a DB instance**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the upper-right corner of the console, choose the AWS Region where you want to create the DB instance. This example uses the US East (Ohio) Region.

1. In the navigation pane, choose **Databases**.

1. Choose **Create database**.

1. On the **Create database** page, make sure that the **Standard create** option is chosen, and then choose the MySQL DB engine type.

1. In the **Connectivity** section, set these values:
   + **Network type** – Choose **Dual-stack mode**.  
![\[Network type section in the console with Dual-stack mode selected\]](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/images/dual-stack-mode.png)
   + **Virtual private cloud (VPC)** – Choose an existing VPC with both public and private subnets, such as **tutorial-dual-stack-vpc** (vpc-*identifier*) created in [Create a VPC with private and public subnets](#CHAP_Tutorials.CreateVPCDualStack.VPCAndSubnets).

     The VPC must have subnets in different Availability Zones.
   + **DB subnet group** – Choose a DB subnet group for the VPC, such as **tutorial-dual-stack-db-subnet-group** created in [Create a DB subnet group](#CHAP_Tutorials.CreateVPCDualStack.DBSubnetGroup).
   + **Public access** – Choose **No**.
   + **VPC security group (firewall)** – Select **Choose existing**.
   + **Existing VPC security groups** – Choose an existing VPC security group that is configured for private access, such as **tutorial-dual-stack-db-securitygroup** created in [Create a VPC security group for a private DB instance](#CHAP_Tutorials.CreateVPCDualStack.SecurityGroupDB).

     Remove other security groups, such as the default security group, by choosing the **X** associated with each.
   + **Availability Zone** – Choose **us-east-2a**.

     To avoid cross-AZ traffic, make sure the DB instance and the EC2 instance are in the same Availability Zone.

1. For the remaining sections, specify your DB instance settings. For information about each setting, see [Settings for DB instances](USER_CreateDBInstance.Settings.md).

## Connect to your Amazon EC2 instance and DB instance
<a name="CHAP_Tutorials.CreateVPCDualStack.Connect"></a>

After you create your Amazon EC2 instance and DB instance in dual-stack mode, you can connect to each one using the IPv6 protocol. To connect to an Amazon EC2 instance using the IPv6 protocol, follow the instructions in [Connect to your Linux instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html) in the *Amazon EC2 User Guide*.

To connect to your RDS for MySQL DB instance from the Amazon EC2 instance, follow the instructions in [Connect to a MySQL DB instance](CHAP_GettingStarted.CreatingConnecting.MySQL.md#CHAP_GettingStarted.Connecting.MySQL).
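From the EC2 instance, the connection can be sketched as follows. The endpoint shown is a placeholder; use the endpoint from the **Connectivity & security** tab for your DB instance in the RDS console.

```shell
# Confirm that the DB instance endpoint resolves to an IPv6 address (AAAA record).
# The endpoint below is a placeholder -- substitute your own instance's endpoint.
dig AAAA tutorial-dual-stack-db.abcdefg12345.us-west-2.rds.amazonaws.com +short

# Connect with the mysql client; it uses IPv6 when an AAAA record is returned.
mysql -h tutorial-dual-stack-db.abcdefg12345.us-west-2.rds.amazonaws.com \
      -P 3306 -u admin -p
```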

## Deleting the VPC
<a name="CHAP_Tutorials.CreateVPCDualStack.Delete"></a>

After you create the VPC and other resources for this tutorial, you can delete them if they are no longer needed.

If you added resources in the VPC that you created for this tutorial, you might need to delete them before you can delete the VPC. Examples of such resources include Amazon EC2 instances and DB instances. For more information, see [Delete your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-vpcs.html#VPC_Deleting) in the *Amazon VPC User Guide*.


**To delete a VPC and related resources**

1. Delete the DB subnet group:

   1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

   1. In the navigation pane, choose **Subnet groups**.

   1. Select the DB subnet group to delete, such as **tutorial-db-subnet-group**.

   1. Choose **Delete**, and then choose **Delete** in the confirmation window.

1. Note the VPC ID:

   1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

   1. Choose **VPC Dashboard**, and then choose **VPCs**.

   1. In the list, identify the VPC you created, such as **tutorial-dual-stack-vpc**.

   1. Note the **VPC ID** value of the VPC that you created. You need this VPC ID in subsequent steps.

1. Delete the security groups:

   1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

   1. Choose **VPC Dashboard**, and then choose **Security Groups**.

   1. Select the security group for the Amazon RDS DB instance, such as **tutorial-dual-stack-db-securitygroup**.

   1. For **Actions**, choose **Delete security groups**, and then choose **Delete** on the confirmation page.

   1. On the **Security Groups** page, select the security group for the Amazon EC2 instance, such as **tutorial-dual-stack-securitygroup**.

   1. For **Actions**, choose **Delete security groups**, and then choose **Delete** on the confirmation page.

1. Delete the NAT gateway:

   1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

   1. Choose **VPC Dashboard**, and then choose **NAT Gateways**.

   1. Select the NAT gateway of the VPC that you created. Use the VPC ID to identify the correct NAT gateway.

   1. For **Actions**, choose **Delete NAT gateway**.

   1. On the confirmation page, enter **delete**, and then choose **Delete**.

1. Delete the VPC:

   1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

   1. Choose **VPC Dashboard**, and then choose **VPCs**.

   1. Select the VPC that you want to delete, such as **tutorial-dual-stack-vpc**.

   1. For **Actions**, choose **Delete VPC**.

      The confirmation page shows other resources that are associated with the VPC that will also be deleted, including the subnets associated with it.

   1. On the confirmation page, enter **delete**, and then choose **Delete**.

1. Release the Elastic IP addresses:

   1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

   1. Choose **EC2 Dashboard**, and then choose **Elastic IPs**.

   1. Select the Elastic IP address that you want to release.

   1. For **Actions**, choose **Release Elastic IP addresses**.

   1. On the confirmation page, choose **Release**.
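The console procedure above can also be sketched with the AWS CLI. All resource IDs below are placeholders, and order matters: the DB subnet group and NAT gateway must be deleted before the security groups and the VPC, and the Elastic IP address can be released last.

```shell
# Sketch of the same cleanup with the AWS CLI; all IDs are placeholder values.
aws rds delete-db-subnet-group \
    --db-subnet-group-name tutorial-dual-stack-db-subnet-group

# Delete the NAT gateway (deletion is asynchronous; wait until it is deleted).
aws ec2 delete-nat-gateway --nat-gateway-id nat-0123456789abcdef0

# Delete the security groups, then the VPC itself.
aws ec2 delete-security-group --group-id sg-0123456789abcdef0   # DB security group
aws ec2 delete-security-group --group-id sg-0fedcba9876543210   # EC2 security group
aws ec2 delete-vpc --vpc-id vpc-0123456789abcdef0

# Release the Elastic IP address that the NAT gateway used.
aws ec2 release-address --allocation-id eipalloc-0123456789abcdef0
```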

# Moving a DB instance not in a VPC into a VPC
<a name="USER_VPC.Non-VPC2VPC"></a>

Some legacy DB instances on the EC2-Classic platform are not in a VPC. If your DB instance is not in a VPC, you can use the AWS Management Console to move it into a VPC. Before you can move a DB instance into a VPC, you must create the VPC. 


**Note**  
EC2-Classic was retired on August 15, 2022. If you haven't migrated from EC2-Classic to a VPC, we recommend that you migrate as soon as possible. For more information, see [Migrate from EC2-Classic to a VPC](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/vpc-migrate.html) in the *Amazon EC2 User Guide* and the blog post [EC2-Classic Networking is Retiring – Here’s How to Prepare](https://aws.amazon.com/blogs/aws/ec2-classic-is-retiring-heres-how-to-prepare/).

**Important**  
If you are a new Amazon RDS customer, have never created a DB instance before, or are creating a DB instance in an AWS Region that you haven't used before, in almost all cases you are on the *EC2-VPC* platform and have a default VPC. For information about working with DB instances in a VPC, see [Working with a DB instance in a VPC](USER_VPC.WorkingWithRDSInstanceinaVPC.md).

Follow these steps to create a VPC for your DB instance.
+ [Step 1: Create a VPC](USER_VPC.WorkingWithRDSInstanceinaVPC.md#USER_VPC.CreatingVPC)
+ [Step 2: Create a DB subnet group](USER_VPC.WorkingWithRDSInstanceinaVPC.md#USER_VPC.CreateDBSubnetGroup)
+ [Step 3: Create a VPC security group](USER_VPC.WorkingWithRDSInstanceinaVPC.md#USER_VPC.CreateVPCSecurityGroup)

After you create the VPC, follow these steps to move your DB instance into the VPC. 
+ [Updating the VPC for a DB instance](USER_VPC.VPC2VPC.md)

We highly recommend that you create a backup of your DB instance immediately before the migration. Doing so ensures that you can restore the data if the migration fails. For more information, see [Backing up, restoring, and exporting data](CHAP_CommonTasks.BackupRestore.md).

The following limitations apply to moving your DB instance into a VPC. 
+ **Previous generation DB instance classes** – Previous generation DB instance classes might not be supported on the VPC platform. When moving a DB instance to a VPC, choose a db.m3 or db.r3 DB instance class. After you move the DB instance to a VPC, you can scale it to use a later DB instance class. For a full list of VPC-supported instance classes, see [Amazon RDS instance types](https://aws.amazon.com/rds/instance-types/). 
+ **Multi-AZ** – Moving a Multi-AZ DB instance not in a VPC into a VPC is not currently supported. To move your DB instance to a VPC, first modify the DB instance so that it is a single-AZ deployment. Change the **Multi-AZ deployment** setting to **No**. After you move the DB instance to a VPC, modify it again to make it a Multi-AZ deployment. For more information, see [Modifying an Amazon RDS DB instance](Overview.DBInstance.Modifying.md). 
+ **Read replicas** – Moving a DB instance with read replicas not in a VPC into a VPC is not currently supported. To move your DB instance to a VPC, first delete all of its read replicas. After you move the DB instance to a VPC, recreate the read replicas. For more information, see [Working with DB instance read replicas](USER_ReadRepl.md).
+ **Option groups** – If you move your DB instance to a VPC and it uses a custom option group, change the option group that is associated with the DB instance. Option groups are platform-specific, and moving to a VPC is a change in platform. In this case, you can assign the default VPC option group to the DB instance, assign an option group that is used by other DB instances in the VPC that you are moving to, or create a new option group and assign it to the DB instance. For more information, see [Working with option groups](USER_WorkingWithOptionGroups.md).
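Once the VPC, DB subnet group, and VPC security group exist, the move itself is a modification of the DB instance's subnet group, which can be sketched with the AWS CLI as follows. The instance identifier, subnet group name, and security group ID are placeholder values.

```shell
# Sketch: move a DB instance into a VPC by changing its DB subnet group.
# The identifier, subnet group name, and security group ID are placeholders.
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --db-subnet-group-name my-vpc-db-subnet-group \
    --vpc-security-group-ids sg-0123456789abcdef0 \
    --apply-immediately
```

With `--apply-immediately`, the change starts right away instead of waiting for the next maintenance window; expect a period of unavailability while the instance moves.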

## Alternatives for moving a DB instance not in a VPC into a VPC with minimal downtime
<a name="USER_VPC.Non-VPC2VPC.Minimal-Downtime"></a>

Using the following alternatives, you can move a DB instance that is not in a VPC into a VPC with minimal downtime. These alternatives cause minimal disruption to the source DB instance and allow it to continue serving user traffic during the migration. However, the time required to migrate to a VPC varies with the database size and the characteristics of the live workload. 
+ **AWS Database Migration Service (AWS DMS)** – AWS DMS enables the live migration of data while keeping the source DB instance fully operational, but it replicates only a limited set of DDL statements. AWS DMS doesn't propagate items such as indexes, users, privileges, stored procedures, and other database changes not directly related to table data. In addition, AWS DMS doesn't automatically use RDS snapshots for the initial DB instance creation, which can increase migration time. For more information, see [AWS Database Migration Service](https://aws.amazon.com/dms/). 
+ **DB snapshot restore or point-in-time recovery** – You can move a DB instance to a VPC by restoring a snapshot of the DB instance or by restoring a DB instance to a point in time. For more information, see [Restoring to a DB instance](USER_RestoreFromSnapshot.md) and [Restoring a DB instance to a specified time for Amazon RDS](USER_PIT.md). 
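The snapshot-based alternative can be sketched with the AWS CLI as follows; all identifiers are placeholder values.

```shell
# Sketch: snapshot the existing instance, then restore it into the VPC.
# All identifiers are placeholder values.
aws rds create-db-snapshot \
    --db-instance-identifier mydbinstance \
    --db-snapshot-identifier mydbinstance-pre-vpc

# Restoring with a DB subnet group places the new instance in that group's VPC.
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier mydbinstance-in-vpc \
    --db-snapshot-identifier mydbinstance-pre-vpc \
    --db-subnet-group-name my-vpc-db-subnet-group
```

Writes made to the source instance after the snapshot is taken are not carried over, so plan a short write freeze or cutover window before switching applications to the new instance.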