

# Security in Amazon Aurora
<a name="UsingWithRDS"></a>

Cloud security at AWS is the highest priority. As an AWS customer, you benefit from a data center and network architecture that are built to meet the requirements of the most security-sensitive organizations.

Security is a shared responsibility between AWS and you. The [shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/) describes this as security *of* the cloud and security *in* the cloud:
+  **Security of the cloud** – AWS is responsible for protecting the infrastructure that runs AWS services in the AWS Cloud. AWS also provides you with services that you can use securely. Third-party auditors regularly test and verify the effectiveness of our security as part of the [AWS compliance programs](https://aws.amazon.com/compliance/programs/). To learn about the compliance programs that apply to Amazon Aurora (Aurora), see [AWS services in scope by compliance program](https://aws.amazon.com/compliance/services-in-scope/). 
+  **Security in the cloud** – Your responsibility is determined by the AWS service that you use. You are also responsible for other factors including the sensitivity of your data, your organization's requirements, and applicable laws and regulations. 

This documentation helps you understand how to apply the shared responsibility model when using Amazon Aurora. The following topics show you how to configure Amazon Aurora to meet your security and compliance objectives. You also learn how to use other AWS services that help you monitor and secure your Amazon Aurora resources. 

You can manage access to your Amazon Aurora resources and your databases on a DB cluster. The method you use to manage access depends on what type of task the user needs to perform with Amazon Aurora: 
+ Run your DB cluster in a virtual private cloud (VPC) based on the Amazon VPC service for the greatest possible network access control. For more information about creating a DB cluster in a VPC, see [Amazon VPC and Amazon Aurora](USER_VPC.md) .
+ Use AWS Identity and Access Management (IAM) policies to assign permissions that determine who is allowed to manage Amazon Aurora resources. For example, you can use IAM to determine who is allowed to create, describe, modify, and delete DB clusters, tag resources, or modify security groups.

  To review IAM policy examples, see [Identity-based policy examples for Amazon Aurora](security_iam_id-based-policy-examples.md) .
+ Use security groups to control what IP addresses or Amazon EC2 instances can connect to your databases on a DB cluster. When you first create a DB cluster, its firewall prevents any database access except through rules specified by an associated security group. 
+ Use Secure Socket Layer (SSL) or Transport Layer Security (TLS) connections with DB clusters running Aurora MySQL or Aurora PostgreSQL. For more information on using SSL/TLS with a DB cluster, see [Using SSL/TLS to encrypt a connection to a DB cluster](UsingWithRDS.SSL.md) .
+ Use Amazon Aurora encryption to secure your DB clusters and snapshots at rest. Amazon Aurora encryption uses the industry standard AES-256 encryption algorithm to encrypt your data on the server that hosts your DB cluster. For more information, see [Encrypting Amazon Aurora resources](Overview.Encryption.md) .
+ Use the security features of your DB engine to control who can log in to the databases on a DB cluster. These features work just as if the database was on your local network. 

  For information about security with Aurora MySQL, see [Security with Amazon Aurora MySQL](AuroraMySQL.Security.md) . For information about security with Aurora PostgreSQL, see [Security with Amazon Aurora PostgreSQL](AuroraPostgreSQL.Security.md) .

Aurora is part of the managed database service Amazon Relational Database Service (Amazon RDS). Amazon RDS is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. If you are not already familiar with Amazon RDS, see the [Amazon RDS User Guide](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html). 

Aurora includes a high-performance storage subsystem. Its MySQL- and PostgreSQL-compatible database engines are customized to take advantage of that fast distributed storage. Aurora also automates and standardizes database clustering and replication, which are typically among the most challenging aspects of database configuration and administration. 

For both Amazon RDS and Aurora, you can access the RDS API programmatically, and you can use the AWS CLI to access the RDS API interactively. Some RDS API operations and AWS CLI commands apply to both Amazon RDS and Aurora, while others apply to either Amazon RDS or Aurora. For information about RDS API operations, see [Amazon RDS API reference](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/Welcome.html). For more information about the AWS CLI, see [AWS Command Line Interface reference for Amazon RDS](https://docs.aws.amazon.com/cli/latest/reference/rds/index.html). 

**Note**  
You have to configure security only for your use cases. You don't have to configure security access for processes that Amazon Aurora manages. These include creating backups, automatic failover, and other processes.

For more information on managing access to Amazon Aurora resources and your databases on a DB cluster, see the following topics.

**Topics**
+ [Database authentication with Amazon Aurora](database-authentication.md)
+ [Password management with Amazon Aurora and AWS Secrets Manager](rds-secrets-manager.md)
+ [Data protection in Amazon RDS](DataDurability.md)
+ [Identity and access management for Amazon Aurora](UsingWithRDS.IAM.md)
+ [Logging and monitoring in Amazon Aurora](Overview.LoggingAndMonitoring.md)
+ [Compliance validation for Amazon Aurora](RDS-compliance.md)
+ [Resilience in Amazon Aurora](disaster-recovery-resiliency.md)
+ [Infrastructure security in Amazon Aurora](infrastructure-security.md)
+ [Amazon RDS API and interface VPC endpoints (AWS PrivateLink)](vpc-interface-endpoints.md)
+ [Security best practices for Amazon Aurora](CHAP_BestPractices.Security.md)
+ [Controlling access with security groups](Overview.RDSSecurityGroups.md)
+ [Master user account privileges](UsingWithRDS.MasterAccounts.md)
+ [Using service-linked roles for Amazon Aurora](UsingWithRDS.IAM.ServiceLinkedRoles.md)
+ [Amazon VPC and Amazon Aurora](USER_VPC.md)

# Database authentication with Amazon Aurora
<a name="database-authentication"></a>

 Amazon Aurora supports several ways to authenticate database users.

Password authentication is available by default for all DB clusters. For Aurora MySQL and Aurora PostgreSQL, you can also add either or both IAM database authentication and Kerberos authentication for the same DB cluster.

Password, Kerberos, and IAM database authentication use different methods of authenticating to the database. Therefore, a specific user can log in to a database using only one authentication method. 

For PostgreSQL, use only one of the following role settings for a user of a specific database: 
+ To use IAM database authentication, assign the `rds_iam` role to the user.
+ To use Kerberos authentication, assign the `rds_ad` role to the user.
+ To use password authentication, don't assign either the `rds_iam` or `rds_ad` roles to the user.

Don't assign both the `rds_iam` and `rds_ad` roles to a user of a PostgreSQL database either directly or indirectly by nested grant access. If the `rds_iam` role is added to the master user, IAM authentication takes precedence over password authentication so the master user has to log in as an IAM user.
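As a minimal sketch for Aurora PostgreSQL, you assign the role that matches the authentication method you want for each user. The user names here are hypothetical; run the statements as a user with privileges to create users and grant roles:

```sql
-- Create a database user that authenticates with IAM only.
CREATE USER app_iam_user WITH LOGIN;
GRANT rds_iam TO app_iam_user;

-- Create a separate user that authenticates with Kerberos only.
CREATE USER app_kerberos_user WITH LOGIN;
GRANT rds_ad TO app_kerberos_user;
```

A user with neither role continues to authenticate with a password.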

**Important**  
We strongly recommend that you do not use the master user directly in your applications. Instead, adhere to the best practice of using a database user created with the minimal privileges required for your application.

**Topics**
+ [Password authentication](#password-authentication)
+ [IAM database authentication](#iam-database-authentication)
+ [Kerberos authentication](#kerberos-authentication)

## Password authentication
<a name="password-authentication"></a>

With *password authentication,* your database performs all administration of user accounts. You create users with SQL statements such as `CREATE USER`, with the appropriate clause required by the DB engine for specifying passwords. For example, in MySQL the statement is `CREATE USER` *name* `IDENTIFIED BY` *password*, while in PostgreSQL, the statement is `CREATE USER` *name* `WITH PASSWORD` *password*. 
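As a concrete sketch of those statements (the user name and password are placeholders, not recommendations):

```sql
-- MySQL
CREATE USER 'app_user'@'%' IDENTIFIED BY 'choose-a-strong-password';

-- PostgreSQL
CREATE USER app_user WITH PASSWORD 'choose-a-strong-password';
```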

With password authentication, your database controls and authenticates user accounts. If a DB engine has strong password management features, they can enhance security. Password authentication might be easier to administer when you have a small user community. Because passwords are handled in clear text in this case, integrating with AWS Secrets Manager can enhance security.

For information about using Secrets Manager with Amazon Aurora, see [Creating a basic secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_create-basic-secret.html) and [Rotating secrets for supported Amazon RDS databases](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets-rds.html) in the *AWS Secrets Manager User Guide*. For information about programmatically retrieving your secrets in your custom applications, see [Retrieving the secret value](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_retrieve-secret.html) in the *AWS Secrets Manager User Guide*. 

## IAM database authentication
<a name="iam-database-authentication"></a>

You can authenticate to your DB cluster using AWS Identity and Access Management (IAM) database authentication. With this authentication method, you don't need to use a password when you connect to a DB cluster. Instead, you use an authentication token.

For more information about IAM database authentication, including information about availability for specific DB engines, see [IAM database authentication ](UsingWithRDS.IAMDBAuth.md) .
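For example, instead of supplying a password, you generate a short-lived authentication token with the AWS CLI and pass it as the password when you connect. The host name, Region, and user name in this sketch are placeholders:

```shell
aws rds generate-db-auth-token \
    --hostname mydbcluster.cluster-123456789012.us-east-1.rds.amazonaws.com \
    --port 3306 \
    --region us-east-1 \
    --username app_iam_user
```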

## Kerberos authentication
<a name="kerberos-authentication"></a>

 Amazon Aurora supports external authentication of database users using Kerberos and Microsoft Active Directory. Kerberos is a network authentication protocol that uses tickets and symmetric-key cryptography to eliminate the need to transmit passwords over the network. Kerberos has been built into Active Directory and is designed to authenticate users to network resources, such as databases.

 Amazon Aurora support for Kerberos and Active Directory provides the benefits of single sign-on and centralized authentication of database users. You can keep your user credentials in Active Directory. Active Directory provides a centralized place for storing and managing credentials for multiple DB clusters. 

To use credentials from your self-managed Active Directory, you need to set up a trust relationship with the AWS Directory Service for Microsoft Active Directory that the DB cluster is joined to.

 Aurora PostgreSQL and Aurora MySQL support one-way and two-way forest trust relationships with forest-wide authentication or selective authentication.

In some scenarios, you can configure Kerberos authentication over an external trust relationship. This requires additional settings in your self-managed Active Directory, including but not limited to [Kerberos Forest Search Order](https://learn.microsoft.com/en-us/troubleshoot/windows-server/active-directory/kfso-not-work-in-external-trust-event-is-17). 

Aurora supports Kerberos authentication for Aurora MySQL and Aurora PostgreSQL DB clusters. For more information, see [Using Kerberos authentication for Aurora MySQL](aurora-mysql-kerberos.md) and [Using Kerberos authentication with Aurora PostgreSQL](postgresql-kerberos.md) .

# Password management with Amazon Aurora and AWS Secrets Manager
<a name="rds-secrets-manager"></a>

Amazon Aurora integrates with Secrets Manager to manage master user passwords for your DB clusters.

**Topics**
+ [Region and version availability](#rds-secrets-manager-availability)
+ [Limitations for Secrets Manager integration with Amazon Aurora](#rds-secrets-manager-limitations)
+ [Overview of managing master user passwords with AWS Secrets Manager](#rds-secrets-manager-overview)
+ [Benefits of managing master user passwords with Secrets Manager](#rds-secrets-manager-benefits)
+ [Permissions required for Secrets Manager integration](#rds-secrets-manager-permissions)
+ [Enforcing Aurora management of the master user password in AWS Secrets Manager](#rds-secrets-manager-auth)
+ [Managing the master user password for a DB cluster with Secrets Manager](#rds-secrets-manager-db-cluster)
+ [Rotating the master user password secret for a DB cluster](#rds-secrets-manager-rotate-db-cluster)
+ [Viewing the details about a secret for a DB cluster](#rds-secrets-manager-view-db-cluster)

## Region and version availability
<a name="rds-secrets-manager-availability"></a>

Feature availability and support varies across specific versions of each database engine and across AWS Regions. For more information about version and Region availability with Secrets Manager integration with Amazon Aurora, see [Supported Regions and Aurora DB engines for Secrets Manager integration](Concepts.Aurora_Fea_Regions_DB-eng.Feature.SecretsManager.md). 

## Limitations for Secrets Manager integration with Amazon Aurora
<a name="rds-secrets-manager-limitations"></a>

Managing master user passwords with Secrets Manager isn't supported for the following features:
+ Amazon RDS Blue/Green Deployments
+ DB clusters that are part of an Aurora global database
+ Aurora Serverless v1 DB clusters
+ Cross-Region read replicas
+ Binary log external replication

## Overview of managing master user passwords with AWS Secrets Manager
<a name="rds-secrets-manager-overview"></a>

With AWS Secrets Manager, you can replace hard-coded credentials in your code, including database passwords, with an API call to Secrets Manager to retrieve the secret programmatically. For more information about Secrets Manager, see the [AWS Secrets Manager User Guide](https://docs.aws.amazon.com/secretsmanager/latest/userguide/). 

When you store database secrets in Secrets Manager, your AWS account incurs charges. For information about pricing, see [AWS Secrets Manager Pricing](https://aws.amazon.com/secrets-manager/pricing).

You can specify that Aurora manages the master user password in Secrets Manager for an Amazon Aurora DB cluster when you perform one of the following operations:
+ Create a DB cluster
+ Modify a DB cluster
+ Restore a DB cluster from Amazon S3 (Aurora MySQL only)

When you specify that Aurora manages the master user password in Secrets Manager, Aurora generates the password and stores it in Secrets Manager. You can interact directly with the secret to retrieve the credentials for the master user. You can also specify a customer managed key to encrypt the secret, or use the KMS key that is provided by Secrets Manager.

Aurora manages the settings for the secret and rotates the secret every seven days by default. You can modify some of the settings, such as the rotation schedule. If you delete a DB cluster that manages a secret in Secrets Manager, the secret and its associated metadata are also deleted.

To connect to a DB cluster with the credentials in a secret, you can retrieve the secret from Secrets Manager. For more information, see [ Retrieve secrets from AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/retrieving-secrets.html) and [Connect to a SQL database with credentials in an AWS Secrets Manager secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/retrieving-secrets_jdbc.html) in the *AWS Secrets Manager User Guide*. 
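The secret that Aurora manages stores the credentials as a JSON document with `username` and `password` keys. A minimal sketch of parsing a retrieved secret string in Python follows; the secret value here is a made-up example, not real output:

```python
import json

# Example SecretString as returned by get-secret-value (made-up values).
secret_string = '{"username": "admin", "password": "EXAMPLE-generated-password"}'

# Parse the JSON document and pull out the credentials for the connection.
creds = json.loads(secret_string)
print(creds["username"])
```

You would then pass `creds["username"]` and `creds["password"]` to your database driver's connect call.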

## Benefits of managing master user passwords with Secrets Manager
<a name="rds-secrets-manager-benefits"></a>

Managing Aurora master user passwords with Secrets Manager provides the following benefits:
+ Aurora automatically generates database credentials.
+ Aurora automatically stores and manages database credentials in AWS Secrets Manager.
+ Aurora rotates database credentials regularly, without requiring application changes.
+ Secrets Manager secures database credentials from human access and plain text view.
+ Secrets Manager allows retrieval of database credentials in secrets for database connections.
+ Secrets Manager allows fine-grained control of access to database credentials in secrets using IAM.
+ You can optionally separate database encryption from credentials encryption with different KMS keys.
+ You can eliminate manual management and rotation of database credentials.
+ You can monitor database credentials easily with AWS CloudTrail and Amazon CloudWatch.

For more information about the benefits of Secrets Manager, see the [AWS Secrets Manager User Guide](https://docs.aws.amazon.com/secretsmanager/latest/userguide/).

## Permissions required for Secrets Manager integration
<a name="rds-secrets-manager-permissions"></a>

Users must have the required permissions to perform operations related to Secrets Manager integration. You can create IAM policies that grant permissions to perform specific API operations on the specified resources they need. You can then attach those policies to the IAM permission sets or roles that require those permissions. For more information, see [Identity and access management for Amazon Aurora](UsingWithRDS.IAM.md).

For create, modify, or restore operations, the user who specifies that Aurora manages the master user password in Secrets Manager must have permissions to perform the following operations:
+ `kms:DescribeKey`
+ `secretsmanager:CreateSecret`
+ `secretsmanager:TagResource`

The `kms:DescribeKey` permission is required to access your customer-managed key for the `MasterUserSecretKmsKeyId` and to describe `aws/secretsmanager`.

For create, modify, or restore operations, the user who specifies the customer managed key to encrypt the secret in Secrets Manager must have permissions to perform the following operations:
+ `kms:Decrypt`
+ `kms:GenerateDataKey`
+ `kms:CreateGrant`

For modify operations, the user who rotates the master user password in Secrets Manager must have permissions to perform the following operation:
+ `secretsmanager:RotateSecret`

## Enforcing Aurora management of the master user password in AWS Secrets Manager
<a name="rds-secrets-manager-auth"></a>

You can use IAM condition keys to enforce Aurora management of the master user password in AWS Secrets Manager. The following policy doesn't allow users to create or restore DB instances or DB clusters unless the master user password is managed by Aurora in Secrets Manager.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "rds:CreateDBInstance",
                "rds:CreateDBCluster",
                "rds:RestoreDBInstanceFromS3",
                "rds:RestoreDBClusterFromS3"
            ],
            "Resource": "*",
            "Condition": {
                "Bool": {
                    "rds:ManageMasterUserPassword": false
                }
            }
        }
    ]
}
```


**Note**  
This policy enforces password management in AWS Secrets Manager at creation time. However, a user can still turn off Secrets Manager integration and set a master password manually by modifying the cluster.  
To prevent this, include `rds:ModifyDBInstance` and `rds:ModifyDBCluster` in the action block of the policy. Be aware that this also prevents the user from applying any other modifications to existing clusters that don't have Secrets Manager integration turned on. 
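A hedged sketch of that stricter policy, extending the example above with the modify actions:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "rds:CreateDBInstance",
                "rds:CreateDBCluster",
                "rds:RestoreDBInstanceFromS3",
                "rds:RestoreDBClusterFromS3",
                "rds:ModifyDBInstance",
                "rds:ModifyDBCluster"
            ],
            "Resource": "*",
            "Condition": {
                "Bool": {
                    "rds:ManageMasterUserPassword": false
                }
            }
        }
    ]
}
```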

For more information about using condition keys in IAM policies, see [Policy condition keys for Aurora](security_iam_service-with-iam.md#UsingWithRDS.IAM.Conditions) and [Example policies: Using condition keys](UsingWithRDS.IAM.Conditions.Examples.md).

## Managing the master user password for a DB cluster with Secrets Manager
<a name="rds-secrets-manager-db-cluster"></a>

You can configure Aurora management of the master user password in Secrets Manager when you perform the following actions:
+ [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md)
+ [Modifying an Amazon Aurora DB cluster](Aurora.Modifying.md)
+ [Migrating data from an external MySQL database to an Amazon Aurora MySQL DB cluster](AuroraMySQL.Migrating.ExtMySQL.md)

You can use the RDS console, the AWS CLI, or the RDS API to perform these actions.

### Console
<a name="rds-secrets-manager-db-cluster-console"></a>

Follow the instructions for creating or modifying a DB cluster with the RDS console:
+ [Creating a DB cluster](Aurora.CreateInstance.md#Aurora.CreateInstance.Creating)
+ [Modifying a DB instance in a DB cluster](Aurora.Modifying.md#Aurora.Modifying.Instance)

  In the RDS console, you can modify any DB instance to specify the master user password management settings for the entire DB cluster.
+ [Restoring an Amazon Aurora MySQL DB cluster from an Amazon S3 bucket](AuroraMySQL.Migrating.ExtMySQL.S3.md#AuroraMySQL.Migrating.ExtMySQL.S3.Restore)

When you use the RDS console to perform one of these operations, you can specify that the master user password is managed by Aurora in Secrets Manager. To do so when you are creating or restoring a DB cluster, select **Manage master credentials in AWS Secrets Manager** in **Credential settings**. When you are modifying a DB cluster, select **Manage master credentials in AWS Secrets Manager** in **Settings**.

The following image is an example of the **Manage master credentials in AWS Secrets Manager** setting when you are creating or restoring a DB cluster.

![\[Manage master credentials in AWS Secrets Manager\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/secrets-manager-credential-settings.png)


When you select this option, Aurora generates the master user password and manages it throughout its lifecycle in Secrets Manager.

![\[Manage master credentials in AWS Secrets Manager selected\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/secrets-manager-integration-create.png)


You can choose to encrypt the secret with a KMS key that Secrets Manager provides or with a customer managed key that you create. After Aurora is managing the database credentials for a DB cluster, you can't change the KMS key that is used to encrypt the secret.

You can choose other settings to meet your requirements.

For more information about the available settings when you are creating a DB cluster, see [Settings for Aurora DB clusters](Aurora.CreateInstance.md#Aurora.CreateInstance.Settings). For more information about the available settings when you are modifying a DB cluster, see [Settings for Amazon Aurora](Aurora.Modifying.md#Aurora.Modifying.Settings).

### AWS CLI
<a name="rds-secrets-manager-db-cluster-cli"></a>

To specify that Aurora manages the master user password in Secrets Manager, specify the `--manage-master-user-password` option in one of the following commands:
+ [create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html)
+ [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html)
+ [restore-db-cluster-from-s3](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-from-s3.html)

When you specify the `--manage-master-user-password` option in these commands, Aurora generates the master user password and manages it throughout its lifecycle in Secrets Manager.

To encrypt the secret, you can specify a customer managed key or use the default KMS key that is provided by Secrets Manager. Use the `--master-user-secret-kms-key-id` option to specify a customer managed key. The AWS KMS key identifier is the key ARN, key ID, alias ARN, or alias name for the KMS key. To use a KMS key in a different AWS account, specify the key ARN or alias ARN. After Aurora is managing the database credentials for a DB cluster, you can't change the KMS key that is used to encrypt the secret.

You can choose other settings to meet your requirements.

For more information about the available settings when you are creating a DB cluster, see [Settings for Aurora DB clusters](Aurora.CreateInstance.md#Aurora.CreateInstance.Settings). For more information about the available settings when you are modifying a DB cluster, see [Settings for Amazon Aurora](Aurora.Modifying.md#Aurora.Modifying.Settings).

This example creates a DB cluster and specifies that Aurora manages the password in Secrets Manager. The secret is encrypted using the KMS key that is provided by Secrets Manager.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds create-db-cluster \
    --db-cluster-identifier sample-cluster \
    --engine aurora-mysql \
    --engine-version 8.0 \
    --master-username admin \
    --manage-master-user-password
```
For Windows:  

```
aws rds create-db-cluster ^
    --db-cluster-identifier sample-cluster ^
    --engine aurora-mysql ^
    --engine-version 8.0 ^
    --master-username admin ^
    --manage-master-user-password
```
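As a variation on the example above, a sketch that also specifies a customer managed key for the secret. The key ARN is a placeholder:

```
aws rds create-db-cluster \
    --db-cluster-identifier sample-cluster \
    --engine aurora-mysql \
    --engine-version 8.0 \
    --master-username admin \
    --manage-master-user-password \
    --master-user-secret-kms-key-id arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab
```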

### RDS API
<a name="rds-secrets-manager-db-cluster-api"></a>

To specify that Aurora manages the master user password in Secrets Manager, set the `ManageMasterUserPassword` parameter to `true` in one of the following operations:
+ [CreateDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html)
+ [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html)
+ [RestoreDBClusterFromS3](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromS3.html)

When you set the `ManageMasterUserPassword` parameter to `true` in one of these operations, Aurora generates the master user password and manages it throughout its lifecycle in Secrets Manager.

To encrypt the secret, you can specify a customer managed key or use the default KMS key that is provided by Secrets Manager. Use the `MasterUserSecretKmsKeyId` parameter to specify a customer managed key. The AWS KMS key identifier is the key ARN, key ID, alias ARN, or alias name for the KMS key. To use a KMS key in a different AWS account, specify the key ARN or alias ARN. After Aurora is managing the database credentials for a DB cluster, you can't change the KMS key that is used to encrypt the secret.

## Rotating the master user password secret for a DB cluster
<a name="rds-secrets-manager-rotate-db-cluster"></a>

When Aurora rotates a master user password secret, Secrets Manager generates a new secret version for the existing secret. The new version of the secret contains the new master user password. Aurora changes the master user password for the DB cluster to match the password for the new secret version.

You can rotate a secret immediately instead of waiting for a scheduled rotation. To rotate a master user password secret in Secrets Manager, modify the DB cluster. For information about modifying a DB cluster, see [Modifying an Amazon Aurora DB cluster](Aurora.Modifying.md).

You can rotate a master user password secret immediately with the RDS console, the AWS CLI, or the RDS API. The new password is always 28 characters long and contains at least one uppercase character, one lowercase character, one number, and one punctuation character. 

### Console
<a name="rds-secrets-manager-rotate-db-instance-console"></a>

To rotate a master user password secret using the RDS console, modify the DB cluster and select **Rotate secret immediately** in **Settings**.

![\[Rotate a master user password secret immediately\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/secrets-manager-integration-rotate-aurora.png)


Follow the instructions for modifying a DB cluster with the RDS console in [Modifying the DB cluster by using the console, CLI, and API](Aurora.Modifying.md#Aurora.Modifying.Cluster). You must choose **Apply immediately** on the confirmation page.

### AWS CLI
<a name="rds-secrets-manager-rotate-db-instance-cli"></a>

To rotate a master user password secret using the AWS CLI, use the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) command and specify the `--rotate-master-user-password` option. You must specify the `--apply-immediately` option when you rotate the master password.

This example rotates a master user password secret.

**Example**  
For Linux, macOS, or Unix:  

```
aws rds modify-db-cluster \
    --db-cluster-identifier mydbcluster \
    --rotate-master-user-password \
    --apply-immediately
```
For Windows:  

```
aws rds modify-db-cluster ^
    --db-cluster-identifier mydbcluster ^
    --rotate-master-user-password ^
    --apply-immediately
```

### RDS API
<a name="rds-secrets-manager-rotate-db-instance-api"></a>

You can rotate a master user password secret using the [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) operation and setting the `RotateMasterUserPassword` parameter to `true`. You must set the `ApplyImmediately` parameter to `true` when you rotate the master password.

## Viewing the details about a secret for a DB cluster
<a name="rds-secrets-manager-view-db-cluster"></a>

You can retrieve your secrets using the console ([https://console.aws.amazon.com/secretsmanager/](https://console.aws.amazon.com/secretsmanager/)) or the AWS CLI ([get-secret-value](https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/get-secret-value.html) Secrets Manager command).

You can find the Amazon Resource Name (ARN) of a secret managed by Aurora in Secrets Manager with the RDS console, the AWS CLI, or the RDS API.

### Console
<a name="rds-secrets-manager-view-db-cluster-console"></a>

**To view the details about a secret managed by Aurora in Secrets Manager**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the name of the DB cluster to show its details.

1. Choose the **Configuration** tab.

   In **Master Credentials ARN**, you can view the secret ARN.  
![\[View the details about a secret managed by Aurora in Secrets Manager\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/secrets-manager-integration-view-cluster.png)

   You can follow the **Manage in Secrets Manager** link to view and manage the secret in the Secrets Manager console.

### AWS CLI
<a name="rds-secrets-manager-view-db-instance-cli"></a>

You can use the RDS AWS CLI [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) command to find the following information about a secret managed by Aurora in Secrets Manager:
+ `SecretArn` – The ARN of the secret
+ `SecretStatus` – The status of the secret

  The possible status values include the following:
  + `creating` – The secret is being created.
  + `active` – The secret is available for normal use and rotation.
  + `rotating` – The secret is being rotated.
  + `impaired` – The secret can be used to access database credentials, but it can't be rotated. A secret might have this status if, for example, permissions are changed so that RDS can no longer access the secret or the KMS key for the secret.

    When a secret has this status, you can correct the condition that caused it. If you correct the condition, the status remains `impaired` until the next rotation. Alternatively, you can modify the DB cluster to turn off automatic management of database credentials, and then modify the DB cluster again to turn on automatic management of database credentials. To modify the DB cluster, use the `--manage-master-user-password` option in the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) command.
+ `KmsKeyId` – The ARN of the KMS key that is used to encrypt the secret
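The alternative recovery path described above can be sketched with two `modify-db-cluster` calls. This is a sketch, not a verified procedure: the cluster identifier is a placeholder, and it assumes the cluster is otherwise healthy.

```shell
# Hedged sketch: clear an impaired secret by turning automatic management
# of the master user password off and then on again.
# "mydbcluster" is a placeholder identifier.
aws rds modify-db-cluster \
    --db-cluster-identifier mydbcluster \
    --no-manage-master-user-password \
    --apply-immediately

aws rds modify-db-cluster \
    --db-cluster-identifier mydbcluster \
    --manage-master-user-password \
    --apply-immediately
```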

Specify the `--db-cluster-identifier` option to show output for a specific DB cluster. This example shows the output for a secret that is used by a DB cluster.

**Example**  

```
aws rds describe-db-clusters --db-cluster-identifier mydbcluster
```
The following sample shows the output for a secret:  

```
"MasterUserSecret": {
                "SecretArn": "arn:aws:secretsmanager:eu-west-1:123456789012:secret:rds!cluster-033d7456-2c96-450d-9d48-f5de3025e51c-xmJRDx",
                "SecretStatus": "active",
                "KmsKeyId": "arn:aws:kms:eu-west-1:123456789012:key/0987dcba-09fe-87dc-65ba-ab0987654321"
            }
```

When you have the secret ARN, you can view details about the secret using the [get-secret-value](https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/get-secret-value.html) Secrets Manager CLI command.

This example shows the details for the secret in the previous sample output.

**Example**  
For Linux, macOS, or Unix:  

```
aws secretsmanager get-secret-value \
    --secret-id 'arn:aws:secretsmanager:eu-west-1:123456789012:secret:rds!cluster-033d7456-2c96-450d-9d48-f5de3025e51c-xmJRDx'
```
For Windows:  

```
aws secretsmanager get-secret-value ^
    --secret-id 'arn:aws:secretsmanager:eu-west-1:123456789012:secret:rds!cluster-033d7456-2c96-450d-9d48-f5de3025e51c-xmJRDx'
```
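The `SecretString` field in the `get-secret-value` response holds the credentials as a JSON document. The following sketch parses a sample payload locally with `python3`; the payload shown is a made-up example of the `{"username": ..., "password": ...}` shape, not output from a real secret.

```shell
# Hedged sketch: parse a Secrets Manager SecretString payload.
# In practice, capture the value with:
#   aws secretsmanager get-secret-value --secret-id <arn> \
#       --query SecretString --output text
# The value below is a hypothetical example of the credential shape.
SECRET_STRING='{"username":"admin","password":"EXAMPLE-PASSWORD"}'

DB_USER=$(printf '%s' "$SECRET_STRING" \
    | python3 -c 'import json,sys; print(json.load(sys.stdin)["username"])')

echo "user=$DB_USER"
```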

### RDS API
<a name="rds-secrets-manager-rotate-db-instance-api"></a>

You can view the ARN, status, and KMS key for a secret managed by Aurora in Secrets Manager using the [DescribeDBClusters](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusters.html) RDS operation and setting the `DBClusterIdentifier` parameter to a DB cluster identifier. Details about the secret are included in the output.

When you have the secret ARN, you can view details about the secret using the [GetSecretValue](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html) Secrets Manager operation.

# Data protection in Amazon RDS
<a name="DataDurability"></a>

The AWS [shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/) applies to data protection in Amazon Relational Database Service. As described in this model, AWS is responsible for protecting the global infrastructure that runs all of the AWS Cloud. You are responsible for maintaining control over your content that is hosted on this infrastructure. You are also responsible for the security configuration and management tasks for the AWS services that you use. For more information about data privacy, see the [Data Privacy FAQ](https://aws.amazon.com/compliance/data-privacy-faq/). For information about data protection in Europe, see the [AWS Shared Responsibility Model and GDPR](https://aws.amazon.com/blogs/security/the-aws-shared-responsibility-model-and-gdpr/) blog post on the *AWS Security Blog*.

For data protection purposes, we recommend that you protect AWS account credentials and set up individual users with AWS IAM Identity Center or AWS Identity and Access Management (IAM). That way, each user is given only the permissions necessary to fulfill their job duties. We also recommend that you secure your data in the following ways:
+ Use multi-factor authentication (MFA) with each account.
+ Use SSL/TLS to communicate with AWS resources. We require TLS 1.2 and recommend TLS 1.3.
+ Set up API and user activity logging with AWS CloudTrail. For information about using CloudTrail trails to capture AWS activities, see [Working with CloudTrail trails](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-trails.html) in the *AWS CloudTrail User Guide*.
+ Use AWS encryption solutions, along with all default security controls within AWS services.
+ Use advanced managed security services such as Amazon Macie, which assists in discovering and securing sensitive data that is stored in Amazon S3.
+ If you require FIPS 140-3 validated cryptographic modules when accessing AWS through a command line interface or an API, use a FIPS endpoint. For more information about the available FIPS endpoints, see [Federal Information Processing Standard (FIPS) 140-3](https://aws.amazon.com/compliance/fips/).

We strongly recommend that you never put confidential or sensitive information, such as your customers' email addresses, into tags or free-form text fields such as a **Name** field. This includes when you work with Amazon RDS or other AWS services using the console, API, AWS CLI, or AWS SDKs. Any data that you enter into tags or free-form text fields used for names may be used for billing or diagnostic logs. If you provide a URL to an external server, we strongly recommend that you do not include credentials information in the URL to validate your request to that server.

**Topics**
+ [Protecting data using encryption](Encryption.md)
+ [Internetwork traffic privacy](inter-network-traffic-privacy.md)

# Protecting data using encryption
<a name="Encryption"></a>

Amazon Aurora encrypts database resources at the storage layer. You can also encrypt connections to DB clusters.

**Topics**
+ [Encrypting Amazon Aurora resources](Overview.Encryption.md)
+ [AWS KMS key management](Overview.Encryption.Keys.md)
+ [Using SSL/TLS to encrypt a connection to a DB cluster](UsingWithRDS.SSL.md)
+ [Rotating your SSL/TLS certificate](UsingWithRDS.SSL-certificate-rotation.md)

# Encrypting Amazon Aurora resources
<a name="Overview.Encryption"></a>

Amazon Aurora protects your data both at rest and in transit—whether moving between on-premises clients and Amazon Aurora, or between Amazon Aurora and other AWS resources. Amazon Aurora encrypts all user data in your Amazon Aurora DB clusters including logs, automated backups, and snapshots.

After your data is encrypted, Amazon Aurora handles authentication of access and decryption of your data transparently with a minimal impact on performance. You don't need to modify your database client applications to use encryption.

**Note**  
For encrypted and unencrypted DB clusters, data that is in transit between the source and the read replicas is encrypted, even when replicating across AWS Regions.

**Topics**
+ [Overview of encryption in Amazon Aurora resources](#Overview.Encryption.Overview)
+ [Encrypting an Amazon Aurora DB cluster](#Overview.Encryption.Enabling)
+ [Determining whether encryption is turned on for a DB cluster](#Overview.Encryption.Determining)
+ [Availability of Amazon Aurora encryption](#Overview.Encryption.Availability)
+ [Encryption in transit](#Overview.Encryption.InTransit)
+ [Limitations of Amazon Aurora encrypted DB clusters](#Overview.Encryption.Limitations)

## Overview of encryption in Amazon Aurora resources
<a name="Overview.Encryption.Overview"></a>

Amazon Aurora encrypted DB clusters provide an additional layer of data protection by securing your data from unauthorized access to the underlying storage. All new database clusters created in Amazon Aurora on or after February 18, 2026, are encrypted at rest using industry-standard AES-256 encryption. This encryption happens automatically in the background, securing your data without requiring any action from you. It also helps reduce the operational burden and complexity involved in protecting sensitive data. With encryption at rest, you can secure compliance-sensitive and security-critical applications against both accidental and malicious threats while meeting regulatory requirements.

Amazon Aurora uses an AWS Key Management Service key to encrypt these resources. AWS KMS combines secure, highly available hardware and software to provide a key management system scaled for the cloud. When creating a new database cluster, Amazon Aurora uses Server-Side Encryption (SSE) with an [AWS-owned key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-owned-key) by default. However, you can choose from three encryption types based on your security and compliance needs:
+ **AWS owned key (SSE-RDS)** – A fully AWS-controlled encryption key that you cannot view or manage, used automatically by Aurora for default encryption.
+ **AWS managed key (AMK)** – This key is created and managed by AWS and is visible in your account but not customizable. There is no monthly fee, but AWS KMS API charges will apply.
+ **Customer managed key (CMK)** – The key is stored in your account and is created, owned, and managed by you. You have full control over the KMS key (AWS KMS charges apply).

AWS managed keys are a legacy encryption option that remains available for backward compatibility. Amazon Aurora uses AWS-owned keys by default to encrypt your data, providing strong security protection without additional charges or management overhead. For most use cases, we recommend using either the default AWS-owned key for simplicity and cost efficiency, or a customer managed key (CMK) if you require full control over your encryption keys. For more information about key types, see [ customer managed keys and AWS managed keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#key-mgmt).

**Note**  
For source DB instances or clusters created before February 18, 2026, where you did not opt in for encryption, snapshots, clones, and Amazon Aurora replicas (read instances) created from those sources remain unencrypted. However, restore operations and logical replication outside the Amazon Aurora cluster produce encrypted instances.

 For an Amazon Aurora encrypted DB cluster, all DB instances, logs, backups, and snapshots are encrypted. For more information about the availability and limitations of encryption, see [Availability of Amazon Aurora encryption](#Overview.Encryption.Availability) and [Limitations of Amazon Aurora encrypted DB clusters](#Overview.Encryption.Limitations).

When you create an encrypted DB cluster, you can choose a customer managed key or the AWS managed key for Amazon Aurora to encrypt your DB cluster. If you don't specify the key identifier for a customer managed key, Amazon Aurora uses the AWS managed key for your new DB cluster. Amazon Aurora creates an AWS managed key for Amazon Aurora for your AWS account. Your AWS account has a different AWS managed key for Amazon Aurora for each AWS Region.

To manage the customer managed keys used for encrypting and decrypting your Amazon Aurora resources, you use the [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/). 

Using AWS KMS, you can create customer managed keys and define the policies to control the use of these customer managed keys. AWS KMS supports CloudTrail, so you can audit KMS key usage to verify that customer managed keys are being used appropriately. You can use your customer managed keys with Amazon Aurora and supported AWS services such as Amazon S3, Amazon EBS, and Amazon Redshift. For a list of services that are integrated with AWS KMS, see [AWS Service Integration](https://aws.amazon.com/kms/features/#AWS_Service_Integration). Some considerations about using KMS keys: 
+ After you create an encrypted DB cluster, you can't change the KMS key used by that cluster. Be sure to determine your KMS key requirements before you create your encrypted DB cluster. If you need to change the encryption key for your DB cluster, follow these steps:
  + Create a manual snapshot of your cluster. 
  + Restore the snapshot and enable encryption with your desired KMS key during the restore operation. 
+ If you restore an unencrypted snapshot and choose no encryption, the database cluster created will be encrypted using the default encryption at rest (AWS-owned key).
+ You can't share a snapshot that has been encrypted using the AWS managed key of the AWS account that shared the snapshot.
+ Each DB instance in the DB cluster shares the same storage encrypted with the same KMS key.
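The snapshot-and-restore steps for changing the encryption key might look like the following with the AWS CLI. This is a sketch: all identifiers, the engine value, and the key ARN are placeholders.

```shell
# Hedged sketch: move a cluster's data under a different KMS key by
# snapshotting and restoring. Identifiers and the key ARN are placeholders.
aws rds create-db-cluster-snapshot \
    --db-cluster-identifier mydbcluster \
    --db-cluster-snapshot-identifier mydbcluster-snap

aws rds restore-db-cluster-from-snapshot \
    --db-cluster-identifier mydbcluster-newkey \
    --snapshot-identifier mydbcluster-snap \
    --engine aurora-mysql \
    --kms-key-id arn:aws:kms:us-east-1:123456789012:key/your-key-id
```

Wait for the snapshot to reach the `available` status before restoring. For Aurora, the restored cluster has no DB instances; create them afterward with `create-db-instance`.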

**Important**  
Amazon Aurora can lose access to the KMS key for a DB cluster when you disable the KMS key. In these cases, the encrypted DB cluster goes into the `inaccessible-encryption-credentials-recoverable` state. The DB cluster remains in this state for seven days, during which the DB cluster is stopped. API calls made to the DB cluster during this time might not succeed. To recover the DB cluster, enable the KMS key and restart the DB cluster. You can enable the KMS key from the AWS Management Console, AWS CLI, or RDS API. Restart the DB cluster using the AWS CLI [start-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/start-db-cluster.html) command or the AWS Management Console.  
The `inaccessible-encryption-credentials-recoverable` state only applies to DB clusters that can stop. This recoverable state is not applicable to instances that can't stop, such as clusters with cross-Region read replicas. For more information, see [Limitations for stopping and starting Aurora DB clusters](aurora-cluster-stop-start.md#aurora-cluster-stop-limitations).  
If the DB cluster isn't recovered within seven days, it goes into the terminal `inaccessible-encryption-credentials` state. In this state, the DB cluster is not usable anymore and you can only restore the DB cluster from a backup. We strongly recommend that you always turn on backups to guard against the loss of data in your databases.  
During the creation of a DB cluster, Aurora checks whether the calling principal has access to the KMS key and generates a grant from the KMS key that it uses for the entire lifetime of the DB cluster. Revoking the calling principal's access to the KMS key does not affect a running database. When using KMS keys in cross-account scenarios, such as copying a snapshot to another account, the KMS key needs to be shared with the other account. If you create a DB cluster from the snapshot without specifying a different KMS key, the new cluster uses the KMS key from the source account. Revoking access to the key after you create the DB cluster does not affect the cluster. However, disabling the key impacts all DB clusters encrypted with that key. To prevent this, specify a different key during the snapshot copy operation.

For more information about KMS keys, see [AWS KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#kms_keys) in the *AWS Key Management Service Developer Guide* and [AWS KMS key management](Overview.Encryption.Keys.md). 

## Encrypting an Amazon Aurora DB cluster
<a name="Overview.Encryption.Enabling"></a>

All new DB clusters created on or after February 18, 2026, are encrypted by default with an AWS owned key.

To encrypt a new DB cluster with an AWS managed key or a customer managed key, choose that option in the console. For information on creating a DB cluster, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md).

If you use the [create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html) AWS CLI command to create an encrypted DB cluster, set the `--storage-encrypted` parameter. If you use the [CreateDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html) API operation, set the `StorageEncrypted` parameter to true.

Once you have created an encrypted DB cluster, you can't change the KMS key used by that DB cluster. Therefore, be sure to determine your KMS key requirements before you create your encrypted DB cluster.

If you use the AWS CLI `create-db-cluster` command to create an encrypted DB cluster with a customer managed key, set the `--kms-key-id` parameter to any key identifier for the KMS key. If you use the Amazon RDS API `CreateDBCluster` operation, set the `KmsKeyId` parameter to any key identifier for the KMS key. To use a customer managed key in a different AWS account, specify the key ARN or alias ARN.
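Putting these flags together, creating a cluster encrypted with a customer managed key might look like the following sketch. The identifiers, engine value, and key ARN are placeholders.

```shell
# Hedged sketch: create a DB cluster encrypted with a customer managed key.
# All identifiers and the key ARN are placeholders.
aws rds create-db-cluster \
    --db-cluster-identifier my-encrypted-cluster \
    --engine aurora-mysql \
    --master-username admin \
    --manage-master-user-password \
    --storage-encrypted \
    --kms-key-id arn:aws:kms:us-east-1:123456789012:key/your-key-id
```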

## Determining whether encryption is turned on for a DB cluster
<a name="Overview.Encryption.Determining"></a>

You can use the AWS Management Console, AWS CLI, or RDS API to determine whether encryption at rest is turned on for a DB cluster.

### Console
<a name="Overview.Encryption.Determining.CON"></a>

**To determine whether encryption at rest is turned on for a DB cluster**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the name of the DB cluster that you want to check to view its details.

1. Choose the **Configuration** tab and check the **Encryption** value.  
![\[Checking encryption at rest for a DB cluster\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/encryption-aurora-instance.png)

### AWS CLI
<a name="Overview.Encryption.Determining.CLI"></a>

To determine whether encryption at rest is turned on for a DB cluster by using the AWS CLI, call the [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) command with the following option: 
+ `--db-cluster-identifier` – The name of the DB cluster.

The following example uses a query to return either `TRUE` or `FALSE` regarding encryption at rest for the `mydb` DB cluster.

**Example**  

```
aws rds describe-db-clusters --db-cluster-identifier mydb --query "*[].{StorageEncrypted:StorageEncrypted}" --output text
```
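If you save the `describe-db-clusters` output to a file, you can run the same check locally without another API call. The JSON below is a minimal, hypothetical sample of that output.

```shell
# Hedged sketch: read the StorageEncrypted flag from saved output.
# The JSON document is a minimal hypothetical sample.
cat > /tmp/clusters.json <<'EOF'
{"DBClusters": [{"DBClusterIdentifier": "mydb", "StorageEncrypted": true}]}
EOF

# Prints "True" or "False" (Python's spelling of the JSON boolean).
python3 -c 'import json; d = json.load(open("/tmp/clusters.json")); print(d["DBClusters"][0]["StorageEncrypted"])'
```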

### RDS API
<a name="Overview.Encryption.Determining.API"></a>

To determine whether encryption at rest is turned on for a DB cluster by using the Amazon RDS API, call the [DescribeDBClusters](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusters.html) operation with the following parameter: 
+ `DBClusterIdentifier` – The name of the DB cluster.

## Availability of Amazon Aurora encryption
<a name="Overview.Encryption.Availability"></a>

Amazon Aurora encryption is currently available for all database engines and storage types.

**Note**  
Amazon Aurora encryption is not available for the db.t2.micro DB instance class.

## Encryption in transit
<a name="Overview.Encryption.InTransit"></a>

**Encryption at the physical layer**  
All data flowing across AWS Regions over the AWS global network is automatically encrypted at the physical layer before it leaves AWS secured facilities. All traffic between Availability Zones is encrypted. Additional layers of encryption, including those listed in this section, may provide additional protections.

**Encryption provided by Amazon VPC peering and Transit Gateway cross-Region peering**  
All cross-Region traffic that uses Amazon VPC and Transit Gateway peering is automatically bulk-encrypted when it exits a Region. An additional layer of encryption is automatically provided at the physical layer for all traffic before it leaves AWS secured facilities.

**Encryption between instances**  
AWS provides secure and private connectivity between DB instances of all types. In addition, some instance types use the offload capabilities of the underlying Nitro System hardware to automatically encrypt in-transit traffic between instances. This encryption uses Authenticated Encryption with Associated Data (AEAD) algorithms, with 256-bit encryption. There is no impact on network performance. To support this additional in-transit traffic encryption between instances, the following requirements must be met:  
+ The instances use the following instance types:
  + **General purpose**: M6i, M6id, M6in, M6idn, M7g
  + **Memory optimized**: R6i, R6id, R6in, R6idn, R7g, X2idn, X2iedn, X2iezn
+ The instances are in the same AWS Region.
+ The instances are in the same VPC or peered VPCs, and the traffic does not pass through a virtual network device or service, such as a load balancer or a transit gateway.

## Limitations of Amazon Aurora encrypted DB clusters
<a name="Overview.Encryption.Limitations"></a>

The following limitations exist for Amazon Aurora encrypted DB clusters:
+ You can't turn off encryption on an encrypted DB cluster.
+ If you have an existing unencrypted cluster, all snapshots created from that cluster will also be unencrypted. To create an encrypted snapshot from an unencrypted cluster, you must copy the snapshot and specify a customer managed key during the copy operation. You cannot create an encrypted snapshot from an unencrypted snapshot without specifying a customer managed key.
+ You can't create an encrypted snapshot of an unencrypted DB cluster.
+ A snapshot of an encrypted DB cluster must be encrypted using the same KMS key as the DB cluster.
+ You can't convert an unencrypted DB cluster to an encrypted one. However, you can restore an unencrypted snapshot to an encrypted Aurora DB cluster. To do this, specify a KMS key when you restore from the unencrypted snapshot.
+ If you have an existing unencrypted cluster, any Amazon Aurora replica (read instance) created from that cluster will also be unencrypted. To create an encrypted cluster from an unencrypted cluster, you need to restore the database cluster. The restored cluster will be encrypted by default after the restore operation.
+ To copy an encrypted snapshot from one AWS Region to another, you must specify the KMS key in the destination AWS Region. This is because KMS keys are specific to the AWS Region that they are created in.

  The source snapshot remains encrypted throughout the copy process. Amazon Aurora uses envelope encryption to protect data during the copy process. For more information about envelope encryption, see [ Envelope encryption](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#enveloping) in the *AWS Key Management Service Developer Guide*.
+ You can't unencrypt an encrypted DB cluster. However, you can export data from an encrypted DB cluster and import the data into an unencrypted DB cluster.

# AWS KMS key management
<a name="Overview.Encryption.Keys"></a>

 Amazon Aurora automatically integrates with [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/) for key management. Amazon Aurora uses envelope encryption. For more information about envelope encryption, see [ Envelope encryption](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#enveloping) in the *AWS Key Management Service Developer Guide*. 

You can use two types of AWS KMS keys to encrypt your DB clusters. 
+ If you want full control over a KMS key, you must create a *customer managed key*. For more information about customer managed keys, see [Customer managed keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) in the *AWS Key Management Service Developer Guide*. 
+  *AWS managed keys* are KMS keys in your account that are created, managed, and used on your behalf by an AWS service that is integrated with AWS KMS. By default, the RDS AWS managed key ( `aws/rds`) is used for encryption. You can't manage, rotate, or delete the RDS AWS managed key. For more information about AWS managed keys, see [AWS managed keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) in the *AWS Key Management Service Developer Guide*. 

To manage KMS keys used for Amazon Aurora encrypted DB clusters, use the [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/) in the [AWS KMS console](https://console.aws.amazon.com/kms), the AWS CLI, or the AWS KMS API. To view audit logs of every action taken with an AWS managed or customer managed key, use [AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/). For more information about key rotation, see [Rotating AWS KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html). 

## Authorizing use of a customer managed key
<a name="Overview.Encryption.Keys.Authorizing"></a>

When Aurora uses a customer managed key in cryptographic operations, it acts on behalf of the user who is creating or changing the Aurora resource.

To create an Aurora resource using a customer managed key, a user must have permissions to call the following operations on the customer managed key:
+  `kms:CreateGrant` 
+  `kms:DescribeKey` 

You can specify these required permissions in a key policy, or in an IAM policy if the key policy allows it.

**Important**  
When you use explicit deny statements for all resources (`"*"`) in AWS KMS key policies with managed services like Amazon RDS, you must specify a condition to allow the resource-owning account. Operations might fail without this condition, even if the deny rule includes exceptions for your IAM user.

**Tip**  
To follow the principle of least privilege, do not allow full access to `kms:CreateGrant`. Instead, use the [kms:ViaService condition key](https://docs.aws.amazon.com/kms/latest/developerguide/policy-conditions.html#conditions-kms-via-service) to allow the user to create grants on the KMS key only when the grant is created on the user's behalf by an AWS service.

You can make the IAM policy stricter in various ways. For example, if you want to allow the customer managed key to be used only for requests that originate in Aurora, use the [kms:ViaService condition key](https://docs.aws.amazon.com/kms/latest/developerguide/policy-conditions.html#conditions-kms-via-service) with the `rds.<region>.amazonaws.com` value. Also, you can use the keys or values in the [Amazon RDS encryption context](#Overview.Encryption.Keys.encryptioncontext) as a condition for using the customer managed key for encryption.
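As a sketch, an identity policy granting only the two required operations, restricted with `kms:ViaService`, might look like the following. The Region, account ID, and key ID are placeholders, and the service name uses the standard `rds.<region>.amazonaws.com` form.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAuroraUseOfTheKey",
      "Effect": "Allow",
      "Action": [
        "kms:CreateGrant",
        "kms:DescribeKey"
      ],
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/your-key-id",
      "Condition": {
        "StringEquals": {
          "kms:ViaService": "rds.us-east-1.amazonaws.com"
        }
      }
    }
  ]
}
```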

For more information, see [Allowing users in other accounts to use a KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html) in the *AWS Key Management Service Developer Guide* and [Key policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies). 

## Amazon RDS encryption context
<a name="Overview.Encryption.Keys.encryptioncontext"></a>

When Aurora uses your KMS key, or when Amazon EBS uses the KMS key on behalf of Aurora, the service specifies an [encryption context](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context). The encryption context is [additional authenticated data](https://docs.aws.amazon.com/crypto/latest/userguide/cryptography-concepts.html#term-aad) (AAD) that AWS KMS uses to ensure data integrity. When an encryption context is specified for an encryption operation, the service must specify the same encryption context for the decryption operation. Otherwise, decryption fails. The encryption context is also written to your [AWS CloudTrail](https://aws.amazon.com/cloudtrail/) logs to help you understand why a given KMS key was used. Your CloudTrail logs might contain many entries describing the use of a KMS key, but the encryption context in each log entry can help you determine the reason for that particular use.

At minimum, Aurora always uses the DB cluster ID for the encryption context, as in the following JSON-formatted example:

```
{ "aws:rds:dbc-id": "cluster-CQYSMDPBRZ7BPMH7Y3RTDG5QY" }
```

This encryption context can help you identify the DB cluster for which your KMS key was used.

When your KMS key is used for a specific DB cluster and a specific Amazon EBS volume, both the DB cluster ID and the Amazon EBS volume ID are used for the encryption context, as in the following JSON-formatted example:

```
{
  "aws:rds:dbc-id": "cluster-BRG7VYS3SVIFQW7234EJQOM5RQ",
  "aws:ebs:id": "vol-ad8c6542"
}
```
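Because AWS KMS writes the encryption context into CloudTrail entries, you can attribute a KMS call to a specific cluster by reading that field. This sketch extracts it from a saved, minimal event record; the record shape (a `requestParameters.encryptionContext` field) and the file itself are assumptions of the sketch.

```shell
# Hedged sketch: pull the Aurora cluster ID out of a saved CloudTrail
# KMS event record. The record below is a minimal hypothetical sample.
cat > /tmp/event.json <<'EOF'
{"requestParameters": {"encryptionContext": {"aws:rds:dbc-id": "cluster-CQYSMDPBRZ7BPMH7Y3RTDG5QY"}}}
EOF

# Prints the DB cluster ID the KMS key was used for.
python3 -c 'import json; e = json.load(open("/tmp/event.json")); print(e["requestParameters"]["encryptionContext"]["aws:rds:dbc-id"])'
```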

# Using SSL/TLS to encrypt a connection to a DB cluster
<a name="UsingWithRDS.SSL"></a>

You can use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) from your application to encrypt a connection to a DB cluster running Aurora MySQL or Aurora PostgreSQL.

SSL/TLS connections provide a layer of security by encrypting data that moves between your client and DB cluster. Optionally, your SSL/TLS connection can perform server identity verification by validating the server certificate installed on your database. To require server identity verification, follow this general process:

1. Choose the **certificate authority (CA)** that signs the **DB server certificate** for your database. For more information about certificate authorities, see [Certificate authorities](#UsingWithRDS.SSL.RegionCertificateAuthorities).

1. Download a certificate bundle to use when you are connecting to the database. To download a certificate bundle, see [Certificate bundles by AWS Region](#UsingWithRDS.SSL.CertificatesAllRegions).
**Note**  
All certificates are only available for download using SSL/TLS connections.

1. Connect to the database using your DB engine's process for implementing SSL/TLS connections. Each DB engine has its own process for implementing SSL/TLS. To learn how to implement SSL/TLS for your database, follow the link that corresponds to your DB engine:
   +  [Security with Amazon Aurora MySQL](AuroraMySQL.Security.md) 
   +  [Security with Amazon Aurora PostgreSQL](AuroraPostgreSQL.Security.md) 
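For Aurora MySQL, the final connection step might look like the following sketch. The endpoint, user, and bundle path are placeholders, and `VERIFY_IDENTITY` requests both encryption and server identity verification.

```shell
# Hedged sketch: connect over TLS and verify the server certificate
# against the downloaded CA bundle. All values are placeholders.
mysql --host=mydbcluster.cluster-abcdefg12345.us-east-1.rds.amazonaws.com \
      --user=admin --password \
      --ssl-ca=/path/to/global-bundle.pem \
      --ssl-mode=VERIFY_IDENTITY
```

For Aurora PostgreSQL, the equivalent with `psql` is a connection string that sets `sslmode=verify-full` and points `sslrootcert` at the downloaded bundle.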

## Certificate authorities
<a name="UsingWithRDS.SSL.RegionCertificateAuthorities"></a>

The **certificate authority (CA)** is the certificate that identifies the root CA at the top of the certificate chain. The CA signs the **DB server certificate**, which is installed on each DB instance. The DB server certificate identifies the DB instance as a trusted server.

![\[Certificate authority overview\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/certificate-authority-overview.png)


Amazon RDS provides the following CAs to sign the DB server certificate for a database.



| Certificate authority (CA) | Description | Common name (CN) | 
| --- | --- | --- | 
|  rds-ca-rsa2048-g1  |  Uses a certificate authority with RSA 2048 private key algorithm and SHA256 signing algorithm in most AWS Regions. In the AWS GovCloud (US) Regions, this CA uses a certificate authority with RSA 2048 private key algorithm and SHA384 signing algorithm. This CA supports automatic server certificate rotation.  | Amazon RDS region-identifier Root CA RSA2048 G1 | 
|  rds-ca-rsa4096-g1  |  Uses a certificate authority with RSA 4096 private key algorithm and SHA384 signing algorithm. This CA supports automatic server certificate rotation.   | Amazon RDS region-identifier Root CA RSA4096 G1 | 
|  rds-ca-ecc384-g1  |  Uses a certificate authority with ECC 384 private key algorithm and SHA384 signing algorithm. This CA supports automatic server certificate rotation.   | Amazon RDS region-identifier Root CA ECC384 G1 | 

**Note**  
If you are using the AWS CLI, you can see the validities of the certificate authorities listed above by using [describe-certificates](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-certificates.html). 

These CA certificates are included in the regional and global certificate bundle. When you use the rds-ca-rsa2048-g1, rds-ca-rsa4096-g1, or rds-ca-ecc384-g1 CA with a database, RDS manages the DB server certificate on the database. RDS rotates the DB server certificate automatically before it expires. 

### Setting the CA for your database
<a name="UsingWithRDS.SSL.RegionCertificateAuthorities.Selection"></a>

You can set the CA for a database when you perform the following tasks:
+ Create an Aurora DB cluster – You can set the CA for a DB instance in an Aurora cluster when you create the first DB instance in the DB cluster using the AWS CLI or RDS API. Currently, you can't set the CA for the DB instances in a DB cluster when you create the DB cluster using the RDS console. For instructions, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md) .
+ Modify a DB instance – You can set the CA for a DB instance in a DB cluster by modifying it. For instructions, see [Modifying a DB instance in a DB cluster](Aurora.Modifying.md#Aurora.Modifying.Instance) .
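As a sketch, setting the CA when creating the first DB instance in an Aurora cluster with the AWS CLI might look like the following. The cluster identifier, instance identifier, and instance class shown here are placeholders; substitute your own values.

```shell
# Hypothetical identifiers; replace with your own cluster, instance, and class.
# The --ca-certificate-identifier option sets the CA for this DB instance.
aws rds create-db-instance \
    --db-instance-identifier my-aurora-instance-1 \
    --db-cluster-identifier my-aurora-cluster \
    --engine aurora-mysql \
    --db-instance-class db.r6g.large \
    --ca-certificate-identifier rds-ca-rsa4096-g1
```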

**Note**  
 The default CA is set to rds-ca-rsa2048-g1.  You can override the default CA for your AWS account by using the [modify-certificates](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-certificates.html) command.

The available CAs depend on the DB engine and DB engine version. When you use the AWS Management Console, you can choose the CA using the **Certificate authority** setting, as shown in the following image.

![\[Certificate authority option\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/certificate-authority.png)


The console only shows the CAs that are available for the DB engine and DB engine version. If you're using the AWS CLI, you can set the CA for a DB instance using the [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) or [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) command. 

If you're using the AWS CLI, you can see the available CAs for your account by using the [describe-certificates](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-certificates.html) command. This command also shows the expiration date for each CA in `ValidTill` in the output. You can find the CAs that are available for a specific DB engine and DB engine version using the [describe-db-engine-versions](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-engine-versions.html) command.
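For instance, a hedged sketch of the `describe-certificates` call, using a `--query` filter to show just each CA identifier alongside its `ValidTill` date:

```shell
# List each CA available to your account with its expiration date.
aws rds describe-certificates \
    --query 'Certificates[*].[CertificateIdentifier,ValidTill]' \
    --output table
```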

The following example shows the CAs available for the default RDS for PostgreSQL DB engine version.

```
aws rds describe-db-engine-versions --default-only --engine postgres
```

Your output is similar to the following. The available CAs are listed in `SupportedCACertificateIdentifiers`. The output also shows whether the DB engine version supports rotating the certificate without restart in `SupportsCertificateRotationWithoutRestart`. 

```
{
    "DBEngineVersions": [
        {
            "Engine": "postgres",
            "MajorEngineVersion": "13",
            "EngineVersion": "13.4",
            "DBParameterGroupFamily": "postgres13",
            "DBEngineDescription": "PostgreSQL",
            "DBEngineVersionDescription": "PostgreSQL 13.4-R1",
            "ValidUpgradeTarget": [],
            "SupportsLogExportsToCloudwatchLogs": false,
            "SupportsReadReplica": true,
            "SupportedFeatureNames": [
                "Lambda"
            ],
            "Status": "available",
            "SupportsParallelQuery": false,
            "SupportsGlobalDatabases": false,
            "SupportsBabelfish": false,
            "SupportsCertificateRotationWithoutRestart": true,
            "SupportedCACertificateIdentifiers": [
                "rds-ca-rsa2048-g1",
                "rds-ca-ecc384-g1",
                "rds-ca-rsa4096-g1"
            ]
        }
    ]
}
```

### DB server certificate validities
<a name="UsingWithRDS.SSL.RegionCertificateAuthorities.DBServerCert"></a>

The validity of the DB server certificate depends on the DB engine and DB engine version. If the DB engine version supports rotating the certificate without a restart, the DB server certificate is valid for 1 year. Otherwise, it is valid for 3 years.

For more information about DB server certificate rotation, see [Automatic server certificate rotation](UsingWithRDS.SSL-certificate-rotation.md#UsingWithRDS.SSL-certificate-rotation-server-cert-rotation) . 

### Viewing the CA for your DB instance
<a name="UsingWithRDS.SSL.RegionCertificateAuthorities.Viewing"></a>

You can view the details about the CA for a database by viewing the **Connectivity & security** tab in the console, as in the following image.

![\[Certificate authority details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/certificate-authority-details.png)


If you're using the AWS CLI, you can view the details about the CA for a DB instance by using the [describe-db-instances](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html) command. 
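As an example, the following sketch shows the CA currently in use for one DB instance. The instance identifier is a placeholder; `CACertificateIdentifier` is the field that names the CA in the `describe-db-instances` output.

```shell
# Show which CA a specific DB instance currently uses
# (mydbinstance is a placeholder for your instance identifier).
aws rds describe-db-instances \
    --db-instance-identifier mydbinstance \
    --query 'DBInstances[*].[DBInstanceIdentifier,CACertificateIdentifier]' \
    --output table
```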

## Download certificate bundles for Aurora
<a name="UsingWithRDS.SSL.CertificatesDownload"></a>

When you connect to your database using SSL or TLS with certificate verification, your client needs a trusted certificate from Amazon RDS. Select the appropriate link in the following table to download the bundle that corresponds to the AWS Region where you host your database.

### Certificate bundles by AWS Region
<a name="UsingWithRDS.SSL.CertificatesAllRegions"></a>

The certificate bundles for all AWS Regions and GovCloud (US) Regions contain the following root CA certificates:
+  `rds-ca-rsa2048-g1` 
+  `rds-ca-rsa4096-g1` 
+  `rds-ca-ecc384-g1` 

The `rds-ca-rsa4096-g1` and `rds-ca-ecc384-g1` certificates are not available in the following Regions:
+ Asia Pacific (Mumbai)
+ Asia Pacific (Melbourne)
+ Canada West (Calgary)
+ Europe (Zurich)
+ Europe (Spain)
+ Israel (Tel Aviv)

Your application trust store only needs to register the root CA certificate. Do not register the intermediate CA certificates in your trust store, because doing so might cause connection issues when RDS automatically rotates your DB server certificate.
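If you're unsure whether a certificate file holds a root CA or an intermediate CA, one generic way to check is to compare its subject and issuer with openssl: a root CA is self-signed, so the two match. This is a sketch, not an RDS-specific tool, and `is_root_cert` is a hypothetical helper name.

```shell
# Hypothetical helper: returns success (0) when the certificate in $1 is
# self-signed (subject equals issuer), which is what a root CA looks like.
is_root_cert() {
  local subject issuer
  subject=$(openssl x509 -noout -subject -in "$1")
  issuer=$(openssl x509 -noout -issuer -in "$1")
  [ "${subject#subject=}" = "${issuer#issuer=}" ]
}
```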

**Note**  
Amazon RDS Proxy  uses certificates from the AWS Certificate Manager (ACM). If you're using RDS Proxy, you don't need to download Amazon RDS certificates or update applications that use RDS Proxy connections. For more information, see [Using TLS/SSL with RDS Proxy](rds-proxy.howitworks.md#rds-proxy-security.tls) .

To download a certificate bundle for an AWS Region, select the link for the AWS Region that hosts your database in the following table.


|  **AWS Region**  |  **Certificate bundle (PEM)**  |  **Certificate bundle (PKCS7)**  | 
| --- | --- | --- | 
| Any commercial AWS Region |  [global-bundle.pem](https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem)  |  [global-bundle.p7b](https://truststore.pki.rds.amazonaws.com/global/global-bundle.p7b)  | 
| US East (N. Virginia) |  [us-east-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/us-east-1/us-east-1-bundle.pem)  |  [us-east-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/us-east-1/us-east-1-bundle.p7b)  | 
| US East (Ohio) |  [us-east-2-bundle.pem](https://truststore.pki.rds.amazonaws.com/us-east-2/us-east-2-bundle.pem)  |  [us-east-2-bundle.p7b](https://truststore.pki.rds.amazonaws.com/us-east-2/us-east-2-bundle.p7b)  | 
| US West (N. California) |  [us-west-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/us-west-1/us-west-1-bundle.pem)  |  [us-west-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/us-west-1/us-west-1-bundle.p7b)  | 
| US West (Oregon) |  [us-west-2-bundle.pem](https://truststore.pki.rds.amazonaws.com/us-west-2/us-west-2-bundle.pem)  |  [us-west-2-bundle.p7b](https://truststore.pki.rds.amazonaws.com/us-west-2/us-west-2-bundle.p7b)  | 
| Africa (Cape Town) |  [af-south-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/af-south-1/af-south-1-bundle.pem)  |  [af-south-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/af-south-1/af-south-1-bundle.p7b)  | 
| Asia Pacific (Hong Kong) |  [ap-east-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-east-1/ap-east-1-bundle.pem)  |  [ap-east-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-east-1/ap-east-1-bundle.p7b)  | 
| Asia Pacific (Hyderabad) |  [ap-south-2-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-south-2/ap-south-2-bundle.pem)  |  [ap-south-2-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-south-2/ap-south-2-bundle.p7b)  | 
| Asia Pacific (Jakarta) |  [ap-southeast-3-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-southeast-3/ap-southeast-3-bundle.pem)  |  [ap-southeast-3-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-southeast-3/ap-southeast-3-bundle.p7b)  | 
| Asia Pacific (Malaysia) |  [ap-southeast-5-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-southeast-5/ap-southeast-5-bundle.pem)  |  [ap-southeast-5-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-southeast-5/ap-southeast-5-bundle.p7b)  | 
| Asia Pacific (Melbourne) |  [ap-southeast-4-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-southeast-4/ap-southeast-4-bundle.pem)  |  [ap-southeast-4-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-southeast-4/ap-southeast-4-bundle.p7b)  | 
| Asia Pacific (Mumbai) |  [ap-south-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-south-1/ap-south-1-bundle.pem)  |  [ap-south-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-south-1/ap-south-1-bundle.p7b)  | 
| Asia Pacific (Osaka) |  [ap-northeast-3-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-northeast-3/ap-northeast-3-bundle.pem)  |  [ap-northeast-3-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-northeast-3/ap-northeast-3-bundle.p7b)  | 
| Asia Pacific (Thailand) |  [ap-southeast-7-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-southeast-7/ap-southeast-7-bundle.pem)  |  [ap-southeast-7-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-southeast-7/ap-southeast-7-bundle.p7b)  | 
| Asia Pacific (Tokyo) |  [ap-northeast-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-northeast-1/ap-northeast-1-bundle.pem)  |  [ap-northeast-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-northeast-1/ap-northeast-1-bundle.p7b)  | 
| Asia Pacific (Seoul) |  [ap-northeast-2-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-northeast-2/ap-northeast-2-bundle.pem)  |  [ap-northeast-2-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-northeast-2/ap-northeast-2-bundle.p7b)  | 
| Asia Pacific (Singapore) |  [ap-southeast-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-southeast-1/ap-southeast-1-bundle.pem)  |  [ap-southeast-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-southeast-1/ap-southeast-1-bundle.p7b)  | 
| Asia Pacific (Sydney) |  [ap-southeast-2-bundle.pem](https://truststore.pki.rds.amazonaws.com/ap-southeast-2/ap-southeast-2-bundle.pem)  |  [ap-southeast-2-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ap-southeast-2/ap-southeast-2-bundle.p7b)  | 
| Canada (Central) |  [ca-central-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/ca-central-1/ca-central-1-bundle.pem)  |  [ca-central-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ca-central-1/ca-central-1-bundle.p7b)  | 
| Canada West (Calgary) |  [ca-west-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/ca-west-1/ca-west-1-bundle.pem)  |  [ca-west-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/ca-west-1/ca-west-1-bundle.p7b)  | 
| Europe (Frankfurt) |  [eu-central-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/eu-central-1/eu-central-1-bundle.pem)  |  [eu-central-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/eu-central-1/eu-central-1-bundle.p7b)  | 
| Europe (Ireland) |  [eu-west-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/eu-west-1/eu-west-1-bundle.pem)  |  [eu-west-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/eu-west-1/eu-west-1-bundle.p7b)  | 
| Europe (London) |  [eu-west-2-bundle.pem](https://truststore.pki.rds.amazonaws.com/eu-west-2/eu-west-2-bundle.pem)  |  [eu-west-2-bundle.p7b](https://truststore.pki.rds.amazonaws.com/eu-west-2/eu-west-2-bundle.p7b)  | 
| Europe (Milan) |  [eu-south-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/eu-south-1/eu-south-1-bundle.pem)  |  [eu-south-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/eu-south-1/eu-south-1-bundle.p7b)  | 
| Europe (Paris) |  [eu-west-3-bundle.pem](https://truststore.pki.rds.amazonaws.com/eu-west-3/eu-west-3-bundle.pem)  |  [eu-west-3-bundle.p7b](https://truststore.pki.rds.amazonaws.com/eu-west-3/eu-west-3-bundle.p7b)  | 
| Europe (Spain) |  [eu-south-2-bundle.pem](https://truststore.pki.rds.amazonaws.com/eu-south-2/eu-south-2-bundle.pem)  |  [eu-south-2-bundle.p7b](https://truststore.pki.rds.amazonaws.com/eu-south-2/eu-south-2-bundle.p7b)  | 
| Europe (Stockholm) |  [eu-north-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/eu-north-1/eu-north-1-bundle.pem)  |  [eu-north-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/eu-north-1/eu-north-1-bundle.p7b)  | 
| Europe (Zurich) |  [eu-central-2-bundle.pem](https://truststore.pki.rds.amazonaws.com/eu-central-2/eu-central-2-bundle.pem)  |  [eu-central-2-bundle.p7b](https://truststore.pki.rds.amazonaws.com/eu-central-2/eu-central-2-bundle.p7b)  | 
| Israel (Tel Aviv) |  [il-central-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/il-central-1/il-central-1-bundle.pem)  |  [il-central-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/il-central-1/il-central-1-bundle.p7b)  | 
| Mexico (Central) |  [mx-central-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/mx-central-1/mx-central-1-bundle.pem)  |  [mx-central-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/mx-central-1/mx-central-1-bundle.p7b)  | 
| Middle East (Bahrain) |  [me-south-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/me-south-1/me-south-1-bundle.pem)  |  [me-south-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/me-south-1/me-south-1-bundle.p7b)  | 
| Middle East (UAE) |  [me-central-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/me-central-1/me-central-1-bundle.pem)  |  [me-central-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/me-central-1/me-central-1-bundle.p7b)  | 
| South America (São Paulo) |  [sa-east-1-bundle.pem](https://truststore.pki.rds.amazonaws.com/sa-east-1/sa-east-1-bundle.pem)  |  [sa-east-1-bundle.p7b](https://truststore.pki.rds.amazonaws.com/sa-east-1/sa-east-1-bundle.p7b)  | 
| Any AWS GovCloud (US) Regions |  [global-bundle.pem](https://truststore.pki.us-gov-west-1.rds.amazonaws.com/global/global-bundle.pem)  |  [global-bundle.p7b](https://truststore.pki.us-gov-west-1.rds.amazonaws.com/global/global-bundle.p7b)  | 
| AWS GovCloud (US-East) |  [us-gov-east-1-bundle.pem](https://truststore.pki.us-gov-west-1.rds.amazonaws.com/us-gov-east-1/us-gov-east-1-bundle.pem)  |  [us-gov-east-1-bundle.p7b](https://truststore.pki.us-gov-west-1.rds.amazonaws.com/us-gov-east-1/us-gov-east-1-bundle.p7b)  | 
| AWS GovCloud (US-West) |  [us-gov-west-1-bundle.pem](https://truststore.pki.us-gov-west-1.rds.amazonaws.com/us-gov-west-1/us-gov-west-1-bundle.pem)  |  [us-gov-west-1-bundle.p7b](https://truststore.pki.us-gov-west-1.rds.amazonaws.com/us-gov-west-1/us-gov-west-1-bundle.p7b)  | 

### Viewing the contents of your CA certificate
<a name="UsingWithRDS.SSL.CertificatesDownload.viewing"></a>

To check the contents of your CA certificate bundle, use the following command: 

```
keytool -printcert -v -file global-bundle.pem
```
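If keytool isn't installed, an openssl pipeline can list the subjects of the certificates in the same bundle. This assumes you've already downloaded `global-bundle.pem` to the current directory:

```shell
# Print the subject and issuer of every certificate in the bundle.
openssl crl2pkcs7 -nocrl -certfile global-bundle.pem | openssl pkcs7 -print_certs -noout
```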

# Rotating your SSL/TLS certificate
<a name="UsingWithRDS.SSL-certificate-rotation"></a>

The Amazon RDS certificate authority certificate rds-ca-2019 expired in August 2024. If you use or plan to use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) with certificate verification to connect to your RDS DB instances, consider using one of the new CA certificates: rds-ca-rsa2048-g1, rds-ca-rsa4096-g1, or rds-ca-ecc384-g1. Even if you don't currently use SSL/TLS with certificate verification, you might still have an expired CA certificate and must update it to a new CA certificate if you plan to use SSL/TLS with certificate verification to connect to your RDS databases.

Amazon RDS provides new CA certificates as an AWS security best practice. For information about the new certificates and the supported AWS Regions, see [Using SSL/TLS to encrypt a connection to a DB cluster](UsingWithRDS.SSL.md) .

To update the CA certificate for your database, use the following methods: 
+  [Updating your CA certificate by modifying your DB instance ](#UsingWithRDS.SSL-certificate-rotation-updating) 
+  [Updating your CA certificate by applying maintenance](#UsingWithRDS.SSL-certificate-rotation-maintenance-update) 

Before you update your DB instances to use the new CA certificate, make sure that you update your clients or applications connecting to your RDS databases.
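One way to check, from a client host, which certificate chain your database currently presents is `openssl s_client`. The endpoint below is a placeholder for your cluster endpoint, and the port and `-starttls postgres` option apply to Aurora PostgreSQL; for Aurora MySQL, use port 3306 and omit `-starttls`.

```shell
# Verify the server certificate chain against the downloaded bundle.
# Replace the endpoint with your own cluster endpoint.
openssl s_client -connect mycluster.cluster-example.us-east-1.rds.amazonaws.com:5432 \
  -starttls postgres -CAfile global-bundle.pem < /dev/null 2>/dev/null \
  | grep 'Verify return code'
```

A `Verify return code: 0 (ok)` line indicates that the presented chain validates against the bundle in your trust store.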

## Considerations for rotating certificates
<a name="UsingWithRDS.SSL-certificate-rotation-considerations"></a>

Consider the following situations before rotating your certificate:
+ Amazon RDS Proxy  uses certificates from the AWS Certificate Manager (ACM). If you're using RDS Proxy, when you rotate your SSL/TLS certificate, you don't need to update applications that use RDS Proxy connections. For more information, see [Using TLS/SSL with RDS Proxy](rds-proxy.howitworks.md#rds-proxy-security.tls) .
+ If you're using a Go version 1.15 application with a DB instance that was created or updated to the rds-ca-2019 certificate before July 28, 2020, you must update the certificate again. Update the certificate to rds-ca-rsa2048-g1, rds-ca-rsa4096-g1, or rds-ca-ecc384-g1, depending on your engine. 

  Use the `modify-db-instance` command with the new CA certificate identifier. You can find the CAs that are available for a specific DB engine and DB engine version using the `describe-db-engine-versions` command. 

  If you created your database or updated its certificate after July 28, 2020, no action is required. For more information, see [Go GitHub issue #39568](https://github.com/golang/go/issues/39568). 

## Updating your CA certificate by modifying your DB instance
<a name="UsingWithRDS.SSL-certificate-rotation-updating"></a>

The following example updates your CA certificate from *rds-ca-2019* to *rds-ca-rsa2048-g1*. You can choose a different certificate. For more information, see [Certificate authorities](UsingWithRDS.SSL.md#UsingWithRDS.SSL.RegionCertificateAuthorities) . 

Update your application trust store to reduce any downtime associated with updating your CA certificate. For more information about restarts associated with CA certificate rotation, see [Automatic server certificate rotation](#UsingWithRDS.SSL-certificate-rotation-server-cert-rotation) .

**To update your CA certificate by modifying your DB instance**

1. Download the new SSL/TLS certificate as described in [Using SSL/TLS to encrypt a connection to a DB cluster](UsingWithRDS.SSL.md) .

1. Update your applications to use the new SSL/TLS certificate.

   The methods for updating applications for new SSL/TLS certificates depend on your specific applications. Work with your application developers to update the SSL/TLS certificates for your applications.

   For information about checking for SSL/TLS connections and updating applications for each DB engine, see the following topics:
   +  [Updating applications to connect to Aurora MySQL DB clusters using new TLS certificates](ssl-certificate-rotation-aurora-mysql.md) 
   +  [Updating applications to connect to Aurora PostgreSQL DB clusters using new SSL/TLS certificates](ssl-certificate-rotation-aurora-postgresql.md) 

   For a sample script that updates a trust store for a Linux operating system, see [Sample script for importing certificates into your trust store](#UsingWithRDS.SSL-certificate-rotation-sample-script) .
**Note**  
The certificate bundle contains certificates for both the old and new CA, so you can upgrade your application safely and maintain connectivity during the transition period. If you are using the AWS Database Migration Service to migrate a database to a DB cluster, we recommend using the certificate bundle to ensure connectivity during the migration.

1. Modify the DB instance to change the CA from **rds-ca-2019** to **rds-ca-rsa2048-g1**. To check if your database requires a restart to update the CA certificates, use the [describe-db-engine-versions](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-engine-versions.html) command and check the `SupportsCertificateRotationWithoutRestart` flag. 
**Note**  
Reboot your Babelfish cluster after modifying it to update the CA certificate.
**Important**  
If you are experiencing connectivity issues after certificate expiry, apply the change immediately by choosing **Apply immediately** in the console or by specifying the `--apply-immediately` option in the AWS CLI. By default, this operation is scheduled to run during your next maintenance window.  
To set an override for your cluster CA that's different from the default RDS CA, use the [modify-certificates](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-certificates.html) CLI command.

You can use the AWS Management Console or the AWS CLI to change the CA certificate from **rds-ca-2019** to **rds-ca-rsa2048-g1** for a DB instance . 

------
#### [ Console ]

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**, and then choose the DB instance that you want to modify. 

1. Choose **Modify**.   
![\[Modify DB instance\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/ssl-rotate-cert-modify-aurora.png)

1. In the **Connectivity** section, choose **rds-ca-rsa2048-g1**.   
![\[Choose CA certificate\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/ssl-rotate-cert-ca-rsa2048-g1.png)

1. Choose **Continue** and check the summary of modifications. 

1. To apply the changes immediately, choose **Apply immediately**. 

1. On the confirmation page, review your changes. If they are correct, choose **Modify DB Instance** to save your changes. 
**Important**  
When you schedule this operation, make sure that you have updated your client-side trust store beforehand.

   Or choose **Back** to edit your changes or **Cancel** to cancel your changes. 

------
#### [ AWS CLI ]

To use the AWS CLI to change the CA from **rds-ca-2019** to **rds-ca-rsa2048-g1** for a DB instance, call the [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) or [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) command. Specify the DB instance identifier and the `--ca-certificate-identifier` option.

Use the `--apply-immediately` parameter to apply the update immediately. By default, this operation is scheduled to run during your next maintenance window.

**Important**  
When you schedule this operation, make sure that you have updated your client-side trust store beforehand.

**Example**  
The following example modifies `mydbinstance` by setting the CA certificate to `rds-ca-rsa2048-g1`.   
For Linux, macOS, or Unix:  

```
aws rds modify-db-instance \
    --db-instance-identifier mydbinstance \
    --ca-certificate-identifier rds-ca-rsa2048-g1
```
For Windows:  

```
aws rds modify-db-instance ^
    --db-instance-identifier mydbinstance ^
    --ca-certificate-identifier rds-ca-rsa2048-g1
```
If your instance requires a reboot for the rotation, you can avoid the restart by using the [modify-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-instance.html) CLI command and specifying the `--no-certificate-rotation-restart` option.

------

## Updating your CA certificate by applying maintenance
<a name="UsingWithRDS.SSL-certificate-rotation-maintenance-update"></a>

Perform the following steps to update your CA certificate by applying maintenance.

------
#### [ Console ]

**To update your CA certificate by applying maintenance**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Certificate update**.   
![\[Certificate rotation navigation pane option\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/ssl-rotate-cert-certupdate.png)

   The **Databases requiring certificate update** page appears.  
![\[Update CA certificate for database\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/ssl-rotate-cert-update-multiple.png)
**Note**  
This page only shows the DB instances for the current AWS Region. If you have databases in more than one AWS Region, check this page in each AWS Region to see all DB instances with old SSL/TLS certificates.

1. Choose the DB instance that you want to update.

   You can schedule the certificate rotation for your next maintenance window by choosing **Schedule**. Apply the rotation immediately by choosing **Apply now**. 
**Important**  
If you experience connectivity issues after certificate expiry, use the **Apply now** option.

1. 

   1. If you choose **Schedule**, you are prompted to confirm the CA certificate rotation. This prompt also states the scheduled window for your update.   
![\[Confirm certificate rotation\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/ssl-rotate-cert-confirm-schedule.png)

   1. If you choose **Apply now**, you are prompted to confirm the CA certificate rotation.  
![\[Confirm certificate rotation\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/ssl-rotate-cert-confirm-now.png)
**Important**  
Before scheduling the CA certificate rotation on your database, update any client applications that use SSL/TLS and the server certificate to connect. These updates are specific to your DB engine. After you have updated these client applications, you can confirm the CA certificate rotation. 

   To continue, choose the check box, and then choose **Confirm**. 

1. Repeat steps 3 and 4 for each DB instance that you want to update.

------

## Automatic server certificate rotation
<a name="UsingWithRDS.SSL-certificate-rotation-server-cert-rotation"></a>

If your root CA supports automatic server certificate rotation, RDS automatically handles the rotation of the DB server certificate. RDS uses the same root CA for this automatic rotation, so you don't need to download a new CA bundle. See [Certificate authorities](UsingWithRDS.SSL.md#UsingWithRDS.SSL.RegionCertificateAuthorities) .

The rotation and validity of your DB server certificate depend on your DB engine:
+ If your DB engine supports rotation without restart, RDS automatically rotates the DB server certificate without requiring any action from you. RDS attempts to rotate your DB server certificate in your preferred maintenance window at the DB server certificate's half-life. The new DB server certificate is valid for 12 months.
+ If your DB engine doesn't support rotation without restart, Amazon RDS makes a `server-certificate-rotation` pending maintenance action visible through the `describe-pending-maintenance-actions` API at the half-life of the certificate, or at least 3 months before expiry. You can apply the rotation using the `apply-pending-maintenance-action` API. The new DB server certificate is valid for 36 months.
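For engines that can't rotate without a restart, the pending action described above can be inspected and applied from the AWS CLI. This is a sketch: the resource ARN is a placeholder, and the apply-action string follows the `server-certificate-rotation` action name used above.

```shell
# See whether a server certificate rotation is pending for your resources.
aws rds describe-pending-maintenance-actions

# Apply the rotation immediately (the ARN is a placeholder for your DB instance ARN).
aws rds apply-pending-maintenance-action \
    --resource-identifier arn:aws:rds:us-east-1:123456789012:db:mydbinstance \
    --apply-action server-certificate-rotation \
    --opt-in-type immediate
```

Use `--opt-in-type next-maintenance` instead of `immediate` to schedule the rotation for your next maintenance window.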

Use the [ describe-db-engine-versions](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-engine-versions.html) command and inspect the `SupportsCertificateRotationWithoutRestart` flag to identify whether the DB engine version supports rotating the certificate without restart. For more information, see [Setting the CA for your database](UsingWithRDS.SSL.md#UsingWithRDS.SSL.RegionCertificateAuthorities.Selection) . 

**Important**  
 For Amazon RDS for Oracle DB instances, the `SupportsCertificateRotationWithoutRestart` flag of the DB engine versions is marked as `FALSE`. However, Amazon RDS for Oracle DB instances do NOT require a restart; only the database listener is restarted during the server certificate rotation. Existing database connections are unaffected, but new connections encounter errors for a brief period while the listener is restarted. If you want to manually rotate the server certificate, use the [ apply-pending-maintenance-action](https://docs.aws.amazon.com/cli/latest/reference/rds/apply-pending-maintenance-action.html) AWS CLI command. 

## Sample script for importing certificates into your trust store
<a name="UsingWithRDS.SSL-certificate-rotation-sample-script"></a>

The following are sample shell scripts that import the certificate bundle into a trust store.

Each sample shell script uses keytool, which is part of the Java Development Kit (JDK). For information about installing the JDK, see [ JDK Installation Guide](https://docs.oracle.com/en/java/javase/17/install/overview-jdk-installation.html). 

------
#### [ Linux ]

The following is a sample shell script that imports the certificate bundle into a trust store on a Linux operating system.

```
mydir=tmp/certs
if [ ! -e "${mydir}" ]
then
mkdir -p "${mydir}"
fi
truststore=${mydir}/rds-truststore.jks
storepassword=changeit

curl -sS "https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem" > ${mydir}/global-bundle.pem
awk 'split_after == 1 {n++;split_after=0} /-----END CERTIFICATE-----/ {split_after=1}{print > "rds-ca-" n+1 ".pem"}' < ${mydir}/global-bundle.pem

for CERT in rds-ca-*; do
  alias=$(openssl x509 -noout -text -in $CERT | perl -ne 'next unless /Subject:/; s/.*(CN=|CN = )//; print')
  echo "Importing $alias"
  keytool -import -file ${CERT} -alias "${alias}" -storepass ${storepassword} -keystore ${truststore} -noprompt
  rm $CERT
done

rm ${mydir}/global-bundle.pem

echo "Trust store content is: "

keytool -list -v -keystore "$truststore" -storepass ${storepassword} | grep Alias | cut -d " " -f3- | while read alias 
do expiry=`keytool -list -v -keystore "$truststore" -storepass ${storepassword} -alias "${alias}" | grep Valid | perl -ne 'if(/until: (.*?)\n/) { print "$1\n"; }'`
   echo " Certificate ${alias} expires in '$expiry'" 
done
```

------
#### [ macOS ]

The following is a sample shell script that imports the certificate bundle into a trust store on macOS.

```
mydir=tmp/certs
if [ ! -e "${mydir}" ]
then
  mkdir -p "${mydir}"
fi

truststore=${mydir}/rds-truststore.jks
storepassword=changeit

curl -sS "https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem" > ${mydir}/global-bundle.pem
split -p "-----BEGIN CERTIFICATE-----" ${mydir}/global-bundle.pem rds-ca-

for CERT in rds-ca-*; do
  alias=$(openssl x509 -noout -text -in $CERT | perl -ne 'next unless /Subject:/; s/.*(CN=|CN = )//; print')
  echo "Importing $alias"
  keytool -import -file ${CERT} -alias "${alias}" -storepass ${storepassword} -keystore ${truststore} -noprompt
  rm $CERT
done

rm ${mydir}/global-bundle.pem

echo "Trust store content is: "

keytool -list -v -keystore "$truststore" -storepass ${storepassword} | grep Alias | cut -d " " -f3- | while read alias
do
  expiry=$(keytool -list -v -keystore "$truststore" -storepass ${storepassword} -alias "${alias}" | grep Valid | perl -ne 'if(/until: (.*?)\n/) { print "$1\n"; }')
  echo " Certificate ${alias} expires in '$expiry'"
done
```

------
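The `awk` and `split` commands in these scripts cut the bundle at each certificate boundary. As a quick sanity check before importing, you can count how many certificates the bundle contains. The following offline sketch uses an inline two-certificate stand-in for `global-bundle.pem`:

```shell
# Count certificates in a PEM bundle by counting END CERTIFICATE markers.
# The here-doc below is a stand-in for the downloaded global-bundle.pem;
# its contents are not real certificates.
count=$(grep -c -- '-----END CERTIFICATE-----' <<'EOF'
-----BEGIN CERTIFICATE-----
MIIB...example-not-a-real-cert...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIC...example-not-a-real-cert...
-----END CERTIFICATE-----
EOF
)
echo "bundle contains $count certificates"
```

Run against the real bundle, the count should match the number of aliases that `keytool -list` reports after the import.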

To learn more about best practices for using SSL with Amazon RDS, see [ Best practices for successful SSL connections to Amazon RDS for Oracle ](https://aws.amazon.com/blogs/database/best-practices-for-successful-ssl-connections-to-amazon-rds-for-oracle/). 

# Internetwork traffic privacy
<a name="inter-network-traffic-privacy"></a>

Connections are protected both between Amazon Aurora and on-premises applications and between Amazon Aurora and other AWS resources within the same AWS Region.

## Traffic between service and on-premises clients and applications
<a name="inter-network-traffic-privacy-on-prem"></a>

You have two connectivity options between your private network and AWS: 
+ An AWS Site-to-Site VPN connection. For more information, see [What is AWS Site-to-Site VPN?](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html) 
+ An AWS Direct Connect connection. For more information, see [What is AWS Direct Connect?](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html) 

You get access to Amazon Aurora through the network by using AWS-published API operations. Clients must support the following:
+ Transport Layer Security (TLS). We require TLS 1.2 and recommend TLS 1.3.
+ Cipher suites with perfect forward secrecy (PFS) such as DHE (Ephemeral Diffie-Hellman) or ECDHE (Elliptic Curve Ephemeral Diffie-Hellman). Most modern systems such as Java 7 and later support these modes.

Additionally, requests must be signed by using an access key ID and a secret access key that is associated with an IAM principal. Or you can use the [AWS Security Token Service](https://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html) (AWS STS) to generate temporary security credentials to sign requests.
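As an illustration, temporary security credentials can be obtained with the AWS STS `assume-role` operation. In the sketch below, the role ARN and session name are example placeholders.

```shell
# Sketch: obtain temporary security credentials from AWS STS by assuming
# a role. The role ARN and session name are example placeholders.
role_arn="arn:aws:iam::123456789012:role/example-aurora-admin"
if command -v aws >/dev/null 2>&1; then
  aws sts assume-role \
    --role-arn "$role_arn" \
    --role-session-name example-session \
    --query 'Credentials.[AccessKeyId,Expiration]' \
    --output text || echo "aws call failed; check credentials" >&2
else
  echo "aws CLI not found" >&2
fi
```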

# Identity and access management for Amazon Aurora
<a name="UsingWithRDS.IAM"></a>





AWS Identity and Access Management (IAM) is an AWS service that helps an administrator securely control access to AWS resources. IAM administrators control who can be *authenticated* (signed in) and *authorized* (have permissions) to use Amazon RDS resources. IAM is an AWS service that you can use with no additional charge.

**Topics**
+ [Audience](#security_iam_audience)
+ [Authenticating with identities](#security_iam_authentication)
+ [Managing access using policies](#security_iam_access-manage)
+ [How Amazon Aurora works with IAM](security_iam_service-with-iam.md)
+ [Identity-based policy examples for Amazon Aurora](security_iam_id-based-policy-examples.md)
+ [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md)
+ [Amazon RDS updates to AWS managed policies](rds-manpol-updates.md)
+ [Preventing cross-service confused deputy problems](cross-service-confused-deputy-prevention.md)
+ [IAM database authentication](UsingWithRDS.IAMDBAuth.md)
+ [Troubleshooting Amazon Aurora identity and access](security_iam_troubleshoot.md)

## Audience
<a name="security_iam_audience"></a>

How you use AWS Identity and Access Management (IAM) differs, depending on the work you do in Amazon Aurora.

**Service user** – If you use the Aurora service to do your job, then your administrator provides you with the credentials and permissions that you need. As you use more Aurora features to do your work, you might need additional permissions. Understanding how access is managed can help you request the right permissions from your administrator. If you cannot access a feature in Aurora, see [Troubleshooting Amazon Aurora identity and access](security_iam_troubleshoot.md).

**Service administrator** – If you're in charge of Aurora resources at your company, you probably have full access to Aurora. It's your job to determine which Aurora features and resources your employees should access. You must then submit requests to your administrator to change the permissions of your service users. Review the information on this page to understand the basic concepts of IAM. To learn more about how your company can use IAM with Aurora, see [How Amazon Aurora works with IAM](security_iam_service-with-iam.md).

**Administrator** – If you're an administrator, you might want to learn details about how you can write policies to manage access to Aurora. To view example Aurora identity-based policies that you can use in IAM, see [Identity-based policy examples for Amazon Aurora](security_iam_id-based-policy-examples.md).

## Authenticating with identities
<a name="security_iam_authentication"></a>

Authentication is how you sign in to AWS using your identity credentials. You must be authenticated as the AWS account root user, an IAM user, or by assuming an IAM role.

You can sign in as a federated identity using credentials from an identity source like AWS IAM Identity Center (IAM Identity Center), single sign-on authentication, or Google/Facebook credentials. For more information about signing in, see [How to sign in to your AWS account](https://docs.aws.amazon.com/signin/latest/userguide/how-to-sign-in.html) in the *AWS Sign-In User Guide*.

For programmatic access, AWS provides an SDK and CLI to cryptographically sign requests. For more information, see [AWS Signature Version 4 for API requests](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_sigv.html) in the *IAM User Guide*.

### AWS account root user
<a name="security_iam_authentication-rootuser"></a>

 When you create an AWS account, you begin with one sign-in identity called the AWS account *root user* that has complete access to all AWS services and resources. We strongly recommend that you don't use the root user for everyday tasks. For tasks that require root user credentials, see [Tasks that require root user credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html#root-user-tasks) in the *IAM User Guide*. 

### Federated identity
<a name="security_iam_authentication-federatedidentity"></a>

As a best practice, require human users to use federation with an identity provider to access AWS services using temporary credentials.

A *federated identity* is a user from your enterprise directory, web identity provider, or Directory Service that accesses AWS services using credentials from an identity source. Federated identities assume roles that provide temporary credentials.

For centralized access management, we recommend AWS IAM Identity Center. For more information, see [What is IAM Identity Center?](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) in the *AWS IAM Identity Center User Guide*.

### IAM users and groups
<a name="security_iam_authentication-iamuser"></a>

An *[IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html)* is an identity with specific permissions for a single person or application. We recommend using temporary credentials instead of IAM users with long-term credentials. For more information, see [Require human users to use federation with an identity provider to access AWS using temporary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-users-federation-idp) in the *IAM User Guide*.

An *[IAM group](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html)* is an identity that specifies a collection of IAM users and makes permissions easier to manage for large sets of users. For more information, see [Use cases for IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/gs-identities-iam-users.html) in the *IAM User Guide*.

You can authenticate to your DB cluster using IAM database authentication, which works with Aurora. For more information, see [IAM database authentication](UsingWithRDS.IAMDBAuth.md).
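With IAM database authentication, a client signs in with a short-lived token instead of a password. A sketch of generating such a token with the AWS CLI follows; the endpoint, port, and user name are example placeholders.

```shell
# Sketch: generate a short-lived IAM authentication token for a database
# connection. Endpoint, port, and user name below are example placeholders.
endpoint="mydbcluster.cluster-c123example.us-east-1.rds.amazonaws.com"
if command -v aws >/dev/null 2>&1; then
  aws rds generate-db-auth-token \
    --hostname "$endpoint" \
    --port 3306 \
    --region us-east-1 \
    --username jane_doe || echo "aws call failed; check credentials" >&2
else
  echo "aws CLI not found" >&2
fi
```

The token is then passed as the password when opening the database connection.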

### IAM roles
<a name="security_iam_authentication-iamrole"></a>

An *[IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html)* is an identity within your AWS account that has specific permissions. It is similar to a user, but is not associated with a specific person. You can temporarily assume an IAM role in the AWS Management Console by [switching roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-console.html). You can assume a role by calling an AWS CLI or AWS API operation or by using a custom URL. For more information about methods for using roles, see [Using IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html) in the *IAM User Guide*.

IAM roles with temporary credentials are useful in the following situations:
+ **Temporary user permissions** – A user can assume an IAM role to temporarily take on different permissions for a specific task. 
+ **Federated user access** – To assign permissions to a federated identity, you create a role and define permissions for the role. When a federated identity authenticates, the identity is associated with the role and is granted the permissions that are defined by the role. For information about roles for federation, see [ Create a role for a third-party identity provider (federation)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp.html) in the *IAM User Guide*. If you use IAM Identity Center, you configure a permission set. To control what your identities can access after they authenticate, IAM Identity Center correlates the permission set to a role in IAM. For information about permission sets, see [ Permission sets](https://docs.aws.amazon.com/singlesignon/latest/userguide/permissionsetsconcept.html) in the *AWS IAM Identity Center User Guide*. 
+ **Cross-account access** – You can use an IAM role to allow someone (a trusted principal) in a different account to access resources in your account. Roles are the primary way to grant cross-account access. However, with some AWS services, you can attach a policy directly to a resource (instead of using a role as a proxy). To learn the difference between roles and resource-based policies for cross-account access, see [How IAM roles differ from resource-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_compare-resource-policies.html) in the *IAM User Guide*.
+ **Cross-service access** – Some AWS services use features in other AWS services. For example, when you make a call in a service, it's common for that service to run applications in Amazon EC2 or store objects in Amazon S3. A service might do this using the calling principal's permissions, using a service role, or using a service-linked role. 
  + **Forward access sessions** – Forward access sessions (FAS) use the permissions of the principal calling an AWS service, combined with the requesting AWS service to make requests to downstream services. For policy details when making FAS requests, see [Forward access sessions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_forward_access_sessions.html). 
  + **Service role** – A service role is an [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see [Create a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the *IAM User Guide*. 
  + **Service-linked role** – A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit the permissions for service-linked roles. 
+ **Applications running on Amazon EC2** – You can use an IAM role to manage temporary credentials for applications that are running on an EC2 instance and making AWS CLI or AWS API requests. This is preferable to storing access keys within the EC2 instance. To assign an AWS role to an EC2 instance and make it available to all of its applications, you create an instance profile that is attached to the instance. An instance profile contains the role and enables programs that are running on the EC2 instance to get temporary credentials. For more information, see [Use an IAM role to grant permissions to applications running on Amazon EC2 instances](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html) in the *IAM User Guide*. 

To learn whether to use IAM roles, see [When to create an IAM role (instead of a user)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id.html#id_which-to-choose_role) in the *IAM User Guide*.

## Managing access using policies
<a name="security_iam_access-manage"></a>

You control access in AWS by creating policies and attaching them to IAM identities or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an entity (root user, user, or IAM role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents. For more information about the structure and contents of JSON policy documents, see [Overview of JSON policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#access_policies-json) in the *IAM User Guide*.

An administrator can use policies to specify who has access to AWS resources, and what actions they can perform on those resources. Every IAM entity (permission set or role) starts with no permissions. In other words, by default, users can do nothing, not even change their own password. To give a user permission to do something, an administrator must attach a permissions policy to a user. Or the administrator can add the user to a group that has the intended permissions. When an administrator gives permissions to a group, all users in that group are granted those permissions.

IAM policies define permissions for an action regardless of the method that you use to perform the operation. For example, suppose that you have a policy that allows the `iam:GetRole` action. A user with that policy can get role information from the AWS Management Console, the AWS CLI, or the AWS API.

### Identity-based policies
<a name="security_iam_access-manage-id-based-policies"></a>

Identity-based policies are JSON permissions policy documents that you can attach to an identity, such as a permission set or role. These policies control what actions that identity can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

Identity-based policies can be further categorized as *inline policies* or *managed policies*. Inline policies are embedded directly into a single permission set or role. Managed policies are standalone policies that you can attach to multiple permission sets and roles in your AWS account. Managed policies include AWS managed policies and customer managed policies. To learn how to choose between a managed policy or an inline policy, see [Choosing between managed policies and inline policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#choosing-managed-or-inline) in the *IAM User Guide*.

For information about AWS managed policies that are specific to Amazon Aurora, see [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md).

### Other policy types
<a name="security_iam_access-manage-other-policies"></a>

AWS supports additional, less-common policy types. These policy types can set the maximum permissions granted to you by the more common policy types. 
+ **Permissions boundaries** – A permissions boundary is an advanced feature in which you set the maximum permissions that an identity-based policy can grant to an IAM entity (permission set or role). You can set a permissions boundary for an entity. The resulting permissions are the intersection of the entity's identity-based policies and its permissions boundaries. Resource-based policies that specify the permission set or role in the `Principal` field are not limited by the permissions boundary. An explicit deny in any of these policies overrides the allow. For more information about permissions boundaries, see [Permissions boundaries for IAM entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) in the *IAM User Guide*.
+ **Service control policies (SCPs)** – SCPs are JSON policies that specify the maximum permissions for an organization or organizational unit (OU) in AWS Organizations. AWS Organizations is a service for grouping and centrally managing multiple AWS accounts that your business owns. If you enable all features in an organization, then you can apply service control policies (SCPs) to any or all of your accounts. The SCP limits permissions for entities in member accounts, including each AWS account root user. For more information about Organizations and SCPs, see [How SCPs work](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_about-scps.html) in the *AWS Organizations User Guide*.
+ **Session policies** – Session policies are advanced policies that you pass as a parameter when you programmatically create a temporary session for a role or federated user. The resulting session's permissions are the intersection of the permission set's or role's identity-based policies and the session policies. Permissions can also come from a resource-based policy. An explicit deny in any of these policies overrides the allow. For more information, see [Session policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session) in the *IAM User Guide*. 

### Multiple policy types
<a name="security_iam_access-manage-multiple-policies"></a>

When multiple types of policies apply to a request, the resulting permissions are more complicated to understand. To learn how AWS determines whether to allow a request when multiple policy types are involved, see [Policy evaluation logic](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html) in the *IAM User Guide*.

# How Amazon Aurora works with IAM
<a name="security_iam_service-with-iam"></a>

Before you use IAM to manage access to Amazon Aurora, you should understand what IAM features are available to use with Aurora.

The following table lists IAM features you can use with Amazon Aurora:


| IAM feature | Amazon Aurora support | 
| --- | --- | 
|  [Identity-based policies](#security_iam_service-with-iam-id-based-policies)  |  Yes  | 
|  [Resource-based policies](#security_iam_service-with-iam-resource-based-policies)  |  No  | 
|  [Policy actions](#security_iam_service-with-iam-id-based-policies-actions)  |  Yes  | 
|  [Policy resources](#security_iam_service-with-iam-id-based-policies-resources)  |  Yes  | 
|  [Policy condition keys (service-specific)](#UsingWithRDS.IAM.Conditions)  |  Yes  | 
|  [ACLs](#security_iam_service-with-iam-acls)  |  No  | 
|  [Attribute-based access control (ABAC) (tags in policies)](#security_iam_service-with-iam-tags)  |  Yes  | 
|  [Temporary credentials](#security_iam_service-with-iam-roles-tempcreds)  |  Yes  | 
|  [Forward access sessions](#security_iam_service-with-iam-principal-permissions)  |  Yes  | 
|  [Service roles](#security_iam_service-with-iam-roles-service)  |  Yes  | 
|  [Service-linked roles](#security_iam_service-with-iam-roles-service-linked)  |  Yes  | 

To get a high-level view of how Amazon Aurora and other AWS services work with IAM, see [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) in the *IAM User Guide*.

**Topics**
+ [Aurora identity-based policies](#security_iam_service-with-iam-id-based-policies)
+ [Resource-based policies within Aurora](#security_iam_service-with-iam-resource-based-policies)
+ [Policy actions for Aurora](#security_iam_service-with-iam-id-based-policies-actions)
+ [Policy resources for Aurora](#security_iam_service-with-iam-id-based-policies-resources)
+ [Policy condition keys for Aurora](#UsingWithRDS.IAM.Conditions)
+ [Access control lists (ACLs) in Aurora](#security_iam_service-with-iam-acls)
+ [Attribute-based access control (ABAC) in policies with Aurora tags](#security_iam_service-with-iam-tags)
+ [Using temporary credentials with Aurora](#security_iam_service-with-iam-roles-tempcreds)
+ [Forward access sessions for Aurora](#security_iam_service-with-iam-principal-permissions)
+ [Service roles for Aurora](#security_iam_service-with-iam-roles-service)
+ [Service-linked roles for Aurora](#security_iam_service-with-iam-roles-service-linked)

## Aurora identity-based policies
<a name="security_iam_service-with-iam-id-based-policies"></a>

**Supports identity-based policies:** Yes.

Identity-based policies are JSON permissions policy documents that you can attach to an identity, such as an IAM user, group of users, or role. These policies control what actions users and roles can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see [Define custom IAM permissions with customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

With IAM identity-based policies, you can specify allowed or denied actions and resources as well as the conditions under which actions are allowed or denied. To learn about all of the elements that you can use in a JSON policy, see [IAM JSON policy elements reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html) in the *IAM User Guide*.

### Identity-based policy examples for Aurora
<a name="security_iam_service-with-iam-id-based-policies-examples"></a>

To view examples of Aurora identity-based policies, see [Identity-based policy examples for Amazon Aurora](security_iam_id-based-policy-examples.md).

## Resource-based policies within Aurora
<a name="security_iam_service-with-iam-resource-based-policies"></a>

**Supports resource-based policies:** No.

Resource-based policies are JSON policy documents that you attach to a resource. Examples of resource-based policies are IAM *role trust policies* and Amazon S3 *bucket policies*. In services that support resource-based policies, service administrators can use them to control access to a specific resource. For the resource where the policy is attached, the policy defines what actions a specified principal can perform on that resource and under what conditions. You must [specify a principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) in a resource-based policy. Principals can include accounts, users, roles, federated users, or AWS services.

To enable cross-account access, you can specify an entire account or IAM entities in another account as the principal in a resource-based policy. For more information, see [Cross account resource access in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-cross-account-resource-access.html) in the *IAM User Guide*.

## Policy actions for Aurora
<a name="security_iam_service-with-iam-id-based-policies-actions"></a>

**Supports policy actions:** Yes.

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Action` element of a JSON policy describes the actions that you can use to allow or deny access in a policy. Include actions in a policy to grant permissions to perform the associated operation.

Policy actions in Aurora use the following prefix before the action: `rds:`. For example, to grant someone permission to describe DB instances with the Amazon RDS `DescribeDBInstances` API operation, you include the `rds:DescribeDBInstances` action in their policy. Policy statements must include either an `Action` or `NotAction` element. Aurora defines its own set of actions that describe tasks that you can perform with this service.

To specify multiple actions in a single statement, separate them with commas as follows.

```
"Action": [
      "rds:action1",
      "rds:action2"
]
```

You can specify multiple actions using wildcards (\*). For example, to specify all actions that begin with the word `Describe`, include the following action.

```
"Action": "rds:Describe*"
```



To see a list of Aurora actions, see [Actions Defined by Amazon RDS](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html#amazonrds-actions-as-permissions) in the *Service Authorization Reference*.
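Putting these elements together, a minimal identity-based policy that allows only describe-style access might look like the following sketch. The two actions shown are real RDS actions chosen for illustration.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds:DescribeDBClusters",
                "rds:DescribeDBInstances"
            ],
            "Resource": "*"
        }
    ]
}
```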

## Policy resources for Aurora
<a name="security_iam_service-with-iam-id-based-policies-resources"></a>

**Supports policy resources:** Yes.

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Resource` JSON policy element specifies the object or objects to which the action applies. As a best practice, specify a resource using its [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html). For actions that don't support resource-level permissions, use a wildcard (\*) to indicate that the statement applies to all resources.

```
"Resource": "*"
```

The DB instance resource has the following Amazon Resource Name (ARN).

```
arn:${Partition}:rds:${Region}:${Account}:${ResourceType}/${Resource}
```

For more information about the format of ARNs, see [Amazon Resource Names (ARNs) and AWS service namespaces](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).

For example, to specify the `dbtest` DB instance in your statement, use the following ARN.

```
"Resource": "arn:aws:rds:us-west-2:123456789012:db:dbtest"
```

To specify all DB instances that belong to a specific account, use the wildcard (\*).

```
"Resource": "arn:aws:rds:us-east-1:123456789012:db:*"
```

Some RDS API operations, such as those for creating resources, can't be performed on a specific resource. In those cases, use the wildcard (\*).

```
"Resource": "*"
```

Many Amazon RDS API operations involve multiple resources. For example, `CreateDBInstance` creates a DB instance. You can specify that a user must use a specific security group and parameter group when creating a DB instance. To specify multiple resources in a single statement, separate the ARNs with commas. 

```
"Resource": [
      "resource1",
      "resource2"
]
```

To see a list of Aurora resource types and their ARNs, see [Resources Defined by Amazon RDS](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html#amazonrds-resources-for-iam-policies) in the *Service Authorization Reference*. To learn with which actions you can specify the ARN of each resource, see [Actions Defined by Amazon RDS](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html#amazonrds-actions-as-permissions).

## Policy condition keys for Aurora
<a name="UsingWithRDS.IAM.Conditions"></a>

**Supports service-specific policy condition keys:** Yes.

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Condition` element specifies when statements execute based on defined criteria. You can create conditional expressions that use [condition operators](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html), such as equals or less than, to match the condition in the policy with values in the request. To see all AWS global condition keys, see [AWS global condition context keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html) in the *IAM User Guide*.

Aurora defines its own set of condition keys and also supports using some global condition keys.



All RDS API operations support the `aws:RequestedRegion` condition key. 

To see a list of Aurora condition keys, see [Condition Keys for Amazon RDS](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html#amazonrds-policy-keys) in the *Service Authorization Reference*. To learn with which actions and resources you can use a condition key, see [Actions Defined by Amazon RDS](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html#amazonrds-actions-as-permissions).
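For example, the following statement sketch uses the `aws:RequestedRegion` global condition key to allow describe actions only in a single Region. The Region value is illustrative.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "rds:Describe*",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:RequestedRegion": "us-east-1"
                }
            }
        }
    ]
}
```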

## Access control lists (ACLs) in Aurora
<a name="security_iam_service-with-iam-acls"></a>

**Supports access control lists (ACLs):** No.

Access control lists (ACLs) control which principals (account members, users, or roles) have permissions to access a resource. ACLs are similar to resource-based policies, although they do not use the JSON policy document format.

## Attribute-based access control (ABAC) in policies with Aurora tags
<a name="security_iam_service-with-iam-tags"></a>

**Supports attribute-based access control (ABAC) tags in policies:** Yes

Attribute-based access control (ABAC) is an authorization strategy that defines permissions based on attributes called tags. You can attach tags to IAM entities and AWS resources, then design ABAC policies to allow operations when the principal's tag matches the tag on the resource.

To control access based on tags, you provide tag information in the [condition element](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) of a policy using the `aws:ResourceTag/key-name`, `aws:RequestTag/key-name`, or `aws:TagKeys` condition keys.
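For example, a statement like the following sketch allows stopping and starting only DB instances that carry a matching tag. The `project` tag key, its `analytics` value, and the account ID are illustrative assumptions:

```
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "AllowStartStopByProjectTag",
         "Effect": "Allow",
         "Action": [
            "rds:StartDBInstance",
            "rds:StopDBInstance"
         ],
         "Resource": "arn:aws:rds:*:123456789012:db:*",
         "Condition": {
            "StringEquals": {
               "aws:ResourceTag/project": "analytics"
            }
         }
      }
   ]
}
```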

If a service supports all three condition keys for every resource type, then the value is **Yes** for the service. If a service supports all three condition keys for only some resource types, then the value is **Partial**.

For more information about ABAC, see [Define permissions with ABAC authorization](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) in the *IAM User Guide*. To view a tutorial with steps for setting up ABAC, see [Use attribute-based access control (ABAC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_attribute-based-access-control.html) in the *IAM User Guide*.

For more information about tagging Aurora resources, see [Specifying conditions: Using custom tags](UsingWithRDS.IAM.SpecifyingCustomTags.md). To view an example identity-based policy for limiting access to a resource based on the tags on that resource, see [Grant permission for actions on a resource with a specific tag with two different values](security_iam_id-based-policy-examples-create-and-modify-examples.md#security_iam_id-based-policy-examples-grant-permissions-tags).

## Using temporary credentials with Aurora
<a name="security_iam_service-with-iam-roles-tempcreds"></a>

**Supports temporary credentials:** Yes

Temporary credentials provide short-term access to AWS resources and are automatically created when you use federation or switch roles. AWS recommends that you dynamically generate temporary credentials instead of using long-term access keys. For more information, see [Temporary security credentials in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) and [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) in the *IAM User Guide*.

## Forward access sessions for Aurora
<a name="security_iam_service-with-iam-principal-permissions"></a>

**Supports forward access sessions:** Yes

Forward access sessions (FAS) use the permissions of the principal calling an AWS service, combined with the permissions of the requesting AWS service, to make requests to downstream services. For policy details when making FAS requests, see [Forward access sessions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_forward_access_sessions.html).

## Service roles for Aurora
<a name="security_iam_service-with-iam-roles-service"></a>

**Supports service roles:** Yes

 A service role is an [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see [Create a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the *IAM User Guide*. 

**Warning**  
Changing the permissions for a service role might break Aurora functionality. Edit service roles only when Aurora provides guidance to do so.

## Service-linked roles for Aurora
<a name="security_iam_service-with-iam-roles-service-linked"></a>

**Supports service-linked roles:** Yes

A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit, the permissions for service-linked roles.

For details about using Aurora service-linked roles, see [Using service-linked roles for Amazon Aurora](UsingWithRDS.IAM.ServiceLinkedRoles.md).

# Identity-based policy examples for Amazon Aurora
<a name="security_iam_id-based-policy-examples"></a>

By default, permission sets and roles don't have permission to create or modify Aurora resources. They also can't perform tasks using the AWS Management Console, AWS CLI, or AWS API. An administrator must create IAM policies that grant permission sets and roles permission to perform specific API operations on the specified resources they need. The administrator must then attach those policies to the permission sets or roles that require those permissions.

To learn how to create an IAM identity-based policy using these example JSON policy documents, see [Creating policies on the JSON tab](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html#access_policies_create-json-editor) in the *IAM User Guide*.

**Topics**
+ [Policy best practices](#security_iam_service-with-iam-policy-best-practices)
+ [Using the Aurora console](#security_iam_id-based-policy-examples-console)
+ [Permissions required to use the console](#UsingWithRDS.IAM.RequiredPermissions.Console)
+ [Allow users to view their own permissions](#security_iam_id-based-policy-examples-view-own-permissions)
+ [Permission policies to create, modify, and delete resources in Aurora](security_iam_id-based-policy-examples-create-and-modify-examples.md)
+ [Example policies: Using condition keys](UsingWithRDS.IAM.Conditions.Examples.md)
+ [Specifying conditions: Using custom tags](UsingWithRDS.IAM.SpecifyingCustomTags.md)
+ [Grant permission to tag Aurora resources during creation](security_iam_id-based-policy-examples-grant-permissions-tags-on-create.md)

## Policy best practices
<a name="security_iam_service-with-iam-policy-best-practices"></a>

Identity-based policies determine whether someone can create, access, or delete Amazon RDS resources in your account. These actions can incur costs for your AWS account. When you create or edit identity-based policies, follow these guidelines and recommendations:
+ **Get started with AWS managed policies and move toward least-privilege permissions** – To get started granting permissions to your users and workloads, use the *AWS managed policies* that grant permissions for many common use cases. They are available in your AWS account. We recommend that you reduce permissions further by defining AWS customer managed policies that are specific to your use cases. For more information, see [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) or [AWS managed policies for job functions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html) in the *IAM User Guide*.
+ **Apply least-privilege permissions** – When you set permissions with IAM policies, grant only the permissions required to perform a task. You do this by defining the actions that can be taken on specific resources under specific conditions, also known as *least-privilege permissions*. For more information about using IAM to apply permissions, see [ Policies and permissions in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) in the *IAM User Guide*.
+ **Use conditions in IAM policies to further restrict access** – You can add a condition to your policies to limit access to actions and resources. For example, you can write a policy condition to specify that all requests must be sent using SSL. You can also use conditions to grant access to service actions if they are used through a specific AWS service, such as CloudFormation. For more information, see [ IAM JSON policy elements: Condition](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) in the *IAM User Guide*.
+ **Use IAM Access Analyzer to validate your IAM policies to ensure secure and functional permissions** – IAM Access Analyzer validates new and existing policies so that the policies adhere to the IAM policy language (JSON) and IAM best practices. IAM Access Analyzer provides more than 100 policy checks and actionable recommendations to help you author secure and functional policies. For more information, see [Validate policies with IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-validation.html) in the *IAM User Guide*.
+ **Require multi-factor authentication (MFA)** – If you have a scenario that requires IAM users or a root user in your AWS account, turn on MFA for additional security. To require MFA when API operations are called, add MFA conditions to your policies. For more information, see [ Secure API access with MFA](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html) in the *IAM User Guide*.

For more information about best practices in IAM, see [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*.

## Using the Aurora console
<a name="security_iam_id-based-policy-examples-console"></a>

To access the Amazon Aurora console, you must have a minimum set of permissions. These permissions must allow you to list and view details about the Amazon Aurora resources in your AWS account. If you create an identity-based policy that is more restrictive than the minimum required permissions, the console won't function as intended for entities (users or roles) with that policy.

You don't need to allow minimum console permissions for users that are making calls only to the AWS CLI or the AWS API. Instead, allow access to only the actions that match the API operation that you're trying to perform.

To ensure that those entities can still use the Aurora console, also attach the following AWS managed policy to the entities.

```
AmazonRDSReadOnlyAccess
```

For more information, see [Adding permissions to a user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) in the *IAM User Guide*.

## Permissions required to use the console
<a name="UsingWithRDS.IAM.RequiredPermissions.Console"></a>

For a user to work with the console, that user must have a minimum set of permissions. These permissions allow the user to describe the Amazon Aurora resources for their AWS account and to provide other related information, including Amazon EC2 security and network information.

If you create an IAM policy that is more restrictive than the minimum required permissions, the console doesn't function as intended for users with that IAM policy. To ensure that those users can still use the console, also attach the `AmazonRDSReadOnlyAccess` managed policy to the user, as described in [Managing access using policies](UsingWithRDS.IAM.md#security_iam_access-manage).

You don't need to allow minimum console permissions for users that are making calls only to the AWS CLI or the Amazon RDS API. 

The following AWS managed policy grants full access to all Amazon Aurora resources in an AWS account:

```
AmazonRDSFullAccess             
```

## Allow users to view their own permissions
<a name="security_iam_id-based-policy-examples-view-own-permissions"></a>

This example shows how you might create a policy that allows IAM users to view the inline and managed policies that are attached to their user identity. This policy includes permissions to complete this action on the console or programmatically using the AWS CLI or AWS API.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ViewOwnUserInfo",
            "Effect": "Allow",
            "Action": [
                "iam:GetUserPolicy",
                "iam:ListGroupsForUser",
                "iam:ListAttachedUserPolicies",
                "iam:ListUserPolicies",
                "iam:GetUser"
            ],
            "Resource": ["arn:aws:iam::*:user/${aws:username}"]
        },
        {
            "Sid": "NavigateInConsole",
            "Effect": "Allow",
            "Action": [
                "iam:GetGroupPolicy",
                "iam:GetPolicyVersion",
                "iam:GetPolicy",
                "iam:ListAttachedGroupPolicies",
                "iam:ListGroupPolicies",
                "iam:ListPolicyVersions",
                "iam:ListPolicies",
                "iam:ListUsers"
            ],
            "Resource": "*"
        }
    ]
}
```

# Permission policies to create, modify, and delete resources in Aurora
<a name="security_iam_id-based-policy-examples-create-and-modify-examples"></a>

The following sections present examples of permission policies that grant and restrict access to resources:

## Allow a user to create DB instances in an AWS account
<a name="security_iam_id-based-policy-examples-create-db-instance-in-account"></a>

The following example policy allows users in the AWS account `123456789012` to create DB instances. The policy requires that the name of the new DB instance begin with `test`. The new DB instance must also use the MySQL database engine and the `db.t2.micro` DB instance class. In addition, the new DB instance must use an option group and a DB parameter group that start with `default`, and it must use the `default` subnet group.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "AllowCreateDBInstanceOnly",
         "Effect": "Allow",
         "Action": [
            "rds:CreateDBInstance"
         ],
         "Resource": [
            "arn:aws:rds:*:123456789012:db:test*",
            "arn:aws:rds:*:123456789012:og:default*",
            "arn:aws:rds:*:123456789012:pg:default*",
            "arn:aws:rds:*:123456789012:subgrp:default"
         ],
         "Condition": {
            "StringEquals": {
               "rds:DatabaseEngine": "mysql",
               "rds:DatabaseClass": "db.t2.micro"
            }
         }
      }
   ]
}
```

------

The policy includes a single statement that specifies the following permissions for the user:
+ The policy allows the account to create a DB instance using the [CreateDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html) API operation (this also applies to the [create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html) AWS CLI command and the AWS Management Console).
+ The `Resource` element specifies the resources on which the user can perform actions. You specify resources using an Amazon Resource Name (ARN). This ARN includes the name of the service that the resource belongs to (`rds`), the AWS Region (`*` indicates any region in this example), the AWS account number (`123456789012` is the account number in this example), and the type of resource. For more information about creating ARNs, see [Amazon Resource Names (ARNs) in Amazon RDS](USER_Tagging.ARN.md).

  The `Resource` element in the example specifies the following policy constraints on resources for the user:
  + The DB instance identifier for the new DB instance must begin with `test` (for example, `testCustomerData1`, `test-region2-data`).
  + The option group for the new DB instance must begin with `default`.
  + The DB parameter group for the new DB instance must begin with `default`.
  + The subnet group for the new DB instance must be the `default` subnet group.
+ The `Condition` element specifies that the DB engine must be MySQL and the DB instance class must be `db.t2.micro`. The `Condition` element specifies the conditions when a policy should take effect. You can add additional permissions or restrictions by using the `Condition` element. For more information about specifying conditions, see [Policy condition keys for Aurora](security_iam_service-with-iam.md#UsingWithRDS.IAM.Conditions). This example specifies the `rds:DatabaseEngine` and `rds:DatabaseClass` conditions. For information about the valid condition values for `rds:DatabaseEngine`, see the list under the `Engine` parameter in [CreateDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html). For information about the valid condition values for `rds:DatabaseClass`, see [Supported DB engines for DB instance classes](Concepts.DBInstanceClass.SupportAurora.md). 

The policy doesn't specify the `Principal` element because in an identity-based policy you don't specify the principal who gets the permission. When you attach a policy to a user, the user is the implicit principal. When you attach a permission policy to an IAM role, the principal identified in the role's trust policy gets the permissions.

To see a list of Aurora actions, see [Actions Defined by Amazon RDS](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html#amazonrds-actions-as-permissions) in the *Service Authorization Reference*.

## Allow a user to perform any describe action on any RDS resource
<a name="IAMPolicyExamples-RDS-perform-describe-action"></a>

The following permissions policy grants permissions to a user to run all of the actions that begin with `Describe`. These actions show information about an RDS resource, such as a DB instance. The wildcard character (`*`) in the `Resource` element indicates that the actions are allowed for all Amazon Aurora resources owned by the account.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "AllowRDSDescribe",
         "Effect": "Allow",
         "Action": "rds:Describe*",
         "Resource": "*"
      }
   ]
}
```

------

## Allow a user to create a DB instance that uses the specified DB parameter group and subnet group
<a name="security_iam_id-based-policy-examples-create-db-instance-specified-groups"></a>

The following permissions policy allows a user to create DB instances only with the `mydbpg` DB parameter group and the `mydbsubnetgroup` DB subnet group.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "VisualEditor0",
         "Effect": "Allow",
         "Action": "rds:CreateDBInstance",
         "Resource": [
            "arn:aws:rds:*:*:pg:mydbpg",
            "arn:aws:rds:*:*:subgrp:mydbsubnetgroup"
         ]
      }
   ]
}
```

------

## Grant permission for actions on a resource with a specific tag with two different values
<a name="security_iam_id-based-policy-examples-grant-permissions-tags"></a>

You can use conditions in your identity-based policy to control access to Aurora resources based on tags. The following policy allows permission to perform the `CreateDBSnapshot` API operation on DB instances with either the `stage` tag set to `development` or `test`.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"AllowAnySnapshotName",
         "Effect":"Allow",
         "Action":[
            "rds:CreateDBSnapshot"
         ],
         "Resource":"arn:aws:rds:*:123456789012:snapshot:*"
      },
      {
         "Sid":"AllowDevTestToCreateSnapshot",
         "Effect":"Allow",
         "Action":[
            "rds:CreateDBSnapshot"
         ],
         "Resource":"arn:aws:rds:*:123456789012:db:*",
         "Condition":{
            "StringEquals":{
                "rds:db-tag/stage":[
                  "development",
                  "test"
               ]
            }
         }
      }
   ]
}
```

------

The following policy allows permission to perform the `ModifyDBInstance` API operation on DB instances with either the `stage` tag set to `development` or `test`.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"AllowChangingParameterOptionSecurityGroups",
         "Effect":"Allow",
         "Action":[
            "rds:ModifyDBInstance"
         ],
         "Resource": [
            "arn:aws:rds:*:123456789012:pg:*",
            "arn:aws:rds:*:123456789012:secgrp:*",
            "arn:aws:rds:*:123456789012:og:*"
         ]
      },
      {
         "Sid":"AllowDevTestToModifyInstance",
         "Effect":"Allow",
         "Action":[
            "rds:ModifyDBInstance"
         ],
         "Resource":"arn:aws:rds:*:123456789012:db:*",
         "Condition":{
            "StringEquals":{
                "rds:db-tag/stage":[
                  "development",
                  "test"
               ]
            }
         }
      }
   ]
}
```

------

## Prevent a user from deleting a DB instance
<a name="IAMPolicyExamples-RDS-prevent-db-deletion"></a>

The following permissions policy prevents a user from deleting a specific DB instance. For example, you might want to deny the ability to delete your production DB instances to any user that is not an administrator.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "DenyDelete1",
         "Effect": "Deny",
         "Action": "rds:DeleteDBInstance",
         "Resource": "arn:aws:rds:us-west-2:123456789012:db:my-mysql-instance"
      }
   ]
}
```

------

## Deny all access to a resource
<a name="IAMPolicyExamples-RDS-deny-all-access"></a>

You can explicitly deny access to a resource. Deny policies take precedence over allow policies. The following policy explicitly denies a user the ability to manage a resource:

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Effect": "Deny",
         "Action": "rds:*",
         "Resource": "arn:aws:rds:us-east-1:123456789012:db:mydb"
      }
   ]
}
```

------

# Example policies: Using condition keys
<a name="UsingWithRDS.IAM.Conditions.Examples"></a>

Following are examples of how you can use condition keys in Amazon Aurora IAM permissions policies. 

## Example 1: Grant permission to create a DB instance that uses a specific DB engine and isn't MultiAZ
<a name="w2aac73c48c33c21b5"></a>

The following policy uses RDS condition keys and allows a user to create only DB instances that use the MySQL database engine and aren't MultiAZ deployments. The `Condition` element specifies that the database engine must be MySQL and that the `MultiAz` value must be false.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "AllowMySQLCreate",
         "Effect": "Allow",
         "Action": "rds:CreateDBInstance",
         "Resource": "*",
         "Condition": {
            "StringEquals": {
               "rds:DatabaseEngine": "mysql"
            },
            "Bool": {
               "rds:MultiAz": false
            }
         }
      }
   ]
}
```

------

## Example 2: Explicitly deny permission to create DB instances for certain DB instance classes and create DB instances that use Provisioned IOPS
<a name="w2aac73c48c33c21b7"></a>

The following policy explicitly denies permission to create DB instances that use the DB instance classes `r3.8xlarge` and `m4.10xlarge`, which are the largest and most expensive DB instance classes. This policy also prevents users from creating DB instances that use Provisioned IOPS, which incurs an additional cost. 

Explicitly denying permission supersedes any other permissions granted. This ensures that identities don't accidentally get permissions that you never intend to grant.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement": [
      {
         "Sid": "DenyLargeCreate",
         "Effect": "Deny",
         "Action": "rds:CreateDBInstance",
         "Resource": "*",
         "Condition": {
            "StringEquals": {
               "rds:DatabaseClass": [
                  "db.r3.8xlarge",
                  "db.m4.10xlarge"
               ]
            }
         }
      },
      {
         "Sid": "DenyPIOPSCreate",
         "Effect": "Deny",
         "Action": "rds:CreateDBInstance",
         "Resource": "*",
         "Condition": {
            "NumericNotEquals": {
               "rds:Piops": "0"
            }
         }
      }
   ]
}
```

------

## Example 3: Limit the set of tag keys and values that can be used to tag a resource
<a name="w2aac73c48c33c21b9"></a>

The following policy uses an RDS condition key to allow a tag with the key `stage` and a value of `test`, `qa`, or `production` to be added to a resource.

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTagEdits",
      "Effect": "Allow",
      "Action": [
        "rds:AddTagsToResource",
        "rds:RemoveTagsFromResource"
      ],
      "Resource": "arn:aws:rds:us-east-1:123456789012:db:db-123456",
      "Condition": {
        "StringEquals": {
          "rds:req-tag/stage": [
            "test",
            "qa",
            "production"
          ]
        }
      }
    }
  ]
}
```

------

# Specifying conditions: Using custom tags
<a name="UsingWithRDS.IAM.SpecifyingCustomTags"></a>

Amazon Aurora supports specifying conditions in an IAM policy using custom tags.

For example, suppose that you add a tag named `environment` to your DB instances with values such as `beta`, `staging`, `production`, and so on. If you do, you can create a policy that restricts certain users to DB instances based on the `environment` tag value.

**Note**  
Custom tag identifiers are case-sensitive.

The following table lists the RDS tag identifiers that you can use in a `Condition` element. 

<a name="rds-iam-condition-tag-reference"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAM.SpecifyingCustomTags.html)

The syntax for a custom tag condition is as follows:

`"Condition":{"StringEquals":{"rds:rds-tag-identifier/tag-name": ["value"]} }` 

For example, the following `Condition` element applies to DB instances with a tag named `environment` and a tag value of `production`. 

`"Condition":{"StringEquals":{"rds:db-tag/environment": ["production"]} }`

For information about creating tags, see [Tagging Amazon Aurora and Amazon RDS resources](USER_Tagging.md).

**Important**  
If you manage access to your RDS resources using tagging, we recommend that you secure access to the tags for your RDS resources. You can manage access to tags by creating policies for the `AddTagsToResource` and `RemoveTagsFromResource` actions. For example, the following policy denies users the ability to add or remove tags for all resources. You can then create policies to allow specific users to add or remove tags.   

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"DenyTagUpdates",
         "Effect":"Deny",
         "Action":[
            "rds:AddTagsToResource",
            "rds:RemoveTagsFromResource"
         ],
         "Resource":"*"
      }
   ]
}
```

To see a list of Aurora actions, see [Actions Defined by Amazon RDS](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html#amazonrds-actions-as-permissions) in the *Service Authorization Reference*.

## Example policies: Using custom tags
<a name="UsingWithRDS.IAM.Conditions.Tags.Examples"></a>

Following are examples of how you can use custom tags in Amazon Aurora IAM permissions policies. For more information about adding tags to an Amazon Aurora resource, see [Amazon Resource Names (ARNs) in Amazon RDS](USER_Tagging.ARN.md). 

**Note**  
All examples use the us-west-2 region and contain fictitious account IDs.

### Example 1: Grant permission for actions on a resource with a specific tag with two different values
<a name="w2aac73c48c33c23c29b6"></a>

The following policy allows permission to perform the `CreateDBSnapshot` API operation on DB instances with either the `stage` tag set to `development` or `test`.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"AllowAnySnapshotName",
         "Effect":"Allow",
         "Action":[
            "rds:CreateDBSnapshot"
         ],
         "Resource":"arn:aws:rds:*:123456789012:snapshot:*"
      },
      {
         "Sid":"AllowDevTestToCreateSnapshot",
         "Effect":"Allow",
         "Action":[
            "rds:CreateDBSnapshot"
         ],
         "Resource":"arn:aws:rds:*:123456789012:db:*",
         "Condition":{
            "StringEquals":{
                "rds:db-tag/stage":[
                  "development",
                  "test"
               ]
            }
         }
      }
   ]
}
```

------

The following policy allows permission to perform the `ModifyDBInstance` API operation on DB instances with either the `stage` tag set to `development` or `test`.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"AllowChangingParameterOptionSecurityGroups",
         "Effect":"Allow",
         "Action":[
            "rds:ModifyDBInstance"
         ],
         "Resource": [
            "arn:aws:rds:*:123456789012:pg:*",
            "arn:aws:rds:*:123456789012:secgrp:*",
            "arn:aws:rds:*:123456789012:og:*"
         ]
      },
      {
         "Sid":"AllowDevTestToModifyInstance",
         "Effect":"Allow",
         "Action":[
            "rds:ModifyDBInstance"
         ],
         "Resource":"arn:aws:rds:*:123456789012:db:*",
         "Condition":{
            "StringEquals":{
                "rds:db-tag/stage":[
                  "development",
                  "test"
               ]
            }
         }
      }
   ]
}
```

------

### Example 2: Explicitly deny permission to create a DB instance that uses specified DB parameter groups
<a name="w2aac73c48c33c23c29b8"></a>

The following policy explicitly denies permission to create a DB instance that uses DB parameter groups with specific tag values. You might apply this policy if you require that a specific customer-created DB parameter group always be used when creating DB instances. Policies that use `Deny` are most often used to restrict access that was granted by a broader policy.

Explicitly denying permission supersedes any other permissions granted. This ensures that identities don't accidentally get permissions that you never intend to grant.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"DenyProductionCreate",
         "Effect":"Deny",
         "Action":"rds:CreateDBInstance",
         "Resource":"arn:aws:rds:*:123456789012:pg:*",
         "Condition":{
            "StringEquals":{
               "rds:pg-tag/usage":"prod"
            }
         }
      }
   ]
}
```

------

### Example 3: Grant permission for actions on a DB instance with an instance name that is prefixed with a user name
<a name="w2aac73c48c33c23c29c10"></a>

The following policy allows permission to call any API operation (except `AddTagsToResource` and `RemoveTagsFromResource`) on a DB instance that has a DB instance name prefixed with the user's name, and that either has a tag called `stage` equal to `devo` or has no tag called `stage` at all.

The `Resource` line in the policy identifies a resource by its Amazon Resource Name (ARN). For more information about using ARNs with Amazon Aurora resources, see [Amazon Resource Names (ARNs) in Amazon RDS](USER_Tagging.ARN.md). 

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"AllowFullDevAccessNoTags",
         "Effect":"Allow",
         "NotAction":[
            "rds:AddTagsToResource",
            "rds:RemoveTagsFromResource"
         ],
         "Resource":"arn:aws:rds:*:123456789012:db:${aws:username}*",
         "Condition":{
            "StringEqualsIfExists":{
               "rds:db-tag/stage":"devo"
            }
         }
      }
   ]
}
```

------
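The behavior of `StringEqualsIfExists` in the policy above (allow when the `stage` tag is absent, or present with the value `devo`) can be modeled in a few lines of Python. This is an illustrative sketch of the condition operator only, not the actual IAM evaluation engine:

```python
def string_equals_if_exists(tags, key, expected):
    """Simplified model of IAM's StringEqualsIfExists operator:
    the condition passes when the condition key is absent, or when
    it is present with a matching value."""
    if key not in tags:
        return True  # condition key missing -> IfExists passes
    return tags[key] == expected

# A DB instance with no "stage" tag is allowed.
assert string_equals_if_exists({}, "stage", "devo")
# A DB instance tagged stage=devo is allowed.
assert string_equals_if_exists({"stage": "devo"}, "stage", "devo")
# A DB instance tagged stage=prod is not.
assert not string_equals_if_exists({"stage": "prod"}, "stage", "devo")
```

Without the `IfExists` suffix, a plain `StringEquals` condition fails whenever the tag is missing, which would block access to untagged DB instances.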

# Grant permission to tag Aurora resources during creation
<a name="security_iam_id-based-policy-examples-grant-permissions-tags-on-create"></a>

Some RDS API operations allow you to specify tags when you create the resource. You can use resource tags to implement attribute-based access control (ABAC). For more information, see [What is ABAC for AWS?](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) and [Controlling access to AWS resources using tags](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html).

To enable users to tag resources on creation, they must have permissions to use the action that creates the resource, such as `rds:CreateDBCluster`. If tags are specified in the create action, RDS performs additional authorization on the `rds:AddTagsToResource` action to verify that users have permission to create tags. Therefore, users must also have explicit permission to use the `rds:AddTagsToResource` action.

In the IAM policy definition for the `rds:AddTagsToResource` action, you can use the `aws:RequestTag` condition key to require tags in a request to tag a resource.

For example, the following policy allows users to create DB instances and apply tags during DB instance creation, but only with specific tag keys (`environment` or `project`):

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",
   "Statement": [
       {
           "Effect": "Allow",
           "Action": [
               "rds:CreateDBInstance"
           ],
           "Resource": "*"
       },
       {
           "Effect": "Allow",
           "Action": [
               "rds:AddTagsToResource"
           ],
           "Resource": "*",
           "Condition": {
               "StringEquals": {
                   "aws:RequestTag/environment": ["production", "development"],
                   "aws:RequestTag/project": ["dataanalytics", "webapp"]
               },
               "ForAllValues:StringEquals": {
                   "aws:TagKeys": ["environment", "project"]
               }
           }
       }
   ]
}
```

------

This policy denies any create DB instance request that includes tags other than the `environment` or `project` tags, or that doesn't specify either of these tags. Additionally, users must specify values for the tags that match the allowed values in the policy.
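One way to see why the policy behaves this way is to model its two condition operators in plain Python. This is a simplified sketch of IAM condition evaluation, not the actual service logic: `StringEquals` on `aws:RequestTag/<key>` requires each listed key to be present with an allowed value, and `ForAllValues:StringEquals` on `aws:TagKeys` requires every key in the request to come from the allowed set.

```python
ALLOWED_VALUES = {
    "environment": {"production", "development"},
    "project": {"dataanalytics", "webapp"},
}

def tag_request_allowed(request_tags):
    """Simplified model of the policy's Condition block."""
    # ForAllValues:StringEquals on aws:TagKeys: no keys outside the allowed set.
    if not set(request_tags) <= set(ALLOWED_VALUES):
        return False
    # StringEquals on each aws:RequestTag/<key>: every listed key must be
    # present with one of its allowed values.
    return all(
        key in request_tags and request_tags[key] in values
        for key, values in ALLOWED_VALUES.items()
    )

assert tag_request_allowed({"environment": "production", "project": "webapp"})
assert not tag_request_allowed({"environment": "staging", "project": "webapp"})
assert not tag_request_allowed({"environment": "production"})  # missing key
assert not tag_request_allowed(
    {"environment": "production", "project": "webapp", "owner": "alice"}
)
```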

The following policy allows users to create DB clusters and apply any tags during creation except the `environment=prod` tag:

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",
   "Statement": [
       {
           "Effect": "Allow",
           "Action": [
               "rds:CreateDBCluster"
           ],
           "Resource": "*"
       },
       {
           "Effect": "Allow",
           "Action": [
               "rds:AddTagsToResource"
           ],
           "Resource": "*",
           "Condition": {
               "StringNotEquals": {
                   "aws:RequestTag/environment": "prod"
               }
           }
       }
   ]
}
```

------
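The `StringNotEquals` check in the policy above can also be sketched as a small simulation. Note that `StringNotEquals` passes when the condition key is missing entirely, which is why requests with no `environment` tag are still allowed. This is illustrative Python, not the actual IAM evaluation logic:

```python
def tagging_allowed(request_tags):
    """Simplified model of StringNotEquals on aws:RequestTag/environment:
    the condition passes unless the request tags environment=prod.
    A missing key also passes, because the negated match has nothing
    to match against."""
    return request_tags.get("environment") != "prod"

assert tagging_allowed({"environment": "dev", "team": "analytics"})
assert tagging_allowed({"team": "analytics"})  # no environment tag at all
assert not tagging_allowed({"environment": "prod"})
```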

## Supported RDS API actions for tagging on creation
<a name="security_iam_id-based-policy-examples-supported-rds-api-actions-tagging-creation"></a>

The following RDS API actions support tagging when you create a resource. For these actions, you can specify tags when you create the resource:
+ `CreateBlueGreenDeployment`
+ `CreateCustomDBEngineVersion`
+ `CreateDBCluster`
+ `CreateDBClusterEndpoint`
+ `CreateDBClusterParameterGroup`
+ `CreateDBClusterSnapshot`
+ `CreateDBInstance`
+ `CreateDBInstanceReadReplica`
+ `CreateDBParameterGroup`
+ `CreateDBProxy`
+ `CreateDBProxyEndpoint`
+ `CreateDBSecurityGroup`
+ `CreateDBShardGroup`
+ `CreateDBSnapshot`
+ `CreateDBSubnetGroup`
+ `CreateEventSubscription`
+ `CreateGlobalCluster`
+ `CreateIntegration`
+ `CreateOptionGroup`
+ `CreateTenantDatabase`
+ `CopyDBClusterParameterGroup`
+ `CopyDBClusterSnapshot`
+ `CopyDBParameterGroup`
+ `CopyDBSnapshot`
+ `CopyOptionGroup`
+ `RestoreDBClusterFromS3`
+ `RestoreDBClusterFromSnapshot`
+ `RestoreDBClusterToPointInTime`
+ `RestoreDBInstanceFromDBSnapshot`
+ `RestoreDBInstanceFromS3`
+ `RestoreDBInstanceToPointInTime`
+ `PurchaseReservedDBInstancesOffering`

If you use the AWS CLI or the RDS API to create a resource with tags, use the `Tags` parameter to apply tags to the resource during creation.

For these API actions, if tagging fails, the resource isn't created and the request fails with an error. This all-or-nothing behavior ensures that resources are never created without the intended tags.

# AWS managed policies for Amazon RDS
<a name="rds-security-iam-awsmanpol"></a>

To add permissions to permission sets and roles, it's easier to use AWS managed policies than to write policies yourself. It takes time and expertise to [create IAM customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html) that provide your team with only the permissions they need. To get started quickly, you can use our AWS managed policies. These policies cover common use cases and are available in your AWS account. For more information about AWS managed policies, see [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) in the *IAM User Guide*.

AWS services maintain and update AWS managed policies. You can't change the permissions in AWS managed policies. Services occasionally add additional permissions to an AWS managed policy to support new features. This type of update affects all identities (permission sets and roles) where the policy is attached. Services are most likely to update an AWS managed policy when a new feature is launched or when new operations become available. Services don't remove permissions from an AWS managed policy, so policy updates don't break your existing permissions.

Additionally, AWS supports managed policies for job functions that span multiple services. For example, the `ReadOnlyAccess` AWS managed policy provides read-only access to all AWS services and resources. When a service launches a new feature, AWS adds read-only permissions for new operations and resources. For a list and descriptions of job function policies, see [AWS managed policies for job functions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html) in the *IAM User Guide*.

**Topics**
+ [AWS managed policy: AmazonRDSReadOnlyAccess](#rds-security-iam-awsmanpol-AmazonRDSReadOnlyAccess)
+ [AWS managed policy: AmazonRDSFullAccess](#rds-security-iam-awsmanpol-AmazonRDSFullAccess)
+ [AWS managed policy: AmazonRDSDataFullAccess](#rds-security-iam-awsmanpol-AmazonRDSDataFullAccess)
+ [AWS managed policy: AmazonRDSEnhancedMonitoringRole](#rds-security-iam-awsmanpol-AmazonRDSEnhancedMonitoringRole)
+ [AWS managed policy: AmazonRDSPerformanceInsightsReadOnly](#rds-security-iam-awsmanpol-AmazonRDSPerformanceInsightsReadOnly)
+ [AWS managed policy: AmazonRDSPerformanceInsightsFullAccess](#rds-security-iam-awsmanpol-AmazonRDSPerformanceInsightsFullAccess)
+ [AWS managed policy: AmazonRDSDirectoryServiceAccess](#rds-security-iam-awsmanpol-AmazonRDSDirectoryServiceAccess)
+ [AWS managed policy: AmazonRDSServiceRolePolicy](#rds-security-iam-awsmanpol-AmazonRDSServiceRolePolicy)
+ [AWS managed policy: AmazonRDSPreviewServiceRolePolicy](#rds-security-iam-awsmanpol-AmazonRDSPreviewServiceRolePolicy)
+ [AWS managed policy: AmazonRDSBetaServiceRolePolicy](#rds-security-iam-awsmanpol-AmazonRDSBetaServiceRolePolicy)

## AWS managed policy: AmazonRDSReadOnlyAccess
<a name="rds-security-iam-awsmanpol-AmazonRDSReadOnlyAccess"></a>

This policy allows read-only access to Amazon RDS through the AWS Management Console.

**Permissions details**

This policy includes the following permissions:
+ `rds` – Allows principals to describe Amazon RDS resources and list the tags for Amazon RDS resources.
+ `cloudwatch` – Allows principals to get Amazon CloudWatch metric statistics.
+ `ec2` – Allows principals to describe Availability Zones and networking resources.
+ `logs` – Allows principals to describe CloudWatch Logs log streams of log groups, and get CloudWatch Logs log events.
+ `devops-guru` – Allows principals to describe resources that have Amazon DevOps Guru coverage, which is specified either by CloudFormation stack names or resource tags.

For more information about this policy, including the JSON policy document, see [AmazonRDSReadOnlyAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSReadOnlyAccess.html) in the *AWS Managed Policy Reference Guide*.

## AWS managed policy: AmazonRDSFullAccess
<a name="rds-security-iam-awsmanpol-AmazonRDSFullAccess"></a>

This policy provides full access to Amazon RDS through the AWS Management Console.

**Permissions details**

This policy includes the following permissions:
+ `rds` – Allows principals full access to Amazon RDS.
+ `application-autoscaling` – Allows principals to describe and manage Application Auto Scaling scaling targets and policies.
+ `cloudwatch` – Allows principals to get CloudWatch metric statistics and manage CloudWatch alarms.
+ `ec2` – Allows principals to describe Availability Zones and networking resources.
+ `logs` – Allows principals to describe CloudWatch Logs log streams of log groups, and get CloudWatch Logs log events.
+ `outposts` – Allows principals to get AWS Outposts instance types.
+ `pi` – Allows principals to get Performance Insights metrics.
+ `sns` – Allows principals to manage Amazon Simple Notification Service (Amazon SNS) subscriptions and topics, and to publish Amazon SNS messages.
+ `devops-guru` – Allows principals to describe resources that have Amazon DevOps Guru coverage, which is specified either by CloudFormation stack names or resource tags.

For more information about this policy, including the JSON policy document, see [AmazonRDSFullAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSFullAccess.html) in the *AWS Managed Policy Reference Guide*.

## AWS managed policy: AmazonRDSDataFullAccess
<a name="rds-security-iam-awsmanpol-AmazonRDSDataFullAccess"></a>

This policy allows full access to use the Data API and the query editor on Aurora Serverless clusters in a specific AWS account. This policy allows the AWS account to get the value of a secret from AWS Secrets Manager. 

You can attach the `AmazonRDSDataFullAccess` policy to your IAM identities.

**Permissions details**

This policy includes the following permissions:
+ `dbqms` – Allows principals to access, create, delete, describe, and update queries. The Database Query Metadata Service (`dbqms`) is an internal-only service. It provides your recent and saved queries for the query editor on the AWS Management Console for multiple AWS services, including Amazon RDS.
+ `rds-data` – Allows principals to run SQL statements on Aurora Serverless databases.
+ `secretsmanager` – Allows principals to get the value of a secret from AWS Secrets Manager.

For more information about this policy, including the JSON policy document, see [AmazonRDSDataFullAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSDataFullAccess.html) in the *AWS Managed Policy Reference Guide*.

## AWS managed policy: AmazonRDSEnhancedMonitoringRole
<a name="rds-security-iam-awsmanpol-AmazonRDSEnhancedMonitoringRole"></a>

This policy provides access to Amazon CloudWatch Logs for Amazon RDS Enhanced Monitoring.

**Permissions details**

This policy includes the following permissions:
+ `logs` – Allows principals to create CloudWatch Logs log groups and retention policies, and to create and describe CloudWatch Logs log streams of log groups. It also allows principals to put and get CloudWatch Logs log events.

For more information about this policy, including the JSON policy document, see [AmazonRDSEnhancedMonitoringRole](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSEnhancedMonitoringRole.html) in the *AWS Managed Policy Reference Guide*.

## AWS managed policy: AmazonRDSPerformanceInsightsReadOnly
<a name="rds-security-iam-awsmanpol-AmazonRDSPerformanceInsightsReadOnly"></a>

This policy provides read-only access to Amazon RDS Performance Insights for Amazon RDS DB instances and Amazon Aurora DB clusters.

This policy now includes `Sid` (statement ID) as an identifier for the policy statement. 

**Permissions details**

This policy includes the following permissions:
+ `rds` – Allows principals to describe Amazon RDS DB instances and Amazon Aurora DB clusters.
+ `pi` – Allows principals to make calls to the Amazon RDS Performance Insights API and access Performance Insights metrics.

For more information about this policy, including the JSON policy document, see [AmazonRDSPerformanceInsightsReadOnly](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSPerformanceInsightsReadOnly.html) in the *AWS Managed Policy Reference Guide*.

## AWS managed policy: AmazonRDSPerformanceInsightsFullAccess
<a name="rds-security-iam-awsmanpol-AmazonRDSPerformanceInsightsFullAccess"></a>

This policy provides full access to Amazon RDS Performance Insights for Amazon RDS DB instances and Amazon Aurora DB clusters.

This policy now includes `Sid` (statement ID) as an identifier for the policy statement. 

**Permissions details**

This policy includes the following permissions:
+ `rds` – Allows principals to describe Amazon RDS DB instances and Amazon Aurora DB clusters.
+ `pi` – Allows principals to make calls to the Amazon RDS Performance Insights API, and to create, view, and delete performance analysis reports.
+ `cloudwatch` – Allows principals to list all the Amazon CloudWatch metrics, and get metric data and statistics.

For more information about this policy, including the JSON policy document, see [AmazonRDSPerformanceInsightsFullAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSPerformanceInsightsFullAccess.html) in the *AWS Managed Policy Reference Guide*.

## AWS managed policy: AmazonRDSDirectoryServiceAccess
<a name="rds-security-iam-awsmanpol-AmazonRDSDirectoryServiceAccess"></a>

This policy allows Amazon RDS to make calls to the Directory Service.

**Permissions details**

This policy includes the following permission:
+ `ds` – Allows principals to describe Directory Service directories and control authorization to Directory Service directories.

For more information about this policy, including the JSON policy document, see [AmazonRDSDirectoryServiceAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSDirectoryServiceAccess.html) in the *AWS Managed Policy Reference Guide*.

## AWS managed policy: AmazonRDSServiceRolePolicy
<a name="rds-security-iam-awsmanpol-AmazonRDSServiceRolePolicy"></a>

You can't attach the `AmazonRDSServiceRolePolicy` policy to your IAM entities. This policy is attached to a service-linked role that allows Amazon RDS to perform actions on your behalf. For more information, see [Service-linked role permissions for Amazon Aurora](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions).

## AWS managed policy: AmazonRDSPreviewServiceRolePolicy
<a name="rds-security-iam-awsmanpol-AmazonRDSPreviewServiceRolePolicy"></a>

You shouldn't attach `AmazonRDSPreviewServiceRolePolicy` to your IAM entities. This policy is attached to a service-linked role that allows Amazon RDS to call AWS services on behalf of your RDS DB resources. For more information, see [Service-linked role for Amazon RDS Preview](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-rdspreview). 

**Permissions details**

This policy includes the following permissions:
+ `ec2` – Allows principals to describe Availability Zones and networking resources.
+ `secretsmanager` – Allows principals to get the value of a secret from AWS Secrets Manager.
+ `cloudwatch`, `logs` – Allows Amazon RDS to upload DB instance metrics and logs to CloudWatch through the CloudWatch agent.

For more information about this policy, including the JSON policy document, see [AmazonRDSPreviewServiceRolePolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSPreviewServiceRolePolicy.html) in the *AWS Managed Policy Reference Guide*.

## AWS managed policy: AmazonRDSBetaServiceRolePolicy
<a name="rds-security-iam-awsmanpol-AmazonRDSBetaServiceRolePolicy"></a>

You shouldn't attach `AmazonRDSBetaServiceRolePolicy` to your IAM entities. This policy is attached to a service-linked role that allows Amazon RDS to call AWS services on behalf of your RDS DB resources. For more information, see [Service-linked role permissions for Amazon RDS Beta](UsingWithRDS.IAM.ServiceLinkedRoles.md#slr-permissions-rdsbeta).

**Permissions details**

This policy includes the following permissions:
+ `ec2` – Allows Amazon RDS to perform backup operations on the DB instance that provide point-in-time restore capabilities.
+ `secretsmanager` – Allows Amazon RDS to manage DB instance specific secrets created by Amazon RDS.
+ `cloudwatch`, `logs` – Allows Amazon RDS to upload DB instance metrics and logs to CloudWatch through the CloudWatch agent.

For more information about this policy, including the JSON policy document, see [AmazonRDSBetaServiceRolePolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSBetaServiceRolePolicy.html) in the *AWS Managed Policy Reference Guide*.

# Amazon RDS updates to AWS managed policies
<a name="rds-manpol-updates"></a>

View details about updates to AWS managed policies for Amazon RDS since this service began tracking these changes. For automatic alerts about changes to this page, subscribe to the RSS feed on the Amazon RDS [Document history](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/WhatsNew.html) page.




| Change | Description | Date | 
| --- | --- | --- | 
| [AWS managed policy: AmazonRDSPreviewServiceRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSPreviewServiceRolePolicy) – Update to existing policy |  Amazon RDS removed the `sns:Publish` permission from the `AmazonRDSPreviewServiceRolePolicy` of the `AWSServiceRoleForRDSPreview` service-linked role. For more information, see [AWS managed policy: AmazonRDSPreviewServiceRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSPreviewServiceRolePolicy). | August 7, 2024 | 
| [AWS managed policy: AmazonRDSBetaServiceRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSBetaServiceRolePolicy) – Update to existing policy |  Amazon RDS removed the `sns:Publish` permission from the `AmazonRDSBetaServiceRolePolicy` of the `AWSServiceRoleForRDSBeta` service-linked role. For more information, see [AWS managed policy: AmazonRDSBetaServiceRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSBetaServiceRolePolicy).  | August 7, 2024 | 
| [AWS managed policy: AmazonRDSServiceRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSServiceRolePolicy) – Update to existing policy |  Amazon RDS removed the `sns:Publish` permission from the `AmazonRDSServiceRolePolicy` of the `AWSServiceRoleForRDS` service-linked role. For more information, see [AWS managed policy: AmazonRDSServiceRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSServiceRolePolicy).  | July 2, 2024 | 
| [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md) – Update to existing policy |  Amazon RDS added a new permission to the `AmazonRDSCustomServiceRolePolicy` of the `AWSServiceRoleForRDSCustom` service-linked role to allow RDS Custom for SQL Server to modify the underlying database host instance type. RDS also added the `ec2:DescribeInstanceTypes` permission to get instance type information for the database host. For more information, see [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md).  | April 8, 2024 | 
|  [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md) – New policy  | Amazon RDS added a new managed policy named `AmazonRDSCustomInstanceProfileRolePolicy` to allow RDS Custom to perform automation actions and database management tasks through an EC2 instance profile. For more information, see [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md). | February 27, 2024 | 
|  [Service-linked role permissions for Amazon Aurora](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy | Amazon RDS added new statement IDs to the `AmazonRDSServiceRolePolicy` of the `AWSServiceRoleForRDS` service-linked role. For more information, see [Service-linked role permissions for Amazon Aurora](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions).  |  January 19, 2024  | 
|  [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md) – Update to existing policies  |  The `AmazonRDSPerformanceInsightsReadOnly` and `AmazonRDSPerformanceInsightsFullAccess` managed policies now include `Sid` (statement ID) as an identifier in the policy statement. For more information, see [AWS managed policy: AmazonRDSPerformanceInsightsReadOnly](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSPerformanceInsightsReadOnly) and [AWS managed policy: AmazonRDSPerformanceInsightsFullAccess](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSPerformanceInsightsFullAccess).  |  October 23, 2023  | 
|  [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md) – Update to existing policy  |  Amazon RDS added new permissions to the `AmazonRDSFullAccess` managed policy. The permissions allow you to generate, view, and delete the performance analysis report for a time period. For more information about configuring access policies for Performance Insights, see [Configuring access policies for Performance Insights](USER_PerfInsights.access-control.md).  |  August 17, 2023  | 
|  [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md) – New policy and update to existing policy  |  Amazon RDS added new permissions to the `AmazonRDSPerformanceInsightsReadOnly` managed policy and added a new managed policy named `AmazonRDSPerformanceInsightsFullAccess`. These permissions allow you to analyze Performance Insights for a time period, view the analysis results along with recommendations, and delete the reports. For more information about configuring access policies for Performance Insights, see [Configuring access policies for Performance Insights](USER_PerfInsights.access-control.md).  |  August 16, 2023  | 
|  [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md) – Update to an existing policy  |  Amazon RDS added a new Amazon CloudWatch namespace `ListMetrics` to `AmazonRDSFullAccess` and `AmazonRDSReadOnlyAccess`. This namespace is required for Amazon RDS to list specific resource usage metrics. For more information, see [Overview of managing access permissions to your CloudWatch resources](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-access-control-overview-cw.html) in the *Amazon CloudWatch User Guide*.  |  April 4, 2023  | 
|  [Service-linked role permissions for Amazon Aurora](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy  |  Amazon RDS added new permissions to the `AmazonRDSServiceRolePolicy` of the `AWSServiceRoleForRDS` service-linked role for integration with AWS Secrets Manager. RDS requires integration with Secrets Manager for managing master user passwords in Secrets Manager. The secret uses a reserved naming convention and restricts customer updates. For more information, see [Password management with Amazon Aurora and AWS Secrets Manager](rds-secrets-manager.md).  |  December 22, 2022  | 
|  [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md) – Update to existing policies  |  Amazon RDS added a new permission to the `AmazonRDSFullAccess` and `AmazonRDSReadOnlyAccess` managed policies to allow you to turn on Amazon DevOps Guru in the RDS console. This permission is required to check whether DevOps Guru is turned on. For more information, see [Configuring IAM access policies for DevOps Guru for RDS](devops-guru-for-rds.md#devops-guru-for-rds.configuring.access).  |  December 19, 2022  | 
|  [Service-linked role permissions for Amazon Aurora](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy  |  Amazon RDS added a new Amazon CloudWatch namespace to `AmazonRDSPreviewServiceRolePolicy` for `PutMetricData`. This namespace is required for Amazon RDS to publish resource usage metrics. For more information, see [Using condition keys to limit access to CloudWatch namespaces](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-cw-condition-keys-namespace.html) in the *Amazon CloudWatch User Guide*.  |  June 7, 2022  | 
|  [Service-linked role permissions for Amazon Aurora](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy  |  Amazon RDS added a new Amazon CloudWatch namespace to `AmazonRDSBetaServiceRolePolicy` for `PutMetricData`. This namespace is required for Amazon RDS to publish resource usage metrics. For more information, see [Using condition keys to limit access to CloudWatch namespaces](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-cw-condition-keys-namespace.html) in the *Amazon CloudWatch User Guide*.  |  June 7, 2022  | 
|  [Service-linked role permissions for Amazon Aurora](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy  |  Amazon RDS added a new Amazon CloudWatch namespace to `AWSServiceRoleForRDS` for `PutMetricData`. This namespace is required for Amazon RDS to publish resource usage metrics. For more information, see [Using condition keys to limit access to CloudWatch namespaces](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-cw-condition-keys-namespace.html) in the *Amazon CloudWatch User Guide*.  |  April 22, 2022  | 
|  [AWS managed policies for Amazon RDS](rds-security-iam-awsmanpol.md) – New policy  |  Amazon RDS added a new managed policy named `AmazonRDSPerformanceInsightsReadOnly` to allow Amazon RDS to call AWS services on behalf of your DB instances. For more information about configuring access policies for Performance Insights, see [Configuring access policies for Performance Insights](USER_PerfInsights.access-control.md)  |  March 10, 2022  | 
|  [Service-linked role permissions for Amazon Aurora](UsingWithRDS.IAM.ServiceLinkedRoles.md#service-linked-role-permissions) – Update to an existing policy  |  Amazon RDS added new Amazon CloudWatch namespaces to `AWSServiceRoleForRDS` for `PutMetricData`. These namespaces are required for Amazon DocumentDB (with MongoDB compatibility) and Amazon Neptune to publish CloudWatch metrics. For more information, see [Using condition keys to limit access to CloudWatch namespaces](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-cw-condition-keys-namespace.html) in the *Amazon CloudWatch User Guide*.  |  March 4, 2022  | 
|  Amazon RDS started tracking changes  |  Amazon RDS started tracking changes for its AWS managed policies.  |  October 26, 2021  | 

# Preventing cross-service confused deputy problems
<a name="cross-service-confused-deputy-prevention"></a>

The *confused deputy problem* is a security issue where an entity that doesn't have permission to perform an action can coerce a more-privileged entity to perform the action. In AWS, cross-service impersonation can result in the confused deputy problem. 

Cross-service impersonation can occur when one service (the *calling service*) calls another service (the *called service*). The calling service can be manipulated to use its permissions to act on another customer's resources in a way that it shouldn't have permission to access. To prevent this, AWS provides tools that can help you protect your data for all services with service principals that have been given access to resources in your account. For more information, see [The confused deputy problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html) in the *IAM User Guide*.

To limit the permissions that Amazon RDS gives another service for a specific resource, we recommend using the [`aws:SourceArn`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn) and [`aws:SourceAccount`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceaccount) global condition context keys in resource policies. 

In some cases, the `aws:SourceArn` value doesn't contain the account ID, for example when you use the Amazon Resource Name (ARN) for an Amazon S3 bucket. In these cases, make sure to use both global condition context keys to limit permissions. When you use both keys in the same policy statement and the `aws:SourceArn` value does contain the account ID, the `aws:SourceAccount` value and the account in the `aws:SourceArn` must use the same account ID. Use `aws:SourceArn` if you want only one resource to be associated with the cross-service access. Use `aws:SourceAccount` if you want to allow any resource in the specified AWS account to be associated with the cross-service access.

Make sure that the value of `aws:SourceArn` is an ARN for an Amazon RDS resource type. For more information, see [Amazon Resource Names (ARNs) in Amazon RDS](USER_Tagging.ARN.md).

The most effective way to protect against the confused deputy problem is to use the `aws:SourceArn` global condition context key with the full ARN of the resource. In some cases, you might not know the full ARN of the resource or you might be specifying multiple resources. In these cases, use the `aws:SourceArn` global condition context key with wildcards (`*`) for the unknown portions of the ARN. An example is `arn:aws:rds:*:123456789012:*`. 
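The wildcard matching that `ArnLike` performs, and the relationship between `aws:SourceArn` and `aws:SourceAccount`, can be approximated with shell-style pattern matching. This is an illustrative sketch only; IAM's actual matcher also handles `?` and policy variables:

```python
from fnmatch import fnmatchcase

def arn_like(pattern, arn):
    """Approximate IAM's ArnLike operator with case-sensitive
    shell-style wildcard matching."""
    return fnmatchcase(arn, pattern)

def source_account(arn):
    """The account ID is the fifth colon-separated field of an ARN."""
    return arn.split(":")[4]

arn = "arn:aws:rds:us-east-1:123456789012:db:mydbinstance"
assert arn_like("arn:aws:rds:*:123456789012:*", arn)
assert not arn_like("arn:aws:rds:*:999999999999:*", arn)

# When both condition keys appear in one statement, the account in
# aws:SourceArn should match the aws:SourceAccount value.
assert source_account(arn) == "123456789012"

# S3 bucket ARNs have an empty account field, which is why the text
# above recommends also using aws:SourceAccount in that case.
assert source_account("arn:aws:s3:::amzn-s3-demo-bucket") == ""
```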

The following example shows how you can use the `aws:SourceArn` and `aws:SourceAccount` global condition context keys in Amazon RDS to prevent the confused deputy problem.

------
#### [ JSON ]


```
{
  "Version":"2012-10-17",
  "Statement": {
    "Sid": "ConfusedDeputyPreventionExamplePolicy",
    "Effect": "Allow",
    "Principal": {
      "Service": "rds.amazonaws.com"
    },
    "Action": "sts:AssumeRole",
    "Condition": {
      "ArnLike": {
        "aws:SourceArn": "arn:aws:rds:us-east-1:123456789012:db:mydbinstance"
      },
      "StringEquals": {
        "aws:SourceAccount": "123456789012"
      }
    }
  }
}
```

------

For more examples of policies that use the `aws:SourceArn` and `aws:SourceAccount` global condition context keys, see the following sections:
+ [Granting permissions to publish notifications to an Amazon SNS topic](USER_Events.GrantingPermissions.md)
+ [Setting up access to an Amazon S3 bucket](USER_PostgreSQL.S3Import.AccessPermission.md) (PostgreSQL import)
+ [Setting up access to an Amazon S3 bucket](postgresql-s3-export-access-bucket.md) (PostgreSQL export)

# IAM database authentication
<a name="UsingWithRDS.IAMDBAuth"></a>

You can authenticate to your DB cluster using AWS Identity and Access Management (IAM) database authentication. IAM database authentication works with Aurora MySQL and Aurora PostgreSQL. With this authentication method, you don't need to use a password when you connect to a DB cluster. Instead, you use an authentication token.

An *authentication token* is a unique string of characters that Amazon Aurora generates on request. Authentication tokens are generated using AWS Signature Version 4. Each token has a lifetime of 15 minutes. You don't need to store user credentials in the database, because authentication is managed externally using IAM. You can also still use standard database authentication. The token is only used for authentication and doesn't affect the session after it is established.

IAM database authentication provides the following benefits:
+ Network traffic to and from the database is encrypted using Secure Socket Layer (SSL) or Transport Layer Security (TLS). For more information about using SSL/TLS with Amazon Aurora, see [Using SSL/TLS to encrypt a connection to a DB cluster](UsingWithRDS.SSL.md).
+ You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB cluster.
+ For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security.

In general, consider using IAM database authentication when your applications create fewer than 200 connections per second, and you don't want to manage usernames and passwords directly in your application code.

The Amazon Web Services (AWS) JDBC Driver supports IAM database authentication. For more information, see [AWS IAM Authentication Plugin](https://github.com/aws/aws-advanced-jdbc-wrapper/blob/main/docs/using-the-jdbc-driver/using-plugins/UsingTheIamAuthenticationPlugin.md) in the [Amazon Web Services (AWS) JDBC Driver GitHub repository](https://github.com/aws/aws-advanced-jdbc-wrapper).

The Amazon Web Services (AWS) Python Driver supports IAM database authentication. For more information, see [AWS IAM Authentication Plugin](https://github.com/aws/aws-advanced-python-wrapper/blob/main/docs/using-the-python-driver/using-plugins/UsingTheIamAuthenticationPlugin.md) in the [Amazon Web Services (AWS) Python Driver GitHub repository](https://github.com/aws/aws-advanced-python-wrapper).

The following topics describe how to set up IAM database authentication:
+ [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md)
+ [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md)
+ [Creating a database account using IAM authentication](UsingWithRDS.IAMDBAuth.DBAccounts.md)
+ [Connecting to your DB cluster using IAM authentication](UsingWithRDS.IAMDBAuth.Connecting.md) 

## Region and version availability
<a name="UsingWithRDS.IAMDBAuth.Availability"></a>

 Feature availability and support varies across specific versions of each Aurora database engine, and across AWS Regions. For more information on version and Region availability with Aurora and IAM database authentication, see [Supported Regions and Aurora DB engines for IAM database authentication](Concepts.Aurora_Fea_Regions_DB-eng.Feature.IAMdbauth.md). 

For Aurora MySQL, all supported DB instance classes support IAM database authentication, except for db.t2.small and db.t3.small. For information about the supported DB instance classes, see [Supported DB engines for DB instance classes](Concepts.DBInstanceClass.SupportAurora.md). 

## CLI and SDK support
<a name="UsingWithRDS.IAMDBAuth.cli-sdk"></a>

IAM database authentication is available for the [AWS CLI](https://docs.aws.amazon.com/cli/latest/reference/rds/generate-db-auth-token.html) and for the following language-specific AWS SDKs:
+ [AWS SDK for .NET](https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/RDS/TRDSAuthTokenGenerator.html)
+ [AWS SDK for C++](https://docs.aws.amazon.com/sdk-for-cpp/latest/api/class_aws_1_1_r_d_s_1_1_r_d_s_client.html#ae134ffffed5d7672f6156d324e7bd392)
+ [AWS SDK for Go](https://docs.aws.amazon.com/sdk-for-go/api/service/rds/#pkg-overview)
+ [AWS SDK for Java](https://docs.aws.amazon.com/sdk-for-java/latest/reference/software/amazon/awssdk/services/rds/RdsUtilities.html)
+ [AWS SDK for JavaScript](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/modules/_aws_sdk_rds_signer.html)
+ [AWS SDK for PHP](https://docs.aws.amazon.com/aws-sdk-php/v3/api/class-Aws.Rds.AuthTokenGenerator.html)
+ [AWS SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html#RDS.Client.generate_db_auth_token)
+ [AWS SDK for Ruby](https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/RDS/AuthTokenGenerator.html)

## Limitations for IAM database authentication
<a name="UsingWithRDS.IAMDBAuth.Limitations"></a>

When using IAM database authentication, the following limitations apply:
+ Currently, IAM database authentication doesn't support all global condition context keys.

  For more information about global condition context keys, see [AWS global condition context keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html) in the *IAM User Guide*.
+ For PostgreSQL, if the IAM role (`rds_iam`) is added to a user (including the RDS master user), IAM authentication takes precedence over password authentication, so the user must log in as an IAM user.
+ You cannot use a custom Route 53 DNS record instead of the DB cluster endpoint to generate the authentication token.
+ CloudWatch and CloudTrail don't log IAM authentication. Because authentication tokens are generated locally by the AWS CLI or an AWS SDK, these services don't track `generate-db-auth-token` calls that authorize an IAM identity to connect to the database.
+ IAM DB authentication requires compute resources on the database cluster. You must have between 300 and 1000 MiB extra memory on your database for reliable connectivity. To see the memory needed for your workload, compare the RES column for RDS processes in the Enhanced Monitoring processlist before and after enabling IAM DB authentication. See [Viewing OS metrics in the RDS console](USER_Monitoring.OS.Viewing.md).

  If you are using a burstable class instance, avoid running out of memory by reducing the memory used by other parameters like buffers and cache by the same amount.
+ For Aurora MySQL, you can't use password-based authentication for a database user that you configure with IAM authentication.
+ IAM DB authentication is not supported for RDS on Outposts for any engine.

## Recommendations for IAM database authentication
<a name="UsingWithRDS.IAMDBAuth.ConnectionsPerSecond"></a>

We recommend the following when using IAM database authentication:
+ Use IAM database authentication when your application requires fewer than 200 new IAM database authentication connections per second.

  The database engines that work with Amazon Aurora don't impose any limits on authentication attempts per second. However, when you use IAM database authentication, your application must generate an authentication token. Your application then uses that token to connect to the DB cluster. If you exceed the limit of maximum new connections per second, then the extra overhead of IAM database authentication can cause connection throttling. 

  Consider using connection pooling in your applications to mitigate constant connection creation. This can reduce the overhead from IAM DB authentication and allow your applications to reuse existing connections. Alternatively, consider using RDS Proxy for these use cases. RDS Proxy has additional costs. See [RDS Proxy pricing](https://aws.amazon.com/rds/proxy/pricing/).
+ The size of an IAM database authentication token depends on many factors, including the number of IAM tags, the IAM service policies, ARN lengths, and other IAM and database properties. The minimum size of this token is generally about 1 KB, but it can be larger. Because this token is used as the password in the connection string when you use IAM authentication, make sure that your database driver (for example, ODBC) and any tools don't limit or truncate the token because of its size. A truncated token causes the authentication validation performed by the database and IAM to fail.
+ If you are using temporary credentials when creating an IAM database authentication token, the temporary credentials must still be valid when using the IAM database authentication token to make a connection request.
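Related to token validity: an application can estimate when a token will stop being accepted by reading the token's own query parameters. The helper below is hypothetical (it is not part of any AWS SDK) and assumes the `X-Amz-Date` and `X-Amz-Expires` parameters that appear in generated tokens:

```python
import datetime
from urllib.parse import parse_qs, urlsplit

def token_expiry(token: str) -> datetime.datetime:
    """Return the UTC time at which an IAM auth token expires.

    Hypothetical helper: parses the X-Amz-Date (issue time) and
    X-Amz-Expires (lifetime in seconds) parameters out of the token.
    """
    # The token has the form "host:port/?<query>"; prepend a scheme so
    # urlsplit can isolate the query string.
    qs = parse_qs(urlsplit("https://" + token).query)
    issued = datetime.datetime.strptime(
        qs["X-Amz-Date"][0], "%Y%m%dT%H%M%SZ"
    ).replace(tzinfo=datetime.timezone.utc)
    return issued + datetime.timedelta(seconds=int(qs["X-Amz-Expires"][0]))
```

A connection pool could use such a check to discard cached tokens before they lapse, rather than discovering expiry through a failed login.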

## Unsupported AWS global condition context keys
<a name="UsingWithRDS.IAMDBAuth.GlobalContextKeys"></a>

 IAM database authentication does not support the following subset of AWS global condition context keys. 
+ `aws:Referer`
+ `aws:SourceIp`
+ `aws:SourceVpc`
+ `aws:SourceVpce`
+ `aws:UserAgent`
+ `aws:VpcSourceIp`

For more information, see [AWS global condition context keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html) in the *IAM User Guide*. 

# Enabling and disabling IAM database authentication
<a name="UsingWithRDS.IAMDBAuth.Enabling"></a>

By default, IAM database authentication is disabled on DB clusters. You can enable or disable IAM database authentication using the AWS Management Console, AWS CLI, or the API.

You can enable IAM database authentication when you perform one of the following actions:
+ To create a new DB cluster with IAM database authentication enabled, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md).
+ To modify a DB cluster to enable IAM database authentication, see [Modifying an Amazon Aurora DB cluster](Aurora.Modifying.md).
+ To restore a DB cluster from a snapshot with IAM database authentication enabled, see [Restoring from a DB cluster snapshot](aurora-restore-snapshot.md).
+ To restore a DB cluster to a point in time with IAM database authentication enabled, see [Restoring a DB cluster to a specified time](aurora-pitr.md).

## Console
<a name="UsingWithRDS.IAMDBAuth.Enabling.Console"></a>

Each creation or modification workflow has a **Database authentication** section, where you can enable or disable IAM database authentication. In that section, choose **Password and IAM database authentication** to enable IAM database authentication.

**To enable or disable IAM database authentication for an existing DB cluster**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the DB cluster that you want to modify.
**Note**  
You can only enable IAM authentication if all DB instances in the DB cluster are compatible with IAM. Check the compatibility requirements in [Region and version availability](UsingWithRDS.IAMDBAuth.md#UsingWithRDS.IAMDBAuth.Availability). 

1. Choose **Modify**.

1. In the **Database authentication** section, choose **IAM database authentication** to enable IAM database authentication. Choose **Password authentication** or **Password and Kerberos authentication** to disable IAM authentication.

1. You can also choose to enable publishing IAM DB authentication logs to CloudWatch Logs. Under **Log exports**, choose the **iam-db-auth-error log** option. Publishing your logs to CloudWatch Logs consumes storage and you incur charges for that storage. Be sure to delete any CloudWatch Logs that you no longer need.

1. Choose **Continue**.

1. To apply the changes immediately, choose **Immediately** in the **Scheduling of modifications** section.

1. Choose **Modify cluster**.

## AWS CLI
<a name="UsingWithRDS.IAMDBAuth.Enabling.CLI"></a>

To create a new DB cluster with IAM authentication by using the AWS CLI, use the [create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html) command. Specify the `--enable-iam-database-authentication` option.

To update an existing DB cluster to have or not have IAM authentication, use the AWS CLI command [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html). Specify either the `--enable-iam-database-authentication` or `--no-enable-iam-database-authentication` option, as appropriate.

**Note**  
You can only enable IAM authentication if all DB instances in the DB cluster are compatible with IAM. Check the compatibility requirements in [Region and version availability](UsingWithRDS.IAMDBAuth.md#UsingWithRDS.IAMDBAuth.Availability). 

By default, Aurora performs the modification during the next maintenance window. If you want to override this and enable IAM DB authentication as soon as possible, use the `--apply-immediately` parameter. 

If you are restoring a DB cluster, use one of the following AWS CLI commands:
+ [restore-db-cluster-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-to-point-in-time.html)
+ [restore-db-cluster-from-db-snapshot](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-from-db-snapshot.html)

The IAM database authentication setting defaults to that of the source snapshot. To change this setting, set the `--enable-iam-database-authentication` or `--no-enable-iam-database-authentication` option, as appropriate.

## RDS API
<a name="UsingWithRDS.IAMDBAuth.Enabling.API"></a>

To create a new DB cluster with IAM authentication by using the API, use the API operation [CreateDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html). Set the `EnableIAMDatabaseAuthentication` parameter to `true`.

To update an existing DB cluster to have IAM authentication, use the API operation [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html). Set the `EnableIAMDatabaseAuthentication` parameter to `true` to enable IAM authentication, or `false` to disable it.

**Note**  
You can only enable IAM authentication if all DB instances in the DB cluster are compatible with IAM. Check the compatibility requirements in [Region and version availability](UsingWithRDS.IAMDBAuth.md#UsingWithRDS.IAMDBAuth.Availability). 

If you are restoring a DB cluster, use one of the following API operations:
+ [RestoreDBClusterFromSnapshot](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromSnapshot.html)
+ [RestoreDBClusterToPointInTime](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterToPointInTime.html)

The IAM database authentication setting defaults to that of the source snapshot. To change this setting, set the `EnableIAMDatabaseAuthentication` parameter to `true` to enable IAM authentication, or `false` to disable it.

# Creating and using an IAM policy for IAM database access
<a name="UsingWithRDS.IAMDBAuth.IAMPolicy"></a>

To allow a user or role to connect to your DB cluster, you must create an IAM policy. After that, you attach the policy to a permissions set or role.

**Note**  
To learn more about IAM policies, see [Identity and access management for Amazon Aurora](UsingWithRDS.IAM.md).

The following example policy allows a user to connect to a DB cluster using IAM database authentication.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds-db:connect"
            ],
            "Resource": [
                "arn:aws:rds-db:us-east-2:111122223333:dbuser:cluster-ABCDEFGHIJKL01234/db_user"
            ]
        }
    ]
}
```

------

**Important**  
A user with administrator permissions can access DB clusters without explicit permissions in an IAM policy. If you want to restrict administrator access to DB clusters, you can create an IAM role with the appropriate, lesser-privileged permissions and assign it to the administrator.

**Note**  
Don't confuse the `rds-db:` prefix with other RDS API operation prefixes that begin with `rds:`. You use the `rds-db:` prefix and the `rds-db:connect` action only for IAM database authentication. They aren't valid in any other context. 

The example policy includes a single statement with the following elements:
+ `Effect` – Specify `Allow` to grant access to the DB cluster. If you don't explicitly allow access, then access is denied by default.
+ `Action` – Specify `rds-db:connect` to allow connections to the DB cluster.
+ `Resource` – Specify an Amazon Resource Name (ARN) that describes one database account in one DB cluster. The ARN format is as follows.

  ```
  arn:aws:rds-db:region:account-id:dbuser:DbClusterResourceId/db-user-name
  ```

  In this format, replace the following:
  + `region` is the AWS Region for the DB cluster. In the example policy, the AWS Region is `us-east-2`.
  + `account-id` is the AWS account number for the DB cluster. In the example policy, the account number is `111122223333`. The user must be in the same account as the account for the DB cluster.

    To perform cross-account access, create an IAM role with the policy shown above in the account for the DB cluster and allow your other account to assume the role. 
  + `DbClusterResourceId` is the identifier for the DB cluster. This identifier is unique to an AWS Region and never changes. In the example policy, the identifier is `cluster-ABCDEFGHIJKL01234`.

    To find a DB cluster resource ID in the AWS Management Console for Amazon Aurora, choose the DB cluster to see its details. Then choose the **Configuration** tab. The **Resource ID** is shown in the **Configuration** section.

    Alternatively, you can use the following AWS CLI command to list the identifiers and resource IDs for all of your DB clusters in the current AWS Region.

    ```
    aws rds describe-db-clusters --query "DBClusters[*].[DBClusterIdentifier,DbClusterResourceId]"
    ```
**Note**  
If you are connecting to a database through RDS Proxy, specify the proxy resource ID, such as `prx-ABCDEFGHIJKL01234`. For information about using IAM database authentication with RDS Proxy, see [Connecting to a database using IAM authentication](rds-proxy-connecting.md#rds-proxy-connecting-iam).
  + `db-user-name` is the name of the database account to associate with IAM authentication. In the example policy, the database account is `db_user`.
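The ARN format above is plain string assembly; a throwaway helper (hypothetical, not an AWS API) makes the pieces explicit:

```python
def rds_db_arn(region: str, account_id: str, resource_id: str, db_user: str) -> str:
    """Assemble the rds-db ARN used in the policy's Resource element.

    Hypothetical convenience function for illustration; a real policy
    simply contains the literal ARN string.
    """
    return f"arn:aws:rds-db:{region}:{account_id}:dbuser:{resource_id}/{db_user}"
```

For example, `rds_db_arn("us-east-2", "111122223333", "cluster-ABCDEFGHIJKL01234", "db_user")` reproduces the `Resource` value in the first example policy.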

You can construct other ARNs to support various access patterns. The following policy allows access to two different database accounts in a DB cluster.

------
#### [ JSON ]


```
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
             "rds-db:connect"
         ],
         "Resource": [
             "arn:aws:rds-db:us-east-2:123456789012:dbuser:cluster-ABCDEFGHIJKL01234/jane_doe",
             "arn:aws:rds-db:us-east-2:123456789012:dbuser:cluster-ABCDEFGHIJKL01234/mary_roe"
         ]
      }
   ]
}
```

------

The following policy uses the `*` wildcard character to match all DB clusters and database accounts for a particular AWS account and AWS Region. 

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds-db:connect"
            ],
            "Resource": [
                "arn:aws:rds-db:us-east-2:111122223333:dbuser:*/*"
            ]
        }
    ]
}
```

------

The following policy matches all of the DB clusters for a particular AWS account and AWS Region. However, the policy only grants access to DB clusters that have a `jane_doe` database account.

------
#### [ JSON ]


```
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
             "rds-db:connect"
         ],
         "Resource": [
             "arn:aws:rds-db:us-east-2:123456789012:dbuser:*/jane_doe"
         ]
      }
   ]
}
```

------

The user or role has access to only those databases that the database user does. For example, suppose that your DB cluster has a database named *dev*, and another database named *test*. If the database user `jane_doe` has access only to *dev*, any users or roles that access that DB cluster with the `jane_doe` user also have access only to *dev*. This access restriction is also true for other database objects, such as tables, views, and so on.

An administrator must create IAM policies that grant entities permission to perform specific API operations on the specified resources they need. The administrator must then attach those policies to the permission sets or roles that require those permissions. For examples of policies, see [Identity-based policy examples for Amazon Aurora](security_iam_id-based-policy-examples.md).

## Attaching an IAM policy to a permission set or role
<a name="UsingWithRDS.IAMDBAuth.IAMPolicy.Attaching"></a>

After you create an IAM policy to allow database authentication, you need to attach the policy to a permission set or role. For a tutorial on this topic, see [Create and attach your first customer managed policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_managed-policies.html) in the *IAM User Guide*.

As you work through the tutorial, you can use one of the policy examples shown in this section as a starting point and tailor it to your needs. At the end of the tutorial, you have a permission set with an attached policy that can make use of the `rds-db:connect` action.

**Note**  
You can map multiple permission sets or roles to the same database user account. For example, suppose that your IAM policy specified the following resource ARN.  

```
arn:aws:rds-db:us-east-2:123456789012:dbuser:cluster-12ABC34DEFG5HIJ6KLMNOP78QR/jane_doe
```
If you attach the policy to *Jane*, *Bob*, and *Diego*, then each of those users can connect to the specified DB cluster using the `jane_doe` database account.

# Creating a database account using IAM authentication
<a name="UsingWithRDS.IAMDBAuth.DBAccounts"></a>

With IAM database authentication, you don't need to assign database passwords to the user accounts you create. If you remove a user that is mapped to a database account, you should also remove the database account with the `DROP USER` statement.

**Note**  
The user name used for IAM authentication must match the case of the user name in the database.

**Topics**
+ [Using IAM authentication with Aurora MySQL](#UsingWithRDS.IAMDBAuth.DBAccounts.MySQL)
+ [Using IAM authentication with Aurora PostgreSQL](#UsingWithRDS.IAMDBAuth.DBAccounts.PostgreSQL)

## Using IAM authentication with Aurora MySQL
<a name="UsingWithRDS.IAMDBAuth.DBAccounts.MySQL"></a>

With Aurora MySQL, authentication is handled by `AWSAuthenticationPlugin`—an AWS-provided plugin that works seamlessly with IAM to authenticate your users. Connect to the DB cluster as the master user or a different user who can create users and grant privileges. After connecting, issue the `CREATE USER` statement, as shown in the following example.

```
CREATE USER 'jane_doe' IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS'; 
```

The `IDENTIFIED WITH` clause allows Aurora MySQL to use the `AWSAuthenticationPlugin` to authenticate the database account (`jane_doe`). The `AS 'RDS'` clause refers to the authentication method. Make sure the specified database user name is the same as a resource in the IAM policy for IAM database access. For more information, see [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md). 

**Note**  
If you see the following message, it means that the AWS-provided plugin is not available for the current DB cluster.  
`ERROR 1524 (HY000): Plugin 'AWSAuthenticationPlugin' is not loaded`  
To troubleshoot this error, verify that you are using a supported configuration and that you have enabled IAM database authentication on your DB cluster. For more information, see [Region and version availability](UsingWithRDS.IAMDBAuth.md#UsingWithRDS.IAMDBAuth.Availability) and [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md).

After you create an account using `AWSAuthenticationPlugin`, you manage it in the same way as other database accounts. For example, you can modify account privileges with `GRANT` and `REVOKE` statements, or modify various account attributes with the `ALTER USER` statement. 

Database network traffic is encrypted using SSL/TLS when using IAM. To allow SSL connections, modify the user account with the following command.

```
ALTER USER 'jane_doe'@'%' REQUIRE SSL;     
```

 

## Using IAM authentication with Aurora PostgreSQL
<a name="UsingWithRDS.IAMDBAuth.DBAccounts.PostgreSQL"></a>

To use IAM authentication with Aurora PostgreSQL, connect to the DB cluster as the master user or a different user who can create users and grant privileges. After connecting, create database users and then grant them the `rds_iam` role as shown in the following example.

```
CREATE USER db_userx; 
GRANT rds_iam TO db_userx;
```

Make sure the specified database user name is the same as a resource in the IAM policy for IAM database access. For more information, see [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md). You must grant the `rds_iam` role to use IAM authentication. You can use nested memberships or indirect grants of the role as well. 

Note that a PostgreSQL database user can use either IAM or Kerberos authentication but not both, so this user can't also have the `rds_ad` role. This also applies to nested memberships. For more information, see [Step 7: Create PostgreSQL users for your Kerberos principals](postgresql-kerberos-setting-up.md#postgresql-kerberos-setting-up.create-logins).

# Connecting to your DB cluster using IAM authentication
<a name="UsingWithRDS.IAMDBAuth.Connecting"></a>

With IAM database authentication, you use an authentication token when you connect to your DB cluster. An *authentication token* is a string of characters that you use instead of a password. After you generate an authentication token, it's valid for 15 minutes before it expires. If you try to connect using an expired token, the connection request is denied.

Every authentication token must be accompanied by a valid signature, using AWS Signature Version 4. (For more information, see [Signature Version 4 signing process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) in the *AWS General Reference*.) The AWS CLI and the AWS SDKs, such as the AWS SDK for Java or the AWS SDK for Python (Boto3), can automatically sign each token you create.
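For intuition, the Signature Version 4 presigning steps can be sketched in Python with the standard library. This illustrates the signing mechanics only; it is not the SDKs' exact implementation, all values are placeholders, and in practice you should call `generate_db_auth_token`:

```python
import datetime
import hashlib
import hmac
import urllib.parse

def _sign(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def sketch_auth_token(host, port, user, access_key, secret_key, region, now=None):
    """Illustrative SigV4 presigning sketch for an rds-db connect token."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    service = "rds-db"
    scope = f"{datestamp}/{region}/{service}/aws4_request"

    # Query parameters carried by the token, in canonical (sorted) order.
    params = {
        "Action": "connect",
        "DBUser": user,
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": "900",          # tokens live for 15 minutes
        "X-Amz-SignedHeaders": "host",
    }
    canonical_qs = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items()))

    # Canonical request -> string to sign -> derived key -> signature.
    canonical_request = "\n".join([
        "GET", "/", canonical_qs, f"host:{host}:{port}\n", "host",
        hashlib.sha256(b"").hexdigest()])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest()])
    key = _sign(_sign(_sign(_sign(b"AWS4" + secret_key.encode(),
                datestamp), region), service), "aws4_request")
    signature = hmac.new(key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"{host}:{port}/?{canonical_qs}&X-Amz-Signature={signature}"
```

The shape of the result matches the token excerpt shown later in this section: the host and port, the `connect` action, the database user, and the `X-Amz-*` signing parameters.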

You can use an authentication token when you connect to Amazon Aurora from another AWS service, such as AWS Lambda. By using a token, you can avoid placing a password in your code. Alternatively, you can use an AWS SDK to programmatically create and sign an authentication token.

After you have a signed IAM authentication token, you can connect to an Aurora DB cluster. Following, you can find out how to do this using either a command line tool or an AWS SDK, such as the AWS SDK for Java or AWS SDK for Python (Boto3).

For more information, see the following blog posts:
+ [Use IAM authentication to connect with SQL Workbench/J to Aurora MySQL or Amazon RDS for MySQL](https://aws.amazon.com/blogs/database/use-iam-authentication-to-connect-with-sql-workbenchj-to-amazon-aurora-mysql-or-amazon-rds-for-mysql/)
+ [Using IAM authentication to connect with pgAdmin Amazon Aurora PostgreSQL or Amazon RDS for PostgreSQL](https://aws.amazon.com/blogs/database/using-iam-authentication-to-connect-with-pgadmin-amazon-aurora-postgresql-or-amazon-rds-for-postgresql/)

**Prerequisites**  
The following are prerequisites for connecting to your DB cluster using IAM authentication:
+ [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md)
+ [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md)
+ [Creating a database account using IAM authentication](UsingWithRDS.IAMDBAuth.DBAccounts.md)

**Topics**
+ [Connecting to your DB cluster using IAM authentication with the AWS drivers](IAMDBAuth.Connecting.Drivers.md)
+ [Connecting to your DB cluster using IAM authentication from the command line: AWS CLI and mysql client](UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.md)
+ [Connecting to your DB cluster using IAM authentication from the command line: AWS CLI and psql client](UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.PostgreSQL.md)
+ [Connecting to your DB cluster using IAM authentication and the AWS SDK for .NET](UsingWithRDS.IAMDBAuth.Connecting.NET.md)
+ [Connecting to your DB cluster using IAM authentication and the AWS SDK for Go](UsingWithRDS.IAMDBAuth.Connecting.Go.md)
+ [Connecting to your DB cluster using IAM authentication and the AWS SDK for Java](UsingWithRDS.IAMDBAuth.Connecting.Java.md)
+ [Connecting to your DB cluster using IAM authentication and the AWS SDK for Python (Boto3)](UsingWithRDS.IAMDBAuth.Connecting.Python.md)

# Connecting to your DB cluster using IAM authentication with the AWS drivers
<a name="IAMDBAuth.Connecting.Drivers"></a>

The AWS suite of drivers has been designed to provide support for faster switchover and failover times, and authentication with AWS Secrets Manager, AWS Identity and Access Management (IAM), and Federated Identity. The AWS drivers rely on monitoring DB cluster status and being aware of the cluster topology to determine the new writer. This approach reduces switchover and failover times to single-digit seconds, compared to tens of seconds for open-source drivers.

For more information on the AWS drivers, see the corresponding language driver for your [Aurora MySQL](Aurora.Connecting.md#Aurora.Connecting.JDBCDriverMySQL) or [Aurora PostgreSQL](Aurora.Connecting.md#Aurora.Connecting.AuroraPostgreSQL.Utilities) DB cluster.

# Connecting to your DB cluster using IAM authentication from the command line: AWS CLI and mysql client
<a name="UsingWithRDS.IAMDBAuth.Connecting.AWSCLI"></a>

You can connect from the command line to an Aurora DB cluster with the AWS CLI and `mysql` command line tool as described following.

**Prerequisites**  
The following are prerequisites for connecting to your DB cluster using IAM authentication:
+ [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md)
+ [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md)
+ [Creating a database account using IAM authentication](UsingWithRDS.IAMDBAuth.DBAccounts.md)

**Note**  
For information about connecting to your database using SQL Workbench/J with IAM authentication, see the blog post [Use IAM authentication to connect with SQL Workbench/J to Aurora MySQL or Amazon RDS for MySQL](https://aws.amazon.com/blogs/database/use-iam-authentication-to-connect-with-sql-workbenchj-to-amazon-aurora-mysql-or-amazon-rds-for-mysql/).

**Topics**
+ [Generating an IAM authentication token](#UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.AuthToken)
+ [Connecting to a DB cluster](#UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.Connect)

## Generating an IAM authentication token
<a name="UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.AuthToken"></a>

The following example shows how to get a signed authentication token using the AWS CLI.

```
aws rds generate-db-auth-token \
   --hostname rdsmysql.123456789012.us-west-2.rds.amazonaws.com \
   --port 3306 \
   --region us-west-2 \
   --username jane_doe
```

In the example, the parameters are as follows:
+ `--hostname` – The host name of the DB cluster that you want to access
+ `--port` – The port number used for connecting to your DB cluster
+ `--region` – The AWS Region where the DB cluster is running
+ `--username` – The database account that you want to access

The first several characters of the token look like the following.

```
rdsmysql.123456789012.us-west-2.rds.amazonaws.com:3306/?Action=connect&DBUser=jane_doe&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=900...
```
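The token shown above is a Signature Version 4 presigned request for the `rds-db` service's `connect` action. As an illustration of that structure, the following Python sketch assembles such a token from scratch using only the standard library. The host name, user name, and keys are placeholder values, and the sketch omits session tokens, so use `generate-db-auth-token` for real connections.

```python
# Illustrative only: build an RDS IAM auth token as a SigV4-presigned request.
# All identifiers below are placeholders, not real credentials or endpoints.
import datetime
import hashlib
import hmac
import urllib.parse


def build_auth_token(host, port, user, access_key, secret_key, region, now=None):
    now = now or datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/rds-db/aws4_request"

    # Query parameters that appear in the token, in sorted (canonical) order.
    params = {
        "Action": "connect",
        "DBUser": user,
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": "900",
        "X-Amz-SignedHeaders": "host",
    }
    query = urllib.parse.urlencode(sorted(params.items()))

    # Canonical request: method, URI, query, headers, signed headers, payload hash.
    canonical_request = "\n".join([
        "GET", "/", query,
        f"host:{host}:{port}\n", "host",
        hashlib.sha256(b"").hexdigest(),
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])

    # Derive the signing key: HMAC chain over date, region, service, terminator.
    def sign(key, msg):
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()

    key = sign(("AWS4" + secret_key).encode(), datestamp)
    for part in (region, "rds-db", "aws4_request"):
        key = sign(key, part)
    signature = hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()

    return f"{host}:{port}/?{query}&X-Amz-Signature={signature}"
```

This mirrors the manual Java construction shown later in this topic; note that the token is passed to the database client as a password, without any `https://` scheme prefix.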

**Note**  
You cannot use a custom Route 53 DNS record instead of the DB cluster endpoint to generate the authentication token.

## Connecting to a DB cluster
<a name="UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.Connect"></a>

The general format for connecting is shown following.

```
mysql --host=hostName --port=portNumber --ssl-ca=full_path_to_ssl_certificate --enable-cleartext-plugin --user=userName --password=authToken
```

The parameters are as follows:
+ `--host` – The host name of the DB cluster that you want to access
+ `--port` – The port number used for connecting to your DB cluster
+ `--ssl-ca` – The full path to the SSL certificate file that contains the public key

  For more information, see [TLS connections to Aurora MySQL DB clusters](AuroraMySQL.Security.md#AuroraMySQL.Security.SSL).

  To download an SSL certificate, see [Using SSL/TLS to encrypt a connection to a DB cluster](UsingWithRDS.SSL.md).
+ `--enable-cleartext-plugin` – A value that specifies that `AWSAuthenticationPlugin` must be used for this connection

  If you are using a MariaDB client, the `--enable-cleartext-plugin` option isn't required.
+ `--user` – The database account that you want to access
+ `--password` – A signed IAM authentication token

The authentication token consists of several hundred characters and can be unwieldy on the command line. One way to work around this is to save the token to an environment variable, and then use that variable when you connect. The following example shows one way to perform this workaround. In the example, */sample_dir/* is the full path to the SSL certificate file that contains the public key.

```
RDSHOST="mysqlcluster.cluster-123456789012.us-east-1.rds.amazonaws.com"
TOKEN="$(aws rds generate-db-auth-token --hostname $RDSHOST --port 3306 --region us-east-1 --username jane_doe)"

mysql --host=$RDSHOST --port=3306 --ssl-ca=/sample_dir/global-bundle.pem --enable-cleartext-plugin --user=jane_doe --password=$TOKEN
```

When you connect using `AWSAuthenticationPlugin`, the connection is secured using SSL. To verify this, type the following at the `mysql>` command prompt.

```
show status like 'Ssl%';
```

The following lines in the output show more details.

```
+---------------+-------------+
| Variable_name | Value                                                                                                                                                                                                                                |
+---------------+-------------+
| ...           | ...
| Ssl_cipher    | AES256-SHA                                                                                                                                                                                                                           |
| ...           | ...
| Ssl_version   | TLSv1.1                                                                                                                                                                                                                              |
| ...           | ...
+-----------------------------+
```
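A token embeds its issue time (`X-Amz-Date`) and lifetime (`X-Amz-Expires`, in seconds), so a script that caches a token in an environment variable can check locally whether the token is still within its validity window before reusing it. The following helper is a hypothetical convenience sketch, not part of the AWS CLI:

```python
# Hypothetical helper: parse X-Amz-Date and X-Amz-Expires from a cached
# auth token to decide whether it is still within its validity window.
import datetime
import urllib.parse


def token_is_fresh(token, now=None):
    # Tokens look like "host:port/?Action=connect&..."; prefix "//" so that
    # urlparse treats the leading host:port as the netloc, not the path.
    query = urllib.parse.parse_qs(urllib.parse.urlparse("//" + token).query)
    issued = datetime.datetime.strptime(
        query["X-Amz-Date"][0], "%Y%m%dT%H%M%SZ"
    ).replace(tzinfo=datetime.timezone.utc)
    lifetime = datetime.timedelta(seconds=int(query["X-Amz-Expires"][0]))
    now = now or datetime.datetime.now(datetime.timezone.utc)
    return now < issued + lifetime
```

In practice it is often simpler to generate a fresh token for each connection attempt, since generation is a local signing operation with no API call.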

If you want to connect to a DB cluster through a proxy, see [Connecting to a database using IAM authentication](rds-proxy-connecting.md#rds-proxy-connecting-iam).

# Connecting to your DB cluster using IAM authentication from the command line: AWS CLI and psql client
<a name="UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.PostgreSQL"></a>

You can connect from the command line to an Aurora PostgreSQL DB cluster with the AWS CLI and psql command line tool as described following.

**Prerequisites**  
The following are prerequisites for connecting to your DB cluster using IAM authentication:
+ [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md)
+ [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md)
+ [Creating a database account using IAM authentication](UsingWithRDS.IAMDBAuth.DBAccounts.md)

**Note**  
For information about connecting to your database using pgAdmin with IAM authentication, see the blog post [Using IAM authentication to connect with pgAdmin Amazon Aurora PostgreSQL or Amazon RDS for PostgreSQL](https://aws.amazon.com/blogs/database/using-iam-authentication-to-connect-with-pgadmin-amazon-aurora-postgresql-or-amazon-rds-for-postgresql/).

**Topics**
+ [Generating an IAM authentication token](#UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.AuthToken.PostgreSQL)
+ [Connecting to an Aurora PostgreSQL cluster](#UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.Connect.PostgreSQL)

## Generating an IAM authentication token
<a name="UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.AuthToken.PostgreSQL"></a>

The authentication token consists of several hundred characters, so it can be unwieldy on the command line. One way to work around this is to save the token to an environment variable, and then use that variable when you connect. The following example shows how to use the AWS CLI to get a signed authentication token using the `generate-db-auth-token` command, and store it in a `PGPASSWORD` environment variable.

```
export RDSHOST="mypostgres-cluster.cluster-123456789012.us-west-2.rds.amazonaws.com"
export PGPASSWORD="$(aws rds generate-db-auth-token --hostname $RDSHOST --port 5432 --region us-west-2 --username jane_doe )"
```

In the example, the parameters to the `generate-db-auth-token` command are as follows:
+ `--hostname` – The host name of the DB cluster (cluster endpoint) that you want to access
+ `--port` – The port number used for connecting to your DB cluster
+ `--region` – The AWS Region where the DB cluster is running
+ `--username` – The database account that you want to access

The first several characters of the generated token look like the following.

```
mypostgres-cluster.cluster-123456789012.us-west-2.rds.amazonaws.com:5432/?Action=connect&DBUser=jane_doe&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=900...
```

**Note**  
You cannot use a custom Route 53 DNS record instead of the DB cluster endpoint to generate the authentication token.

## Connecting to an Aurora PostgreSQL cluster
<a name="UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.Connect.PostgreSQL"></a>

The general format for using psql to connect is shown following.

```
psql "host=hostName port=portNumber sslmode=verify-full sslrootcert=full_path_to_ssl_certificate dbname=DBName user=userName password=authToken"
```

The parameters are as follows:
+ `host` – The host name of the DB cluster (cluster endpoint) that you want to access
+ `port` – The port number used for connecting to your DB cluster
+ `sslmode` – The SSL mode to use

  When you use `sslmode=verify-full`, the SSL connection verifies the DB cluster endpoint against the endpoint in the SSL certificate.
+ `sslrootcert` – The full path to the SSL certificate file that contains the public key

  For more information, see [Securing Aurora PostgreSQL data with SSL/TLS](AuroraPostgreSQL.Security.md#AuroraPostgreSQL.Security.SSL).

  To download an SSL certificate, see [Using SSL/TLS to encrypt a connection to a DB cluster](UsingWithRDS.SSL.md).
+ `dbname` – The database that you want to access
+ `user` – The database account that you want to access
+ `password` – A signed IAM authentication token

**Note**  
You cannot use a custom Route 53 DNS record instead of the DB cluster endpoint to generate the authentication token.

The following example shows using psql to connect. In the example, psql uses the environment variable `RDSHOST` for the host and the environment variable `PGPASSWORD` for the generated token. Also, */sample_dir/* is the full path to the SSL certificate file that contains the public key.

```
export RDSHOST="mypostgres-cluster.cluster-123456789012.us-west-2.rds.amazonaws.com"
export PGPASSWORD="$(aws rds generate-db-auth-token --hostname $RDSHOST --port 5432 --region us-west-2 --username jane_doe )"
                    
psql "host=$RDSHOST port=5432 sslmode=verify-full sslrootcert=/sample_dir/global-bundle.pem dbname=DBName user=jane_doe password=$PGPASSWORD"
```

If you want to connect to a DB cluster through a proxy, see [Connecting to a database using IAM authentication](rds-proxy-connecting.md#rds-proxy-connecting-iam).

# Connecting to your DB cluster using IAM authentication and the AWS SDK for .NET
<a name="UsingWithRDS.IAMDBAuth.Connecting.NET"></a>

You can connect to an Aurora MySQL or Aurora PostgreSQL DB cluster with the AWS SDK for .NET as described following.

**Prerequisites**  
The following are prerequisites for connecting to your DB cluster using IAM authentication:
+ [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md)
+ [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md)
+ [Creating a database account using IAM authentication](UsingWithRDS.IAMDBAuth.DBAccounts.md)

**Examples**  
The following code examples show how to generate an authentication token, and then use it to connect to a DB cluster.

To run this code example, you need the [AWS SDK for .NET](http://aws.amazon.com/sdk-for-net/), found on the AWS site. The `AWSSDK.Core` and `AWSSDK.RDS` packages are required. To connect to a DB cluster, use the .NET database connector for your DB engine, such as MySqlConnector for MariaDB or MySQL, or Npgsql for PostgreSQL.

This code connects to an Aurora MySQL DB cluster. Modify the values of the following variables as needed:
+ `server` – The endpoint of the DB cluster that you want to access
+ `user` – The database account that you want to access
+ `database` – The database that you want to access
+ `port` – The port number used for connecting to your DB cluster
+ `SslMode` – The SSL mode to use

  When you use `SslMode=Required`, the SSL connection verifies the DB cluster endpoint against the endpoint in the SSL certificate.
+ `SslCa` – The full path to the SSL certificate for Amazon Aurora

  To download a certificate, see [Using SSL/TLS to encrypt a connection to a DB cluster](UsingWithRDS.SSL.md).

**Note**  
You cannot use a custom Route 53 DNS record instead of the DB cluster endpoint to generate the authentication token.

```
using System;
using System.Data;
using MySql.Data;
using MySql.Data.MySqlClient;
using Amazon;

namespace ubuntu
{
  class Program
  {
    static void Main(string[] args)
    {
      var pwd = Amazon.RDS.Util.RDSAuthTokenGenerator.GenerateAuthToken(RegionEndpoint.USEast1, "mysqlcluster.cluster-123456789012.us-east-1.rds.amazonaws.com", 3306, "jane_doe");
      // for debug only Console.Write("{0}\n", pwd);  //this verifies the token is generated

      MySqlConnection conn = new MySqlConnection($"server=mysqlcluster.cluster-123456789012.us-east-1.rds.amazonaws.com;user=jane_doe;database=mydB;port=3306;password={pwd};SslMode=Required;SslCa=full_path_to_ssl_certificate");
      conn.Open();

      // Define a query
      MySqlCommand sampleCommand = new MySqlCommand("SHOW DATABASES;", conn);

      // Execute a query
      MySqlDataReader mysqlDataRdr = sampleCommand.ExecuteReader();

      // Read all rows and output the first column in each row
      while (mysqlDataRdr.Read())
        Console.WriteLine(mysqlDataRdr[0]);

      mysqlDataRdr.Close();
      // Close connection
      conn.Close();
    }
  }
}
```

This code connects to an Aurora PostgreSQL DB cluster.

Modify the values of the following variables as needed:
+ `Server` – The endpoint of the DB cluster that you want to access
+ `User ID` – The database account that you want to access
+ `Database` – The database that you want to access
+ `Port` – The port number used for connecting to your DB cluster
+ `SSL Mode` – The SSL mode to use

  When you use `SSL Mode=Require`, the SSL connection verifies the DB cluster endpoint against the endpoint in the SSL certificate.
+ `Root Certificate` – The full path to the SSL certificate for Amazon Aurora

  To download a certificate, see [Using SSL/TLS to encrypt a connection to a DB cluster](UsingWithRDS.SSL.md).

**Note**  
You cannot use a custom Route 53 DNS record instead of the DB cluster endpoint to generate the authentication token.

```
using System;
using Npgsql;
using Amazon.RDS.Util;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            var pwd = RDSAuthTokenGenerator.GenerateAuthToken("postgresmycluster.cluster-123456789012.us-east-1.rds.amazonaws.com", 5432, "jane_doe");
// for debug only Console.Write("{0}\n", pwd);  //this verifies the token is generated

            NpgsqlConnection conn = new NpgsqlConnection($"Server=postgresmycluster.cluster-123456789012.us-east-1.rds.amazonaws.com;User Id=jane_doe;Password={pwd};Database=mydb;SSL Mode=Require;Root Certificate=full_path_to_ssl_certificate");
            conn.Open();

            // Define a query
            NpgsqlCommand cmd = new NpgsqlCommand("select count(*) FROM pg_user", conn);

            // Execute a query
            NpgsqlDataReader dr = cmd.ExecuteReader();

            // Read all rows and output the first column in each row
            while (dr.Read())
                Console.Write("{0}\n", dr[0]);

            // Close connection
            conn.Close();
        }
    }
}
```

If you want to connect to a DB cluster through a proxy, see [Connecting to a database using IAM authentication](rds-proxy-connecting.md#rds-proxy-connecting-iam).

# Connecting to your DB cluster using IAM authentication and the AWS SDK for Go
<a name="UsingWithRDS.IAMDBAuth.Connecting.Go"></a>

You can connect to an Aurora MySQL or Aurora PostgreSQL DB cluster with the AWS SDK for Go as described following.

**Prerequisites**  
The following are prerequisites for connecting to your DB cluster using IAM authentication:
+ [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md)
+ [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md)
+ [Creating a database account using IAM authentication](UsingWithRDS.IAMDBAuth.DBAccounts.md)

**Examples**  
To run these code examples, you need the [AWS SDK for Go](http://aws.amazon.com/sdk-for-go/), found on the AWS site.

Modify the values of the following variables as needed:
+ `dbName` – The database that you want to access
+ `dbUser` – The database account that you want to access
+ `dbHost` – The endpoint of the DB cluster that you want to access
**Note**  
You cannot use a custom Route 53 DNS record instead of the DB cluster endpoint to generate the authentication token.
+ `dbPort` – The port number used for connecting to your DB cluster
+ `region` – The AWS Region where the DB cluster is running

In addition, make sure the imported libraries in the sample code exist on your system.

**Important**  
The examples in this section use the following code to provide credentials that access a database from a local environment:  
`creds := credentials.NewEnvCredentials()`  
If you are accessing a database from an AWS service, such as Amazon EC2 or Amazon ECS, you can replace the code with the following code:  
`sess := session.Must(session.NewSession())`  
`creds := sess.Config.Credentials`  
If you make this change, make sure you add the following import:  
`"github.com/aws/aws-sdk-go/aws/session"`

**Topics**
+ [Connecting using IAM authentication and the AWS SDK for Go V2](#UsingWithRDS.IAMDBAuth.Connecting.GoV2)
+ [Connecting using IAM authentication and the AWS SDK for Go V1](#UsingWithRDS.IAMDBAuth.Connecting.GoV1)

## Connecting using IAM authentication and the AWS SDK for Go V2
<a name="UsingWithRDS.IAMDBAuth.Connecting.GoV2"></a>

You can connect to a DB cluster using IAM authentication and the AWS SDK for Go V2.

The following code examples show how to generate an authentication token, and then use it to connect to a DB cluster. 

This code connects to an Aurora MySQL DB cluster.

```
package main
                
import (
     "context"
     "database/sql"
     "fmt"

     "github.com/aws/aws-sdk-go-v2/config"
     "github.com/aws/aws-sdk-go-v2/feature/rds/auth"
     _ "github.com/go-sql-driver/mysql"
)

func main() {

     var dbName string = "DatabaseName"
     var dbUser string = "DatabaseUser"
     var dbHost string = "mysqlcluster.cluster-123456789012.us-east-1.rds.amazonaws.com"
     var dbPort int = 3306
     var dbEndpoint string = fmt.Sprintf("%s:%d", dbHost, dbPort)
     var region string = "us-east-1"

    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
    	panic("configuration error: " + err.Error())
    }

    authenticationToken, err := auth.BuildAuthToken(
    	context.TODO(), dbEndpoint, region, dbUser, cfg.Credentials)
    if err != nil {
	    panic("failed to create authentication token: " + err.Error())
    }

    dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?tls=true&allowCleartextPasswords=true",
        dbUser, authenticationToken, dbEndpoint, dbName,
    )

    db, err := sql.Open("mysql", dsn)
    if err != nil {
        panic(err)
    }

    err = db.Ping()
    if err != nil {
        panic(err)
    }
}
```

This code connects to an Aurora PostgreSQL DB cluster.

```
package main

import (
     "context"
     "database/sql"
     "fmt"

     "github.com/aws/aws-sdk-go-v2/config"
     "github.com/aws/aws-sdk-go-v2/feature/rds/auth"
     _ "github.com/lib/pq"
)

func main() {

     var dbName string = "DatabaseName"
     var dbUser string = "DatabaseUser"
     var dbHost string = "postgresmycluster.cluster-123456789012.us-east-1.rds.amazonaws.com"
     var dbPort int = 5432
     var dbEndpoint string = fmt.Sprintf("%s:%d", dbHost, dbPort)
     var region string = "us-east-1"

    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
    	panic("configuration error: " + err.Error())
    }

    authenticationToken, err := auth.BuildAuthToken(
    	context.TODO(), dbEndpoint, region, dbUser, cfg.Credentials)
    if err != nil {
	    panic("failed to create authentication token: " + err.Error())
    }

    dsn := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s",
        dbHost, dbPort, dbUser, authenticationToken, dbName,
    )

    db, err := sql.Open("postgres", dsn)
    if err != nil {
        panic(err)
    }

    err = db.Ping()
    if err != nil {
        panic(err)
    }
}
```

If you want to connect to a DB cluster through a proxy, see [Connecting to a database using IAM authentication](rds-proxy-connecting.md#rds-proxy-connecting-iam).

## Connecting using IAM authentication and the AWS SDK for Go V1
<a name="UsingWithRDS.IAMDBAuth.Connecting.GoV1"></a>

You can connect to a DB cluster using IAM authentication and the AWS SDK for Go V1.

The following code examples show how to generate an authentication token, and then use it to connect to a DB cluster. 

This code connects to an Aurora MySQL DB cluster.

```
package main
         
import (
    "database/sql"
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/service/rds/rdsutils"
    _ "github.com/go-sql-driver/mysql"
)

func main() {
    dbName := "app"
    dbUser := "jane_doe"
    dbHost := "mysqlcluster.cluster-123456789012.us-east-1.rds.amazonaws.com"
    dbPort := 3306
    dbEndpoint := fmt.Sprintf("%s:%d", dbHost, dbPort)
    region := "us-east-1"

    creds := credentials.NewEnvCredentials()
    authToken, err := rdsutils.BuildAuthToken(dbEndpoint, region, dbUser, creds)
    if err != nil {
        panic(err)
    }

    dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?tls=true&allowCleartextPasswords=true",
        dbUser, authToken, dbEndpoint, dbName,
    )

    db, err := sql.Open("mysql", dsn)
    if err != nil {
        panic(err)
    }

    err = db.Ping()
    if err != nil {
        panic(err)
    }
}
```

This code connects to an Aurora PostgreSQL DB cluster.

```
package main

import (
	"database/sql"
	"fmt"

	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/service/rds/rdsutils"
	_ "github.com/lib/pq"
)

func main() {
    dbName := "app"
    dbUser := "jane_doe"
    dbHost := "postgresmycluster.cluster-123456789012.us-east-1.rds.amazonaws.com"
    dbPort := 5432
    dbEndpoint := fmt.Sprintf("%s:%d", dbHost, dbPort)
    region := "us-east-1"

    creds := credentials.NewEnvCredentials()
    authToken, err := rdsutils.BuildAuthToken(dbEndpoint, region, dbUser, creds)
    if err != nil {
        panic(err)
    }

    dsn := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s",
        dbHost, dbPort, dbUser, authToken, dbName,
    )

    db, err := sql.Open("postgres", dsn)
    if err != nil {
        panic(err)
    }

    err = db.Ping()
    if err != nil {
        panic(err)
    }
}
```

If you want to connect to a DB cluster through a proxy, see [Connecting to a database using IAM authentication](rds-proxy-connecting.md#rds-proxy-connecting-iam).

# Connecting to your DB cluster using IAM authentication and the AWS SDK for Java
<a name="UsingWithRDS.IAMDBAuth.Connecting.Java"></a>

You can connect to an Aurora MySQL or Aurora PostgreSQL DB cluster with the AWS SDK for Java as described following.

**Prerequisites**  
The following are prerequisites for connecting to your DB cluster using IAM authentication:
+ [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md)
+ [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md)
+ [Creating a database account using IAM authentication](UsingWithRDS.IAMDBAuth.DBAccounts.md)
+ [Set up the AWS SDK for Java](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-install.html)

For examples on how to use the SDK for Java 2.x, see [Amazon RDS examples using SDK for Java 2.x](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/java_rds_code_examples.html). You can also use the AWS Advanced JDBC Wrapper, see [AWS Advanced JDBC Wrapper documentation](https://github.com/aws/aws-advanced-jdbc-wrapper/blob/main/docs/Documentation.md).

**Topics**
+ [Generating an IAM authentication token](#UsingWithRDS.IAMDBAuth.Connecting.Java.AuthToken)
+ [Manually constructing an IAM authentication token](#UsingWithRDS.IAMDBAuth.Connecting.Java.AuthToken2)
+ [Connecting to a DB cluster](#UsingWithRDS.IAMDBAuth.Connecting.Java.AuthToken.Connect)

## Generating an IAM authentication token
<a name="UsingWithRDS.IAMDBAuth.Connecting.Java.AuthToken"></a>

If you are writing programs using the AWS SDK for Java, you can get a signed authentication token using the `RdsIamAuthTokenGenerator` class. Using this class requires that you provide AWS credentials. To do this, you create an instance of the `DefaultAWSCredentialsProviderChain` class. `DefaultAWSCredentialsProviderChain` uses the first AWS access key and secret key that it finds in the [default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default). For more information about AWS access keys, see [Managing access keys for users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).

**Note**  
You cannot use a custom Route 53 DNS record instead of the DB cluster endpoint to generate the authentication token.

After you create an instance of `RdsIamAuthTokenGenerator`, you can call the `getAuthToken` method to obtain a signed token. Provide the AWS Region, host name, port number, and user name. The following code example illustrates how to do this.

```
package com.amazonaws.codesamples;

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.rds.auth.GetIamAuthTokenRequest;
import com.amazonaws.services.rds.auth.RdsIamAuthTokenGenerator;

public class GenerateRDSAuthToken {

    public static void main(String[] args) {

	    String region = "us-west-2";
	    String hostname = "rdsmysql.123456789012.us-west-2.rds.amazonaws.com";
	    String port = "3306";
	    String username = "jane_doe";
	
	    System.out.println(generateAuthToken(region, hostname, port, username));
    }

    static String generateAuthToken(String region, String hostName, String port, String username) {

	    RdsIamAuthTokenGenerator generator = RdsIamAuthTokenGenerator.builder()
		    .credentials(new DefaultAWSCredentialsProviderChain())
		    .region(region)
		    .build();

	    String authToken = generator.getAuthToken(
		    GetIamAuthTokenRequest.builder()
		    .hostname(hostName)
		    .port(Integer.parseInt(port))
		    .userName(username)
		    .build());
	    
	    return authToken;
    }

}
```

## Manually constructing an IAM authentication token
<a name="UsingWithRDS.IAMDBAuth.Connecting.Java.AuthToken2"></a>

In Java, the easiest way to generate an authentication token is to use `RdsIamAuthTokenGenerator`. This class creates an authentication token for you, and then signs it using AWS signature version 4. For more information, see [Signature version 4 signing process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) in the *AWS General Reference.*

However, you can also construct and sign an authentication token manually, as shown in the following code example.

```
package com.amazonaws.codesamples;

import com.amazonaws.SdkClientException;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.auth.SigningAlgorithm;
import com.amazonaws.util.BinaryUtils;
import org.apache.commons.lang3.StringUtils;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.Charset;
import java.security.MessageDigest;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.SortedMap;
import java.util.TreeMap;

import static com.amazonaws.auth.internal.SignerConstants.AWS4_TERMINATOR;
import static com.amazonaws.util.StringUtils.UTF8;

public class CreateRDSAuthTokenManually {
    public static String httpMethod = "GET";
    public static String action = "connect";
    public static String canonicalURIParameter = "/";
    public static SortedMap<String, String> canonicalQueryParameters = new TreeMap();
    public static String payload = StringUtils.EMPTY;
    public static String signedHeader = "host";
    public static String algorithm = "AWS4-HMAC-SHA256";
    public static String serviceName = "rds-db";
    public static String requestWithoutSignature;

    public static void main(String[] args) throws Exception {

        String region = "us-west-2";
        String instanceName = "rdsmysql.123456789012.us-west-2.rds.amazonaws.com";
        String port = "3306";
        String username = "jane_doe";
	
        Date now = new Date();
        String date = new SimpleDateFormat("yyyyMMdd").format(now);
        String dateTimeStamp = new SimpleDateFormat("yyyyMMdd'T'HHmmss'Z'").format(now);
        DefaultAWSCredentialsProviderChain creds = new DefaultAWSCredentialsProviderChain();
	    String awsAccessKey = creds.getCredentials().getAWSAccessKeyId();
	    String awsSecretKey = creds.getCredentials().getAWSSecretKey();
        String expirySeconds = "900"; // X-Amz-Expires is expressed in seconds
        
        System.out.println("Step 1:  Create a canonical request:");
        String canonicalString = createCanonicalString(username, awsAccessKey, date, dateTimeStamp, region, expirySeconds, instanceName, port);
        System.out.println(canonicalString);
        System.out.println();

        System.out.println("Step 2:  Create a string to sign:");        
        String stringToSign = createStringToSign(dateTimeStamp, canonicalString, awsAccessKey, date, region);
        System.out.println(stringToSign);
        System.out.println();

        System.out.println("Step 3:  Calculate the signature:");        
        String signature = BinaryUtils.toHex(calculateSignature(stringToSign, newSigningKey(awsSecretKey, date, region, serviceName)));
        System.out.println(signature);
        System.out.println();

        System.out.println("Step 4:  Add the signing info to the request");                
        System.out.println(appendSignature(signature));
        System.out.println();
        
    }

    //Step 1: Create a canonical request date should be in format YYYYMMDD and dateTime should be in format YYYYMMDDTHHMMSSZ
    public static String createCanonicalString(String user, String accessKey, String date, String dateTime, String region, String expiryPeriod, String hostName, String port) throws Exception {
        canonicalQueryParameters.put("Action", action);
        canonicalQueryParameters.put("DBUser", user);
        canonicalQueryParameters.put("X-Amz-Algorithm", "AWS4-HMAC-SHA256");
        canonicalQueryParameters.put("X-Amz-Credential", accessKey + "%2F" + date + "%2F" + region + "%2F" + serviceName + "%2Faws4_request");
        canonicalQueryParameters.put("X-Amz-Date", dateTime);
        canonicalQueryParameters.put("X-Amz-Expires", expiryPeriod);
        canonicalQueryParameters.put("X-Amz-SignedHeaders", signedHeader);
        String canonicalQueryString = "";
        while(!canonicalQueryParameters.isEmpty()) {
            String currentQueryParameter = canonicalQueryParameters.firstKey();
            String currentQueryParameterValue = canonicalQueryParameters.remove(currentQueryParameter);
            canonicalQueryString = canonicalQueryString + currentQueryParameter + "=" + currentQueryParameterValue;
            if (!currentQueryParameter.equals("X-Amz-SignedHeaders")) {
                canonicalQueryString += "&";
            }
        }
        String canonicalHeaders = "host:" + hostName + ":" + port + '\n';
        requestWithoutSignature = hostName + ":" + port + "/?" + canonicalQueryString;

        String hashedPayload = BinaryUtils.toHex(hash(payload));
        return httpMethod + '\n' + canonicalURIParameter + '\n' + canonicalQueryString + '\n' + canonicalHeaders + '\n' + signedHeader + '\n' + hashedPayload;

    }

    //Step 2: Create a string to sign using sig v4
    public static String createStringToSign(String dateTime, String canonicalRequest, String accessKey, String date, String region) throws Exception {
        String credentialScope = date + "/" + region + "/" + serviceName + "/aws4_request";
        return algorithm + '\n' + dateTime + '\n' + credentialScope + '\n' + BinaryUtils.toHex(hash(canonicalRequest));

    }

    //Step 3: Calculate signature
    /**
     * Step 3 of the AWS Signature Version 4 calculation. It involves deriving
     * the signing key and computing the signature. Refer to
     * http://docs.aws.amazon.com/general/latest/gr/sigv4-calculate-signature.html
     */
    public static byte[] calculateSignature(String stringToSign,
                                            byte[] signingKey) {
        return sign(stringToSign.getBytes(Charset.forName("UTF-8")), signingKey,
                SigningAlgorithm.HmacSHA256);
    }

    public static byte[] sign(byte[] data, byte[] key,
                          SigningAlgorithm algorithm) throws SdkClientException {
        try {
            Mac mac = algorithm.getMac();
            mac.init(new SecretKeySpec(key, algorithm.toString()));
            return mac.doFinal(data);
        } catch (Exception e) {
            throw new SdkClientException(
                    "Unable to calculate a request signature: "
                            + e.getMessage(), e);
        }
    }

    public static byte[] newSigningKey(String secretKey,
                                   String dateStamp, String regionName, String serviceName) {
        byte[] kSecret = ("AWS4" + secretKey).getBytes(Charset.forName("UTF-8"));
        byte[] kDate = sign(dateStamp, kSecret, SigningAlgorithm.HmacSHA256);
        byte[] kRegion = sign(regionName, kDate, SigningAlgorithm.HmacSHA256);
        byte[] kService = sign(serviceName, kRegion,
                SigningAlgorithm.HmacSHA256);
        return sign(AWS4_TERMINATOR, kService, SigningAlgorithm.HmacSHA256);
    }

    public static byte[] sign(String stringData, byte[] key,
                       SigningAlgorithm algorithm) throws SdkClientException {
        try {
            byte[] data = stringData.getBytes(UTF8);
            return sign(data, key, algorithm);
        } catch (Exception e) {
            throw new SdkClientException(
                    "Unable to calculate a request signature: "
                            + e.getMessage(), e);
        }
    }

    //Step 4: append the signature
    public static String appendSignature(String signature) {
        return requestWithoutSignature + "&X-Amz-Signature=" + signature;
    }

    public static byte[] hash(String s) throws Exception {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(s.getBytes(UTF8));
            return md.digest();
        } catch (Exception e) {
            throw new SdkClientException(
                    "Unable to compute hash while signing request: "
                            + e.getMessage(), e);
        }
    }
}
```

## Connecting to a DB cluster
<a name="UsingWithRDS.IAMDBAuth.Connecting.Java.AuthToken.Connect"></a>

The following code example shows how to generate an authentication token, and then use it to connect to a cluster running Aurora MySQL. 

To run this code example, you need the [AWS SDK for Java](http://aws.amazon.com/sdk-for-java/), found on the AWS site. In addition, you need the following:
+ MySQL Connector/J. This code example was tested with `mysql-connector-java-5.1.33-bin.jar`.
+ An intermediate certificate for Amazon Aurora that is specific to an AWS Region. (For more information, see [Using SSL/TLS to encrypt a connection to a DB cluster](UsingWithRDS.SSL.md).) At runtime, the class loader looks for the certificate in the same directory as this Java code example, so that the class loader can find it.
+ Modify the values of the following variables as needed:
  + `RDS_INSTANCE_HOSTNAME` – The host name of the DB cluster that you want to access.
  + `RDS_INSTANCE_PORT` – The port number used for connecting to your Aurora MySQL DB cluster.
  + `REGION_NAME` – The AWS Region where the DB cluster is running.
  + `DB_USER` – The database account that you want to access.
  + `SSL_CERTIFICATE` – An SSL certificate for Amazon Aurora that is specific to an AWS Region.

    To download a certificate for your AWS Region, see [Using SSL/TLS to encrypt a connection to a DB cluster](UsingWithRDS.SSL.md). Place the SSL certificate in the same directory as this Java program file, so that the class loader can find the certificate at runtime.

This code example obtains AWS credentials from the [default credential provider chain](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html#credentials-default).

**Note**  
As a security best practice, specify a password for `DEFAULT_KEY_STORE_PASSWORD` other than the example value shown here.

```
package com.amazonaws.samples;

import com.amazonaws.services.rds.auth.RdsIamAuthTokenGenerator;
import com.amazonaws.services.rds.auth.GetIamAuthTokenRequest;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.auth.AWSStaticCredentialsProvider;

import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

import java.net.URL;

public class IAMDatabaseAuthenticationTester {
    //AWS credentials of the IAM user with a policy enabling IAM database authenticated access to the DB by the DB user.
    private static final DefaultAWSCredentialsProviderChain creds = new DefaultAWSCredentialsProviderChain();
    private static final String AWS_ACCESS_KEY = creds.getCredentials().getAWSAccessKeyId();
    private static final String AWS_SECRET_KEY = creds.getCredentials().getAWSSecretKey();

    //Configuration parameters for the generation of the IAM Database Authentication token
    private static final String RDS_INSTANCE_HOSTNAME = "rdsmysql.123456789012.us-west-2.rds.amazonaws.com";
    private static final int RDS_INSTANCE_PORT = 3306;
    private static final String REGION_NAME = "us-west-2";
    private static final String DB_USER = "jane_doe";
    private static final String JDBC_URL = "jdbc:mysql://" + RDS_INSTANCE_HOSTNAME + ":" + RDS_INSTANCE_PORT;

    private static final String SSL_CERTIFICATE = "rds-ca-2019-us-west-2.pem";

    private static final String KEY_STORE_TYPE = "JKS";
    private static final String KEY_STORE_PROVIDER = "SUN";
    private static final String KEY_STORE_FILE_PREFIX = "sys-connect-via-ssl-test-cacerts";
    private static final String KEY_STORE_FILE_SUFFIX = ".jks";
    private static final String DEFAULT_KEY_STORE_PASSWORD = "changeit";

    public static void main(String[] args) throws Exception {
        //get the connection
        Connection connection = getDBConnectionUsingIam();

        //verify the connection is successful
        Statement stmt= connection.createStatement();
        ResultSet rs=stmt.executeQuery("SELECT 'Success!' FROM DUAL;");
        while (rs.next()) {
            String id = rs.getString(1);
            System.out.println(id); //Should print "Success!"
        }

        //close the connection
        stmt.close();
        connection.close();
        
        clearSslProperties();
        
    }

    /**
     * This method returns a connection to the db instance authenticated using IAM Database Authentication
     * @return
     * @throws Exception
     */
    private static Connection getDBConnectionUsingIam() throws Exception {
        setSslProperties();
        return DriverManager.getConnection(JDBC_URL, setMySqlConnectionProperties());
    }

    /**
     * This method sets the mysql connection properties which includes the IAM Database Authentication token
     * as the password. It also specifies that SSL verification is required.
     * @return
     */
    private static Properties setMySqlConnectionProperties() {
        Properties mysqlConnectionProperties = new Properties();
        mysqlConnectionProperties.setProperty("verifyServerCertificate","true");
        mysqlConnectionProperties.setProperty("useSSL", "true");
        mysqlConnectionProperties.setProperty("user",DB_USER);
        mysqlConnectionProperties.setProperty("password",generateAuthToken());
        return mysqlConnectionProperties;
    }

    /**
     * This method generates the IAM Auth Token.
     * An example IAM auth token looks like the following:
     * btusi123---cmz7kenwo2ye---rds---cn-north-1.amazonaws.com.rproxy.goskope.com.cn:3306/?Action=connect&DBUser=iamtestuser&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20171003T010726Z&X-Amz-SignedHeaders=host&X-Amz-Expires=899&X-Amz-Credential=AKIAPFXHGVDI5RNFO4AQ%2F20171003%2Fcn-north-1%2Frds-db%2Faws4_request&X-Amz-Signature=f9f45ef96c1f770cdad11a53e33ffa4c3730bc03fdee820cfdf1322eed15483b
     * @return
     */
    private static String generateAuthToken() {
        BasicAWSCredentials awsCredentials = new BasicAWSCredentials(AWS_ACCESS_KEY, AWS_SECRET_KEY);

        RdsIamAuthTokenGenerator generator = RdsIamAuthTokenGenerator.builder()
                .credentials(new AWSStaticCredentialsProvider(awsCredentials)).region(REGION_NAME).build();
        return generator.getAuthToken(GetIamAuthTokenRequest.builder()
                .hostname(RDS_INSTANCE_HOSTNAME).port(RDS_INSTANCE_PORT).userName(DB_USER).build());
    }

    /**
     * This method sets the SSL properties which specify the key store file, its type and password:
     * @throws Exception
     */
    private static void setSslProperties() throws Exception {
        System.setProperty("javax.net.ssl.trustStore", createKeyStoreFile());
        System.setProperty("javax.net.ssl.trustStoreType", KEY_STORE_TYPE);
        System.setProperty("javax.net.ssl.trustStorePassword", DEFAULT_KEY_STORE_PASSWORD);
    }

    /**
     * This method returns the path of the Key Store File needed for the SSL verification during the IAM Database Authentication to
     * the db instance.
     * @return
     * @throws Exception
     */
    private static String createKeyStoreFile() throws Exception {
        return createKeyStoreFile(createCertificate()).getPath();
    }

    /**
     *  This method generates the SSL certificate
     * @return
     * @throws Exception
     */
    private static X509Certificate createCertificate() throws Exception {
        CertificateFactory certFactory = CertificateFactory.getInstance("X.509");
        URL url = new File(SSL_CERTIFICATE).toURI().toURL();
        if (url == null) {
            throw new Exception();
        }
        try (InputStream certInputStream = url.openStream()) {
            return (X509Certificate) certFactory.generateCertificate(certInputStream);
        }
    }

    /**
     * This method creates the Key Store File
     * @param rootX509Certificate - the SSL certificate to be stored in the KeyStore
     * @return
     * @throws Exception
     */
    private static File createKeyStoreFile(X509Certificate rootX509Certificate) throws Exception {
        File keyStoreFile = File.createTempFile(KEY_STORE_FILE_PREFIX, KEY_STORE_FILE_SUFFIX);
        try (FileOutputStream fos = new FileOutputStream(keyStoreFile.getPath())) {
            KeyStore ks = KeyStore.getInstance(KEY_STORE_TYPE, KEY_STORE_PROVIDER);
            ks.load(null);
            ks.setCertificateEntry("rootCaCertificate", rootX509Certificate);
            ks.store(fos, DEFAULT_KEY_STORE_PASSWORD.toCharArray());
        }
        return keyStoreFile;
    }
    
    /**
     * This method clears the SSL properties.
     * @throws Exception
     */
    private static void clearSslProperties() throws Exception {
           System.clearProperty("javax.net.ssl.trustStore");
           System.clearProperty("javax.net.ssl.trustStoreType");
           System.clearProperty("javax.net.ssl.trustStorePassword"); 
    }
    
}
```

If you want to connect to a DB cluster through a proxy, see [Connecting to a database using IAM authentication](rds-proxy-connecting.md#rds-proxy-connecting-iam).

# Connecting to your DB cluster using IAM authentication and the AWS SDK for Python (Boto3)
<a name="UsingWithRDS.IAMDBAuth.Connecting.Python"></a>

You can connect to an Aurora MySQL or Aurora PostgreSQL DB cluster with the AWS SDK for Python (Boto3) as described following.

**Prerequisites**  
The following are prerequisites for connecting to your DB cluster using IAM authentication:
+ [Enabling and disabling IAM database authentication](UsingWithRDS.IAMDBAuth.Enabling.md)
+ [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md)
+ [Creating a database account using IAM authentication](UsingWithRDS.IAMDBAuth.DBAccounts.md)

In addition, make sure the imported libraries in the sample code exist on your system.

**Examples**  
The code examples use profiles for shared credentials. For information about specifying credentials, see [Credentials](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html) in the AWS SDK for Python (Boto3) documentation.
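
For example, a shared credentials file (`~/.aws/credentials`) containing the profiles referenced in the following code might look like this sketch. The key values are AWS's documented placeholder examples, not real credentials:

```
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[RDSCreds]
aws_access_key_id = AKIAI44QH8DHBEXAMPLE
aws_secret_access_key = je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
```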

The following code examples show how to generate an authentication token, and then use it to connect to a DB cluster. 

To run this code example, you need the [AWS SDK for Python (Boto3)](http://aws.amazon.com/sdk-for-python/), found on the AWS site.

Modify the values of the following variables as needed:
+ `ENDPOINT` – The endpoint of the DB cluster that you want to access
+ `PORT` – The port number used for connecting to your DB cluster
+ `USER` – The database account that you want to access
+ `REGION` – The AWS Region where the DB cluster is running
+ `DBNAME` – The database that you want to access
+ `SSLCERTIFICATE` – The full path to the SSL certificate for Amazon Aurora

  For `ssl_ca`, specify an SSL certificate. To download an SSL certificate, see [Using SSL/TLS to encrypt a connection to a DB cluster](UsingWithRDS.SSL.md).

**Note**  
You cannot use a custom Route 53 DNS record instead of the DB cluster endpoint to generate the authentication token.

This code connects to an Aurora MySQL DB cluster.

Before running this code, install the PyMySQL driver by following the instructions in the [Python Package Index](https://pypi.org/project/PyMySQL/).

```
import pymysql
import sys
import boto3
import os

ENDPOINT="mysqlcluster.cluster-123456789012.us-east-1.rds.amazonaws.com"
PORT=3306
USER="jane_doe"
REGION="us-east-1"
DBNAME="mydb"
os.environ['LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN'] = '1'

#gets the credentials from .aws/credentials
session = boto3.Session(profile_name='default')
client = session.client('rds')

token = client.generate_db_auth_token(DBHostname=ENDPOINT, Port=PORT, DBUsername=USER, Region=REGION)

try:
    conn =  pymysql.connect(auth_plugin_map={'mysql_clear_password':None},host=ENDPOINT, user=USER, password=token, port=PORT, database=DBNAME, ssl_ca='SSLCERTIFICATE', ssl_verify_identity=True, ssl_verify_cert=True)
    cur = conn.cursor()
    cur.execute("""SELECT now()""")
    query_results = cur.fetchall()
    print(query_results)
except Exception as e:
    print("Database connection failed due to {}".format(e))
```

This code connects to an Aurora PostgreSQL DB cluster.

Before running this code, install `psycopg2` by following the instructions in [Psycopg documentation](https://pypi.org/project/psycopg2/).

```
import psycopg2
import sys
import boto3
import os

ENDPOINT="postgresmycluster.cluster-123456789012.us-east-1.rds.amazonaws.com"
PORT="5432"
USER="jane_doe"
REGION="us-east-1"
DBNAME="mydb"

#gets the credentials from .aws/credentials
session = boto3.Session(profile_name='RDSCreds')
client = session.client('rds')

token = client.generate_db_auth_token(DBHostname=ENDPOINT, Port=PORT, DBUsername=USER, Region=REGION)

try:
    conn = psycopg2.connect(host=ENDPOINT, port=PORT, database=DBNAME, user=USER, password=token, sslrootcert="SSLCERTIFICATE")
    cur = conn.cursor()
    cur.execute("""SELECT now()""")
    query_results = cur.fetchall()
    print(query_results)
except Exception as e:
    print("Database connection failed due to {}".format(e))
```

If you want to connect to a DB cluster through a proxy, see [Connecting to a database using IAM authentication](rds-proxy-connecting.md#rds-proxy-connecting-iam).

# Troubleshooting for IAM DB authentication
<a name="UsingWithRDS.IAMDBAuth.Troubleshooting"></a>

Following, you can find troubleshooting ideas for some common IAM DB authentication issues and information on CloudWatch logs and metrics for IAM DB authentication.

## Exporting IAM DB authentication error logs to CloudWatch Logs
<a name="UsingWithRDS.IAMDBAuth.Troubleshooting.ErrorLogs"></a>

IAM DB authentication error logs are stored on the database host, and you can export these logs to your CloudWatch Logs account. Use the logs and remediation methods on this page to troubleshoot IAM DB authentication issues.

You can enable log exports to CloudWatch Logs from the console, AWS CLI, and RDS API. For console instructions, see [Publishing database logs to Amazon CloudWatch Logs](USER_LogAccess.Procedural.UploadtoCloudWatch.md).

To export your IAM DB authentication error logs to CloudWatch Logs when creating a DB cluster from the AWS CLI, use the following command:

```
aws rds create-db-cluster --db-cluster-identifier mydbcluster \
--region us-east-1 \
--engine aurora-postgresql \
--engine-version 16 \
--master-username master \
--master-user-password password \
--publicly-accessible \
--enable-iam-database-authentication \
--enable-cloudwatch-logs-exports=iam-db-auth-error
```

To export your IAM DB authentication error logs to CloudWatch Logs when modifying a DB cluster from the AWS CLI, use the following command:

```
aws rds modify-db-cluster --db-cluster-identifier mydbcluster \
--region us-east-1 \
--cloudwatch-logs-export-configuration '{"EnableLogTypes":["iam-db-auth-error"]}'
```

To verify that your DB cluster is exporting IAM DB authentication error logs to CloudWatch Logs, check that `iam-db-auth-error` appears in the `EnabledCloudwatchLogsExports` value in the output of the `describe-db-clusters` command.

```
aws rds describe-db-clusters --region us-east-1 --db-cluster-identifier mydbcluster
            ...
            
             "EnabledCloudwatchLogsExports": [
                "iam-db-auth-error"
            ],
            ...
```

## IAM DB authentication CloudWatch metrics
<a name="UsingWithRDS.IAMDBAuth.Troubleshooting.CWMetrics"></a>

Amazon Aurora delivers near-real-time metrics about IAM DB authentication to your Amazon CloudWatch account. The following table lists the IAM DB authentication metrics available in CloudWatch:


| Metric | Description | 
| --- | --- | 
|  `IamDbAuthConnectionRequests`  |  Total number of connection requests made with IAM DB authentication.  | 
|  `IamDbAuthConnectionSuccess`  |  Total number of successful IAM DB authentication requests.  | 
|  `IamDbAuthConnectionFailure`  |  Total number of failed IAM DB authentication requests.  | 
|  `IamDbAuthConnectionFailureInvalidToken`  | Total number of failed IAM DB authentication requests due to an invalid token. | 
|  `IamDbAuthConnectionFailureInsufficientPermissions`  |  Total number of failed IAM DB authentication requests due to incorrect policies or permissions.  | 
|  `IamDbAuthConnectionFailureThrottling`  |  Total number of failed IAM DB authentication requests due to IAM DB authentication throttling.  | 
|  `IamDbAuthConnectionFailureServerError`  |  Total number of failed IAM DB authentication requests due to an internal server error in the IAM DB authentication feature.  | 
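
As a sketch, you can retrieve one of these metrics with the AWS SDK for Python (Boto3). The snippet below builds the parameters for a CloudWatch `get_metric_statistics` call; it assumes these metrics are published in the `AWS/RDS` namespace with a `DBClusterIdentifier` dimension, and `mydbcluster` is a placeholder identifier:

```
import datetime

def iam_auth_failure_params(cluster_id, minutes=60):
    """Build get_metric_statistics parameters for IAM DB auth failures."""
    now = datetime.datetime.now(datetime.timezone.utc)
    return {
        "Namespace": "AWS/RDS",  # assumption: Aurora publishes these metrics here
        "MetricName": "IamDbAuthConnectionFailure",
        "Dimensions": [{"Name": "DBClusterIdentifier", "Value": cluster_id}],
        "StartTime": now - datetime.timedelta(minutes=minutes),
        "EndTime": now,
        "Period": 300,           # 5-minute buckets
        "Statistics": ["Sum"],
    }

# Usage (requires AWS credentials; commented out so the sketch stays self-contained):
# import boto3
# cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
# response = cloudwatch.get_metric_statistics(**iam_auth_failure_params("mydbcluster"))
# print(sum(dp["Sum"] for dp in response["Datapoints"]))
```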

## Common issues and solutions
<a name="UsingWithRDS.IAMDBAuth.Troubleshooting.IssuesSolutions"></a>

You might encounter the following issues when using IAM DB authentication. Use the remediation steps in the table to resolve them:


| Error | Metric(s) | Cause | Solution | 
| --- | --- | --- | --- | 
|  `[ERROR] Failed to authenticate the connection request for user db_user because the provided token is malformed or otherwise invalid. (Status Code: 400, Error Code: InvalidToken)`  |  `IamDbAuthConnectionFailure` `IamDbAuthConnectionFailureInvalidToken`  |  The IAM DB authentication token in the connection request is either not a valid SigV4 token or is not formatted correctly.  |  Check the token generation logic in your application, and make sure that you pass the token with valid formatting. Truncating the token or formatting the string incorrectly makes the token invalid.  | 
|  `[ERROR] Failed to authenticate the connection request for user db_user because the token age is longer than 15 minutes. (Status Code: 400, Error Code: ExpiredToken)`  |  `IamDbAuthConnectionFailure` `IamDbAuthConnectionFailureInvalidToken`  |  The IAM DB authentication token has expired. Tokens are only valid for 15 minutes.  |  Check the token caching and token re-use logic in your application. Don't re-use tokens that are older than 15 minutes.  | 
|  `[ERROR] Failed to authorize the connection request for user db_user because the IAM policy assumed by the caller 'arn:aws:sts::123456789012:assumed-role/<RoleName>/<RoleSession>' is not authorized to perform rds-db:connect on the DB instance. (Status Code: 403, Error Code: NotAuthorized)`  |  `IamDbAuthConnectionFailure` `IamDbAuthConnectionFailureInsufficientPermissions`  |  This error might be due to the following reasons: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.IAMDBAuth.Troubleshooting.html)  |  Verify the IAM role and policy that your application assumes. Make sure that you use the same policy to generate the token as to connect to the DB.  | 
|  `[ERROR] Failed to authorize the connection request for user db_user due to IAM DB authentication throttling. (Status Code: 429, Error Code: ThrottlingException)`  |  `IamDbAuthConnectionFailure` `IamDbAuthConnectionFailureThrottling`  | You are making too many connection requests to your DB in a short amount of time. The IAM DB authentication throttling limit is 200 connections per second. |  Reduce the rate of establishing new connections with IAM authentication. Consider implementing connection pooling using RDS Proxy to reuse established connections in your application.  | 
|  `[ERROR] Failed to authorize the connection request for user db_user due to an internal IAM DB authentication error. (Status Code: 500, Error Code: InternalError)`  |  `IamDbAuthConnectionFailure` `IamDbAuthConnectionFailureServerError` |  There was an internal error while authorizing the DB connection with IAM DB authentication.  |  Contact AWS Support (https://aws.amazon.com/premiumsupport/) to investigate the issue.  | 
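
Several of these failures come down to token lifetime: tokens expire after 15 minutes, so any cached token must be refreshed before then. The following is a minimal sketch of that refresh logic; the `generate` callable and the 14-minute safety margin are assumptions for illustration, not an AWS API:

```
import time

class TokenCache:
    """Re-use an IAM auth token until it nears the 15-minute expiry."""

    def __init__(self, generate, max_age_seconds=14 * 60, clock=time.monotonic):
        self._generate = generate        # callable returning a fresh token
        self._max_age = max_age_seconds  # refresh before the 15-minute limit
        self._clock = clock
        self._token = None
        self._issued_at = None

    def token(self):
        now = self._clock()
        if self._token is None or now - self._issued_at >= self._max_age:
            self._token = self._generate()
            self._issued_at = now
        return self._token
```

With Boto3, `generate` would wrap a call to `client.generate_db_auth_token(...)`. If you use RDS Proxy for connection pooling instead, established connections are reused and fewer tokens are needed.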

# Troubleshooting Amazon Aurora identity and access
<a name="security_iam_troubleshoot"></a>

Use the following information to help you diagnose and fix common issues that you might encounter when working with Aurora and IAM.

**Topics**
+ [I'm not authorized to perform an action in Aurora](#security_iam_troubleshoot-no-permissions)
+ [I'm not authorized to perform iam:PassRole](#security_iam_troubleshoot-passrole)
+ [I want to allow people outside of my AWS account to access my Aurora resources](#security_iam_troubleshoot-cross-account-access)

## I'm not authorized to perform an action in Aurora
<a name="security_iam_troubleshoot-no-permissions"></a>

If the AWS Management Console tells you that you're not authorized to perform an action, then you must contact your administrator for assistance. Your administrator is the person that provided you with your sign-in credentials.

The following example error occurs when the `mateojackson` user tries to use the console to view details about a *widget* but does not have `rds:GetWidget` permissions.

```
User: arn:aws:iam::123456789012:user/mateojackson is not authorized to perform: rds:GetWidget on resource: my-example-widget
```

In this case, Mateo asks his administrator to update his policies to allow him to access the `my-example-widget` resource using the `rds:GetWidget` action.
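
For illustration only, the statement that the administrator adds might look like the following sketch. Here `rds:GetWidget` and `my-example-widget` are placeholders carried over from the example error message, not real Amazon RDS actions or resources, and the ARN format is assumed:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "rds:GetWidget",
            "Resource": "arn:aws:rds:us-west-2:123456789012:db:my-example-widget"
        }
    ]
}
```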

## I'm not authorized to perform iam:PassRole
<a name="security_iam_troubleshoot-passrole"></a>

If you receive an error that you're not authorized to perform the `iam:PassRole` action, then you must contact your administrator for assistance. Your administrator is the person that provided you with your sign-in credentials. Ask that person to update your policies to allow you to pass a role to Aurora.

Some AWS services allow you to pass an existing role to that service, instead of creating a new service role or service-linked role. To do this, you must have permissions to pass the role to the service.

The following example error occurs when a user named `marymajor` tries to use the console to perform an action in Aurora. However, the action requires the service to have permissions granted by a service role. Mary does not have permissions to pass the role to the service.

```
User: arn:aws:iam::123456789012:user/marymajor is not authorized to perform: iam:PassRole
```

In this case, Mary asks her administrator to update her policies to allow her to perform the `iam:PassRole` action.

## I want to allow people outside of my AWS account to access my Aurora resources
<a name="security_iam_troubleshoot-cross-account-access"></a>

You can create a role that users in other accounts or people outside of your organization can use to access your resources. You can specify who is trusted to assume the role. For services that support resource-based policies or access control lists (ACLs), you can use those policies to grant people access to your resources.

To learn more, consult the following:
+ To learn whether Aurora supports these features, see [How Amazon Aurora works with IAM](security_iam_service-with-iam.md).
+ To learn how to provide access to your resources across AWS accounts that you own, see [Providing access to an IAM user in another AWS account that you own](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_aws-accounts.html) in the *IAM User Guide*.
+ To learn how to provide access to your resources to third-party AWS accounts, see [Providing access to AWS accounts owned by third parties](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html) in the *IAM User Guide*.
+ To learn how to provide access through identity federation, see [Providing access to externally authenticated users (identity federation)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_federated-users.html) in the *IAM User Guide*.
+ To learn the difference between using roles and resource-based policies for cross-account access, see [How IAM roles differ from resource-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_compare-resource-policies.html) in the *IAM User Guide*.

# Logging and monitoring in Amazon Aurora
<a name="Overview.LoggingAndMonitoring"></a>

Monitoring is an important part of maintaining the reliability, availability, and performance of Amazon Aurora and your AWS solutions. You should collect monitoring data from all of the parts of your AWS solution so that you can more easily debug a multi-point failure if one occurs. AWS provides several tools for monitoring your Amazon Aurora resources and responding to potential incidents:

**Amazon CloudWatch Alarms**  
Using Amazon CloudWatch alarms, you watch a single metric over a time period that you specify. If the metric exceeds a given threshold, a notification is sent to an Amazon SNS topic or AWS Auto Scaling policy. CloudWatch alarms do not invoke actions simply because they are in a particular state. Rather, the state must have changed and been maintained for a specified number of periods.
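
As a sketch, the following builds the parameters for such an alarm with Boto3's `put_metric_alarm`. The metric choice (`DatabaseConnections` in the `AWS/RDS` namespace), the threshold, the alarm name, and the SNS topic ARN are placeholder assumptions:

```
def connection_alarm_params(cluster_id, topic_arn, threshold=100):
    """Build put_metric_alarm parameters for a sustained-connections alarm."""
    return {
        "AlarmName": f"{cluster_id}-high-connections",  # hypothetical name
        "Namespace": "AWS/RDS",
        "MetricName": "DatabaseConnections",
        "Dimensions": [{"Name": "DBClusterIdentifier", "Value": cluster_id}],
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 3,  # state must hold for 3 periods before the alarm fires
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],
    }

# Usage (requires AWS credentials):
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**connection_alarm_params(
#     "mydbcluster", "arn:aws:sns:us-east-1:123456789012:my-topic"))
```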

**AWS CloudTrail Logs**  
CloudTrail provides a record of actions taken by a user, role, or an AWS service in Amazon Aurora. CloudTrail captures all API calls for Amazon Aurora as events, including calls from the console and from code calls to Amazon RDS API operations. Using the information collected by CloudTrail, you can determine the request that was made to Amazon Aurora, the IP address from which the request was made, who made the request, when it was made, and additional details. For more information, see [Monitoring Amazon Aurora API calls in AWS CloudTrail](logging-using-cloudtrail.md).

**Enhanced Monitoring**  
Amazon Aurora provides metrics in real time for the operating system (OS) that your DB cluster runs on. You can view the metrics for your DB cluster using the console, or consume the Enhanced Monitoring JSON output from Amazon CloudWatch Logs in a monitoring system of your choice. For more information, see [Monitoring OS metrics with Enhanced Monitoring](USER_Monitoring.OS.md).

**Amazon RDS Performance Insights**  
Performance Insights expands on existing Amazon Aurora monitoring features to illustrate your database's performance and help you analyze any issues that affect it. With the Performance Insights dashboard, you can visualize the database load and filter the load by waits, SQL statements, hosts, or users. For more information, see [Monitoring DB load with Performance Insights on Amazon Aurora](USER_PerfInsights.md).

**Database Logs**  
You can view, download, and watch database logs using the AWS Management Console, AWS CLI, or RDS API. For more information, see [Monitoring Amazon Aurora log files](USER_LogAccess.md).

**Amazon Aurora Recommendations**  
Amazon Aurora provides automated recommendations for database resources. These recommendations provide best practice guidance by analyzing DB cluster configuration, usage, and performance data. For more information, see [Recommendations from Amazon Aurora](monitoring-recommendations.md).

**Amazon Aurora Event Notification**  
Amazon Aurora uses the Amazon Simple Notification Service (Amazon SNS) to provide notification when an Amazon Aurora event occurs. These notifications can be in any notification form supported by Amazon SNS for an AWS Region, such as an email, a text message, or a call to an HTTP endpoint. For more information, see [Working with Amazon RDS event notification](USER_Events.md).

**AWS Trusted Advisor**  
Trusted Advisor draws upon best practices learned from serving hundreds of thousands of AWS customers. Trusted Advisor inspects your AWS environment and then makes recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. All AWS customers have access to five Trusted Advisor checks. Customers with a Business or Enterprise support plan can view all Trusted Advisor checks.   
Trusted Advisor has the following Amazon Aurora-related checks:  
+  Amazon Aurora Idle DB Instances
+  Amazon Aurora Security Group Access Risk
+  Amazon Aurora Backups
+  Amazon Aurora Multi-AZ
+ Aurora DB Instance Accessibility
For more information on these checks, see [Trusted Advisor best practices (checks)](https://aws.amazon.com/premiumsupport/trustedadvisor/best-practices/). 

**Database activity streams**  
Database activity streams can protect your databases from internal threats by controlling DBA access to the streams. The collection, transmission, storage, and subsequent processing of the database activity stream is thus beyond the access of the DBAs who manage the database. Database activity streams can provide safeguards for your database and help you meet compliance and regulatory requirements. For more information, see [Monitoring Amazon Aurora with Database Activity Streams](DBActivityStreams.md).
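
For example, you might start an activity stream on a cluster with the AWS CLI. This is a sketch; the cluster ARN and KMS key ID below are placeholders:

```
# Start an asynchronous database activity stream on a cluster.
# The ARN, account ID, and KMS key ID are placeholders.
aws rds start-activity-stream \
    --resource-arn arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster \
    --mode async \
    --kms-key-id my-kms-key-id
```

In `async` mode, the database prioritizes workload over stream durability; `sync` mode prioritizes the accuracy of the stream.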

For more information about monitoring Aurora, see [Monitoring metrics in an Amazon Aurora cluster](MonitoringAurora.md).

# Compliance validation for Amazon Aurora
<a name="RDS-compliance"></a>

Third-party auditors assess the security and compliance of Amazon Aurora as part of multiple AWS compliance programs. These include SOC, PCI, FedRAMP, HIPAA, and others. 

For a list of AWS services in scope of specific compliance programs, see [AWS services in scope by compliance program](https://aws.amazon.com/compliance/services-in-scope/). For general information, see [AWS compliance programs](https://aws.amazon.com/compliance/programs/).

You can download third-party audit reports using AWS Artifact. For more information, see [Downloading reports in AWS Artifact](https://docs.aws.amazon.com/artifact/latest/ug/downloading-documents.html). 

Your compliance responsibility when using Amazon Aurora is determined by the sensitivity of your data, your organization's compliance objectives, and applicable laws and regulations. AWS provides the following resources to help with compliance: 
+ [Security and compliance quick start guides](https://aws.amazon.com/quickstart/?awsf.quickstart-homepage-filter=categories%23security-identity-compliance) – These deployment guides discuss architectural considerations and provide steps for deploying security- and compliance-focused baseline environments on AWS.
+ [Architecting for HIPAA Security and Compliance on Amazon Web Services](https://docs.aws.amazon.com/pdfs/whitepapers/latest/architecting-hipaa-security-and-compliance-on-aws/architecting-hipaa-security-and-compliance-on-aws.pdf) – This whitepaper describes how companies can use AWS to create HIPAA-compliant applications.
+ [AWS compliance resources](https://aws.amazon.com/compliance/resources/) – This collection of workbooks and guides might apply to your industry and location.
+ [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html) – This AWS service assesses how well your resource configurations comply with internal practices, industry guidelines, and regulations.
+ [AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) – This AWS service provides a comprehensive view of your security state within AWS. Security Hub CSPM uses security controls to evaluate your AWS resources and to check your compliance against security industry standards and best practices. For a list of supported services and controls, see [Security Hub CSPM controls reference](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-controls-reference.html).

# Resilience in Amazon Aurora
<a name="disaster-recovery-resiliency"></a>

The AWS global infrastructure is built around AWS Regions and Availability Zones. AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low-latency, high-throughput, and highly redundant networking. With Availability Zones, you can design and operate applications and databases that automatically fail over between Availability Zones without interruption. Availability Zones are more highly available, fault tolerant, and scalable than traditional single or multiple data center infrastructures. 

For more information about AWS Regions and Availability Zones, see [AWS global infrastructure](https://aws.amazon.com/about-aws/global-infrastructure/).

In addition to the AWS global infrastructure, Aurora offers features to help support your data resiliency and backup needs.

## Backup and restore
<a name="disaster-recovery-resiliency.backup-restore"></a>

Aurora backs up your cluster volume automatically and retains restore data for the length of the *backup retention period*. Aurora backups are continuous and incremental so you can quickly restore to any point within the backup retention period. No performance impact or interruption of database service occurs as backup data is being written. You can specify a backup retention period, from 1 to 35 days, when you create or modify a DB cluster.
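
For example, you might set the retention period when modifying an existing cluster with the AWS CLI; the cluster identifier below is a placeholder:

```
# Set the backup retention period to 14 days on an existing cluster.
# The cluster identifier is a placeholder.
aws rds modify-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --backup-retention-period 14 \
    --apply-immediately
```

Without `--apply-immediately`, the change is applied during the next maintenance window.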

If you want to retain a backup beyond the backup retention period, you can also take a snapshot of the data in your cluster volume. Aurora retains incremental restore data for the entire backup retention period. Thus, you need to create a snapshot only for data that you want to retain beyond the backup retention period. You can create a new DB cluster from the snapshot.

You can recover your data by creating a new Aurora DB cluster from the backup data that Aurora retains, or from a DB cluster snapshot that you have saved. You can quickly create a new copy of a DB cluster from backup data to any point in time during your backup retention period. The continuous and incremental nature of Aurora backups during the backup retention period means you don't need to take frequent snapshots of your data to improve restore times.
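
A point-in-time restore might look like the following AWS CLI sketch; the cluster identifiers and timestamp are placeholders:

```
# Restore a new cluster from the source cluster's continuous backup
# to a specific point in time within the retention period.
aws rds restore-db-cluster-to-point-in-time \
    --source-db-cluster-identifier my-aurora-cluster \
    --db-cluster-identifier my-aurora-cluster-restored \
    --restore-to-time 2024-05-01T08:30:00Z
```

You can use `--use-latest-restorable-time` instead of `--restore-to-time` to restore to the most recent restorable point. The restore creates only the cluster; add a DB instance to it with `create-db-instance` before connecting.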

For more information, see [Backing up and restoring an Amazon Aurora DB cluster](BackupRestoreAurora.md).

## Replication
<a name="disaster-recovery-resiliency.replication"></a>

Aurora Replicas are independent endpoints in an Aurora DB cluster, best used for scaling read operations and increasing availability. Up to 15 Aurora Replicas can be distributed across the Availability Zones that a DB cluster spans within an AWS Region. The DB cluster volume is made up of multiple copies of the data for the DB cluster. However, the data in the cluster volume is represented as a single, logical volume to the primary DB instance and to Aurora Replicas in the DB cluster. If the primary DB instance fails, an Aurora Replica is promoted to be the primary DB instance.
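
Because an Aurora Replica is simply a reader DB instance in the cluster, you can add one with `create-db-instance`. This is a sketch; the identifiers and instance class are placeholders:

```
# Add an Aurora Replica by creating a DB instance in an existing cluster.
# The cluster identifier, instance identifier, and instance class are placeholders.
aws rds create-db-instance \
    --db-cluster-identifier my-aurora-cluster \
    --db-instance-identifier my-aurora-replica-1 \
    --db-instance-class db.r6g.large \
    --engine aurora-mysql
```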

Aurora also supports replication options that are specific to Aurora MySQL and Aurora PostgreSQL.

For more information, see [Replication with Amazon Aurora](Aurora.Replication.md).

## Failover
<a name="disaster-recovery-resiliency.failover"></a>

Aurora stores copies of the data in a DB cluster across multiple Availability Zones in a single AWS Region. This storage occurs regardless of whether the DB instances in the DB cluster span multiple Availability Zones. When you create Aurora Replicas across Availability Zones, Aurora automatically provisions and maintains them synchronously. The primary DB instance is synchronously replicated across Availability Zones to Aurora Replicas to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups. Running a DB cluster with high availability can enhance availability during planned system maintenance, and help protect your databases against failure and Availability Zone disruption.

For more information, see [High availability for Amazon Aurora](Concepts.AuroraHighAvailability.md).

# Infrastructure security in Amazon Aurora
<a name="infrastructure-security"></a>

As a managed service, Amazon Relational Database Service is protected by AWS global network security. For information about AWS security services and how AWS protects infrastructure, see [AWS Cloud Security](https://aws.amazon.com/security/). To design your AWS environment using the best practices for infrastructure security, see [Infrastructure Protection](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/infrastructure-protection.html) in *Security Pillar AWS Well-Architected Framework*.

You use AWS published API calls to access Amazon RDS through the network. Clients must support the following:
+ Transport Layer Security (TLS). We require TLS 1.2 and recommend TLS 1.3.
+ Cipher suites with perfect forward secrecy (PFS) such as DHE (Ephemeral Diffie-Hellman) or ECDHE (Elliptic Curve Ephemeral Diffie-Hellman). Most modern systems such as Java 7 and later support these modes.
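
One quick way to verify that a client environment can negotiate TLS 1.2 with a regional RDS API endpoint is with OpenSSL. This sketch assumes the us-east-1 endpoint and requires outbound network access:

```
# Attempt a TLS 1.2 handshake with the regional RDS API endpoint.
# A successful handshake prints the negotiated protocol and cipher suite.
openssl s_client -connect rds.us-east-1.amazonaws.com:443 -tls1_2 < /dev/null
```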

In addition, Aurora offers features to help support infrastructure security.

## Security groups
<a name="infrastructure-security.security-groups"></a>

Security groups control the access that traffic has in and out of a DB cluster. By default, network access is turned off to a DB cluster. You can specify rules in a security group that allow access from an IP address range, port, or security group. After ingress rules are configured, the same rules apply to all DB clusters that are associated with that security group.

For more information, see [Controlling access with security groups](Overview.RDSSecurityGroups.md).

## Public accessibility
<a name="infrastructure-security.publicly-accessible"></a>

When you launch a DB instance inside a virtual private cloud (VPC) based on the Amazon VPC service, you can turn public accessibility on or off for that DB instance. The *Public accessibility* parameter designates whether the DB instance has a DNS name that resolves to a public IP address. You can modify a DB instance to turn public accessibility on or off by changing this parameter.
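
For example, you might turn off public accessibility with the AWS CLI; the instance identifier below is a placeholder:

```
# Turn off public accessibility for an existing DB instance.
# The instance identifier is a placeholder.
aws rds modify-db-instance \
    --db-instance-identifier my-aurora-instance \
    --no-publicly-accessible \
    --apply-immediately
```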

For more information, see [Hiding a DB cluster in a VPC from the internet](USER_VPC.WorkingWithRDSInstanceinaVPC.md#USER_VPC.Hiding).

**Note**  
If your DB instance is in a VPC but isn't publicly accessible, you can also use an AWS Site-to-Site VPN connection or an AWS Direct Connect connection to access it from a private network. For more information, see [Internetwork traffic privacy](inter-network-traffic-privacy.md).

# Amazon RDS API and interface VPC endpoints (AWS PrivateLink)
<a name="vpc-interface-endpoints"></a>

You can establish a private connection between your VPC and Amazon RDS API endpoints by creating an *interface VPC endpoint*. Interface endpoints are powered by [AWS PrivateLink](https://aws.amazon.com/privatelink). 

AWS PrivateLink enables you to privately access Amazon RDS API operations without an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. DB instances in your VPC don't need public IP addresses to communicate with Amazon RDS API endpoints to launch, modify, or terminate DB instances and DB clusters. Your DB instances also don't need public IP addresses to use any of the available RDS API operations. Traffic between your VPC and Amazon RDS doesn't leave the Amazon network. 

Each interface endpoint is represented by one or more elastic network interfaces in your subnets. For more information on elastic network interfaces, see [Elastic network interfaces](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html) in the *Amazon EC2 User Guide.* 

For more information about VPC endpoints, see [Interface VPC endpoints (AWS PrivateLink)](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html) in the *Amazon VPC User Guide*. For more information about RDS API operations, see [Amazon RDS API Reference](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/).

You don't need an interface VPC endpoint to connect to a DB cluster. For more information, see [Scenarios for accessing a DB cluster in a VPC](USER_VPC.Scenarios.md).

## Considerations for VPC endpoints
<a name="vpc-endpoint-considerations"></a>

Before you set up an interface VPC endpoint for Amazon RDS API endpoints, ensure that you review [Interface endpoint properties and limitations](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#vpce-interface-limitations) in the *Amazon VPC User Guide*. 

All RDS API operations relevant to managing Amazon Aurora resources are available from your VPC using AWS PrivateLink.

VPC endpoint policies are supported for RDS API endpoints. By default, full access to RDS API operations is allowed through the endpoint. For more information, see [Controlling access to services with VPC endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-access.html) in the *Amazon VPC User Guide*.

## Availability
<a name="rds-and-vpc-interface-endpoints-availability"></a>

Amazon RDS API currently supports VPC endpoints in the following AWS Regions:
+ US East (Ohio)
+ US East (N. Virginia)
+ US West (N. California)
+ US West (Oregon)
+ Africa (Cape Town)
+ Asia Pacific (Hong Kong)
+ Asia Pacific (Mumbai)
+ Asia Pacific (New Zealand)
+ Asia Pacific (Osaka)
+ Asia Pacific (Seoul)
+ Asia Pacific (Singapore)
+ Asia Pacific (Sydney)
+ Asia Pacific (Taipei)
+ Asia Pacific (Thailand)
+ Asia Pacific (Tokyo)
+ Canada (Central)
+ Canada West (Calgary)
+ China (Beijing)
+ China (Ningxia)
+ Europe (Frankfurt)
+ Europe (Zurich)
+ Europe (Ireland)
+ Europe (London)
+ Europe (Paris)
+ Europe (Stockholm)
+ Europe (Milan)
+ Israel (Tel Aviv)
+ Mexico (Central)
+ Middle East (Bahrain)
+ South America (São Paulo)
+ AWS GovCloud (US-East)
+ AWS GovCloud (US-West)

## Creating an interface VPC endpoint for Amazon RDS API
<a name="vpc-endpoint-create"></a>

You can create a VPC endpoint for the Amazon RDS API using either the Amazon VPC console or the AWS Command Line Interface (AWS CLI). For more information, see [Creating an interface endpoint](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#create-interface-endpoint) in the *Amazon VPC User Guide*.

Create a VPC endpoint for Amazon RDS API using the service name `com.amazonaws.region.rds`.
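
With the AWS CLI, creating the endpoint might look like the following sketch for us-east-1; the VPC, subnet, and security group IDs are placeholders:

```
# Create an interface VPC endpoint for the RDS API in us-east-1.
# The VPC, subnet, and security group IDs are placeholders.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.rds \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0 \
    --private-dns-enabled
```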

Excluding AWS Regions in China, if you enable private DNS for the endpoint, you can make API requests to Amazon RDS with the VPC endpoint using its default DNS name for the AWS Region, for example `rds.us-east-1.amazonaws.com`. For the China (Beijing) and China (Ningxia) AWS Regions, you can make API requests with the VPC endpoint using `rds-api.cn-north-1.amazonaws.com.cn` and `rds-api.cn-northwest-1.amazonaws.com.cn`, respectively. 

For more information, see [Accessing a service through an interface endpoint](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html#access-service-though-endpoint) in the *Amazon VPC User Guide*.

## Creating a VPC endpoint policy for Amazon RDS API
<a name="vpc-endpoint-policy"></a>

You can attach an endpoint policy to your VPC endpoint that controls access to Amazon RDS API. The policy specifies the following information:
+ The principal that can perform actions.
+ The actions that can be performed.
+ The resources on which actions can be performed.

For more information, see [Controlling access to services with VPC endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-access.html) in the *Amazon VPC User Guide*. 

**Example: VPC endpoint policy for Amazon RDS API actions**  
The following is an example of an endpoint policy for Amazon RDS API. When attached to an endpoint, this policy grants access to the listed Amazon RDS API actions for all principals on all resources.

```
{
   "Statement":[
      {
         "Principal":"*",
         "Effect":"Allow",
         "Action":[
            "rds:CreateDBInstance",
            "rds:ModifyDBInstance",
            "rds:CreateDBSnapshot"
         ],
         "Resource":"*"
      }
   ]
}
```

**Example: VPC endpoint policy that denies all access from a specified AWS account**  
The following VPC endpoint policy denies AWS account `123456789012` all access to resources using the endpoint. The policy allows all actions from other accounts.

```
{
  "Statement": [
    {
      "Action": "*",
      "Effect": "Allow",
      "Resource": "*",
      "Principal": "*"
    },
    {
      "Action": "*",
      "Effect": "Deny",
      "Resource": "*",
      "Principal": { "AWS": [ "123456789012" ] }
    }
  ]
}
```

# Security best practices for Amazon Aurora
<a name="CHAP_BestPractices.Security"></a>

Use AWS Identity and Access Management (IAM) accounts to control access to Amazon RDS API operations, especially operations that create, modify, or delete Amazon Aurora resources. Such resources include DB clusters, security groups, and parameter groups. Also use IAM to control actions that perform common administrative actions such as backing up and restoring DB clusters. 
+ Create an individual user for each person who manages Amazon Aurora resources, including yourself. Don't use AWS root credentials to manage Amazon Aurora resources.
+ Grant each user the minimum set of permissions required to perform their duties.
+ Use IAM groups to effectively manage permissions for multiple users.
+ Rotate your IAM credentials regularly.
+ Configure AWS Secrets Manager to automatically rotate the secrets for Amazon Aurora. For more information, see [Rotating your AWS Secrets Manager secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html) in the *AWS Secrets Manager User Guide*. You can also retrieve the credential from AWS Secrets Manager programmatically. For more information, see [Retrieving the secret value](https://docs.aws.amazon.com/secretsmanager/latest/userguide/manage_retrieve-secret.html) in the *AWS Secrets Manager User Guide*. 
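
For example, an application or administrator might retrieve a stored database credential with the AWS CLI; the secret name below is a placeholder:

```
# Retrieve a database credential stored in Secrets Manager.
# The secret name is a placeholder.
aws secretsmanager get-secret-value \
    --secret-id my-aurora-master-secret \
    --query SecretString \
    --output text
```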

For more information about Amazon Aurora security, see [Security in Amazon Aurora](UsingWithRDS.md). For more information about IAM, see [AWS Identity and Access Management](https://docs.aws.amazon.com/IAM/latest/UserGuide/Welcome.html). For information on IAM best practices, see [IAM best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/IAMBestPractices.html). 

AWS Security Hub CSPM uses security controls to evaluate resource configurations and security standards to help you comply with various compliance frameworks. For more information about using Security Hub CSPM to evaluate RDS resources, see [Amazon Relational Database Service controls](https://docs.aws.amazon.com/securityhub/latest/userguide/rds-controls.html) in the AWS Security Hub User Guide.

You can monitor your usage of RDS as it relates to security best practices by using Security Hub CSPM. For more information, see [What is AWS Security Hub CSPM?](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html). 

Use the AWS Management Console, the AWS CLI, or the RDS API to change the password for your master user. If you use another tool, such as a SQL client, to change the master user password, it might result in privileges being revoked for the user unintentionally.
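
For example, changing the master user password through the RDS API might look like the following AWS CLI sketch; the cluster identifier and password are placeholders:

```
# Change the master user password through the RDS API rather than a SQL client,
# so that RDS stays aware of the change. Identifier and password are placeholders.
aws rds modify-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --master-user-password 'new-secure-password' \
    --apply-immediately
```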

Amazon GuardDuty is a continuous security monitoring service that analyzes and processes various data sources, including Amazon RDS login activity. It uses threat intelligence feeds and machine learning to identify unexpected, potentially unauthorized, suspicious login behavior, and malicious activity within your AWS environment.

When Amazon GuardDuty RDS Protection detects a potentially suspicious or anomalous login attempt that indicates a threat to your database, GuardDuty generates a new finding with details about the potentially compromised database. For more information, see [Monitoring threats with Amazon GuardDuty RDS Protection for Amazon Aurora](guard-duty-rds-protection.md).

# Controlling access with security groups
<a name="Overview.RDSSecurityGroups"></a>

VPC security groups control the access that traffic has in and out of a DB cluster. By default, network access is turned off for a DB cluster. You can specify rules in a security group that allow access from an IP address range, port, or security group. After ingress rules are configured, the same rules apply to all DB clusters that are associated with that security group. You can specify up to 20 rules in a security group.

## Overview of VPC security groups
<a name="Overview.RDSSecurityGroups.VPCSec"></a>

Each VPC security group rule makes it possible for a specific source to access a DB cluster in a VPC that is associated with that VPC security group. The source can be a range of addresses (for example, 203.0.113.0/24), or another VPC security group. By specifying a VPC security group as the source, you allow incoming traffic from all instances (typically application servers) that use the source VPC security group. VPC security groups can have rules that govern both inbound and outbound traffic. However, the outbound traffic rules typically don't apply to DB clusters. Outbound traffic rules apply only if the DB cluster acts as a client. You must use the [Amazon EC2 API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/Welcome.html) or the **Security Group** option on the VPC console to create VPC security groups. 

When you create rules for your VPC security group that allow access to the clusters in your VPC, you must specify a port for each range of addresses that the rule allows access for. For example, if you want to turn on Secure Shell (SSH) access for instances in the VPC, create a rule allowing access to TCP port 22 for the specified range of addresses.

You can configure multiple VPC security groups that allow access to different ports for different instances in your VPC. For example, you can create a VPC security group that allows access to TCP port 80 for web servers in your VPC. You can then create another VPC security group that allows access to TCP port 3306 for Aurora MySQL DB instances in your VPC.

**Note**  
In an Aurora DB cluster, the VPC security group associated with the DB cluster is also associated with all of the DB instances in the DB cluster. If you change the VPC security group for the DB cluster or for a DB instance, the change is applied automatically to all of the DB instances in the DB cluster.

For more information on VPC security groups, see [Security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) in the *Amazon Virtual Private Cloud User Guide*. 

**Note**  
If your DB cluster is in a VPC but isn't publicly accessible, you can also use an AWS Site-to-Site VPN connection or an AWS Direct Connect connection to access it from a private network. For more information, see [Internetwork traffic privacy](inter-network-traffic-privacy.md).

## Security group scenario
<a name="Overview.RDSSecurityGroups.Scenarios"></a>

A common use of a DB cluster in a VPC is to share data with an application server running in an Amazon EC2 instance in the same VPC, which is accessed by a client application outside the VPC. For this scenario, you use the RDS and VPC pages on the AWS Management Console or the RDS and EC2 API operations to create the necessary instances and security groups: 

1. Create a VPC security group (for example, `sg-0123ec2example`) and define inbound rules that use the IP addresses of the client application as the source. This security group allows your client application to connect to EC2 instances in a VPC that uses this security group.

1. Create an EC2 instance for the application and add the EC2 instance to the VPC security group (`sg-0123ec2example`) that you created in the previous step.

1. Create a second VPC security group (for example, `sg-6789rdsexample`) and create a new rule by specifying the VPC security group that you created in step 1 (`sg-0123ec2example`) as the source.

1. Create a new DB cluster and add the DB cluster to the VPC security group (`sg-6789rdsexample`) that you created in the previous step. When you create the DB cluster, use the same port number as the one specified for the VPC security group (`sg-6789rdsexample`) rule that you created in step 3.
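
With the AWS CLI, steps 1 and 3 above might look like the following sketch. The VPC ID and client CIDR are placeholders, and each `create-security-group` call returns the actual group ID to use in the corresponding `authorize-security-group-ingress` call (the doc's example IDs are shown here):

```
# Step 1: security group for the application's EC2 instances,
# allowing HTTPS from the client application's address range.
aws ec2 create-security-group --group-name app-sg \
    --description "Application servers" --vpc-id vpc-0123456789abcdef0
aws ec2 authorize-security-group-ingress --group-id sg-0123ec2example \
    --protocol tcp --port 443 --cidr 203.0.113.0/24

# Step 3: security group for the DB cluster, allowing MySQL traffic
# only from instances in the application security group.
aws ec2 create-security-group --group-name db-sg \
    --description "Aurora DB cluster" --vpc-id vpc-0123456789abcdef0
aws ec2 authorize-security-group-ingress --group-id sg-6789rdsexample \
    --protocol tcp --port 3306 --source-group sg-0123ec2example
```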

The following diagram shows this scenario.

![\[DB cluster and EC2 instance in a VPC\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/con-VPC-sec-grp-aurora.png)


For detailed instructions about configuring a VPC for this scenario, see [Tutorial: Create a VPC for use with a DB cluster (IPv4 only)](CHAP_Tutorials.WebServerDB.CreateVPC.md). For more information about using a VPC, see [Amazon VPC and Amazon Aurora](USER_VPC.md).

## Creating a VPC security group
<a name="Overview.RDSSecurityGroups.Create"></a>

You can create a VPC security group for a DB instance by using the VPC console. For information about creating a security group, see [Provide access to the DB cluster in the VPC by creating a security group](CHAP_SettingUp_Aurora.md#CHAP_SettingUp_Aurora.SecurityGroup) and [Security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) in the *Amazon Virtual Private Cloud User Guide*. 

## Associating a security group with a DB cluster
<a name="Overview.RDSSecurityGroups.AssociateWithCluster"></a>

You can associate a security group with a DB cluster by using **Modify cluster** on the RDS console, the `ModifyDBCluster` Amazon RDS API, or the `modify-db-cluster` AWS CLI command.

The following CLI example associates a specific VPC security group with a DB cluster. Any security groups not included in the list are disassociated from the cluster.

```
aws rds modify-db-cluster --db-cluster-identifier dbName --vpc-security-group-ids sg-ID
```

For information about modifying a DB cluster, see [Modifying an Amazon Aurora DB cluster](Aurora.Modifying.md).

# Master user account privileges
<a name="UsingWithRDS.MasterAccounts"></a>

When you create a new DB cluster, the default master user that you use gets certain privileges for that DB cluster. You can't change the master user name after the DB cluster is created.

**Important**  
We strongly recommend that you do not use the master user directly in your applications. Instead, adhere to the best practice of using a database user created with the minimal privileges required for your application.

**Note**  
If you accidentally delete the permissions for the master user, you can restore them by modifying the DB cluster and setting a new master user password. For more information about modifying a DB cluster, see [Modifying an Amazon Aurora DB cluster](Aurora.Modifying.md).

The following table shows the privileges and database roles the master user gets for each of the database engines. 

<a name="master-user-account-privileges"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.MasterAccounts.html)

# Using service-linked roles for Amazon Aurora
<a name="UsingWithRDS.IAM.ServiceLinkedRoles"></a>

Amazon Aurora uses AWS Identity and Access Management (IAM) [service-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-service-linked-role). A service-linked role is a unique type of IAM role that is linked directly to Amazon Aurora. Service-linked roles are predefined by Amazon Aurora and include all the permissions that the service requires to call other AWS services on your behalf. 

A service-linked role makes using Amazon Aurora easier because you don't have to manually add the necessary permissions. Amazon Aurora defines the permissions of its service-linked roles, and unless defined otherwise, only Amazon Aurora can assume its roles. The defined permissions include the trust policy and the permissions policy, and that permissions policy cannot be attached to any other IAM entity.

You can delete the roles only after first deleting their related resources. This protects your Amazon Aurora resources because you can't inadvertently remove permission to access the resources.

For information about other services that support service-linked roles, see [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) and look for the services that have **Yes** in the **Service-Linked Role** column. Choose a **Yes** with a link to view the service-linked role documentation for that service.

## Service-linked role permissions for Amazon Aurora
<a name="service-linked-role-permissions"></a>

Amazon Aurora uses the service-linked role named AWSServiceRoleForRDS to allow Amazon RDS to call AWS services on behalf of your DB clusters.

The AWSServiceRoleForRDS service-linked role trusts the following services to assume the role:
+ `rds.amazonaws.com`

This service-linked role has a permissions policy attached to it called `AmazonRDSServiceRolePolicy` that grants it permissions to operate in your account.

For more information about this policy, including the JSON policy document, see [AmazonRDSServiceRolePolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSServiceRolePolicy.html) in the *AWS Managed Policy Reference Guide*.

**Note**  
You must configure permissions to allow an IAM entity (such as a user, group, or role) to create, edit, or delete a service-linked role. If you encounter the following error message:  
**Unable to create the resource. Verify that you have permission to create service linked role. Otherwise wait and try again later.**  
Make sure that you have the following permissions enabled:  

```
{
    "Action": "iam:CreateServiceLinkedRole",
    "Effect": "Allow",
    "Resource": "arn:aws:iam::*:role/aws-service-role/rds.amazonaws.com/AWSServiceRoleForRDS",
    "Condition": {
        "StringLike": {
            "iam:AWSServiceName": "rds.amazonaws.com"
        }
    }
}
```
 For more information, see [Service-linked role permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#service-linked-role-permissions) in the *IAM User Guide*.

### Creating a service-linked role for Amazon Aurora
<a name="create-service-linked-role"></a>

You don't need to manually create a service-linked role. When you create a DB cluster, Amazon Aurora creates the service-linked role for you. 

**Important**  
If you were using the Amazon Aurora service before December 1, 2017, when it began supporting service-linked roles, then Amazon Aurora created the AWSServiceRoleForRDS role in your account. To learn more, see [A new role appeared in my AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_roles.html#troubleshoot_roles_new-role-appeared).

If you delete this service-linked role, and then need to create it again, you can use the same process to recreate the role in your account. When you create a DB cluster, Amazon Aurora creates the service-linked role for you again.

### Editing a service-linked role for Amazon Aurora
<a name="edit-service-linked-role"></a>

Amazon Aurora does not allow you to edit the AWSServiceRoleForRDS service-linked role. After you create a service-linked role, you cannot change the name of the role because various entities might reference the role. However, you can edit the description of the role using IAM. For more information, see [Editing a service-linked role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#edit-service-linked-role) in the *IAM User Guide*.

### Deleting a service-linked role for Amazon Aurora
<a name="delete-service-linked-role"></a>

If you no longer need to use a feature or service that requires a service-linked role, we recommend that you delete that role. That way you don't have an unused entity that is not actively monitored or maintained. However, you must delete all of your DB clusters before you can delete the service-linked role.

#### Cleaning up a service-linked role
<a name="service-linked-role-review-before-delete"></a>

Before you can use IAM to delete a service-linked role, you must first confirm that the role has no active sessions and remove any resources used by the role.

**To check whether the service-linked role has an active session in the IAM console**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane of the IAM console, choose **Roles**. Then choose the name (not the check box) of the AWSServiceRoleForRDS role.

1. On the **Summary** page for the chosen role, choose the **Last Accessed** tab.

1. On the **Last Accessed** tab, review recent activity for the service-linked role.
**Note**  
If you are unsure whether Amazon Aurora is using the AWSServiceRoleForRDS role, you can try to delete the role. If the service is using the role, then the deletion fails and you can view the AWS Regions where the role is being used. If the role is being used, then you must wait for the session to end before you can delete the role. You cannot revoke the session for a service-linked role. 

If you want to remove the AWSServiceRoleForRDS role, you must first delete *all* of your DB clusters.

##### Deleting all of your clusters
<a name="delete-service-linked-role.delete-rds-clusters"></a>

Use one of the following procedures to delete a single cluster. Repeat the procedure for each of your clusters.

**To delete a cluster (console)**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the **Databases** list, choose the cluster that you want to delete.

1. For **Cluster Actions**, choose **Delete**.

1. Choose **Delete**.

**To delete a cluster (CLI)**  
See `[delete-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/delete-db-cluster.html)` in the *AWS CLI Command Reference*.

**To delete a cluster (API)**  
See `[DeleteDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DeleteDBCluster.html)` in the *Amazon RDS API Reference*.

You can use the IAM console, the IAM CLI, or the IAM API to delete the AWSServiceRoleForRDS service-linked role. For more information, see [Deleting a service-linked role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#delete-service-linked-role) in the *IAM User Guide*.
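As a hedged sketch of the CLI path (all of your DB clusters must already be deleted, and service-linked role deletion is asynchronous; the deletion task ID placeholder below must be replaced with the value returned by the first command):

```shell
# Request deletion of the service-linked role; returns a deletion task ID.
aws iam delete-service-linked-role --role-name AWSServiceRoleForRDS

# Check the status of the asynchronous deletion, substituting the task ID
# returned by the previous command for the placeholder below.
aws iam get-service-linked-role-deletion-status \
    --deletion-task-id <deletion-task-id-from-previous-command>
```

If Amazon Aurora is still using the role, the deletion task fails and reports the AWS Regions where the role is in use.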

## Service-linked role permissions for Amazon RDS Beta
<a name="slr-permissions-rdsbeta"></a>

Amazon Aurora uses the service-linked role named `AWSServiceRoleForRDSBeta` to allow Amazon Aurora to call AWS services on behalf of your RDS DB resources.

The AWSServiceRoleForRDSBeta service-linked role trusts the following services to assume the role:
+ `rds.amazonaws.com`

This service-linked role has a permissions policy attached to it called `AmazonRDSBetaServiceRolePolicy` that grants it permissions to operate in your account. For more information, see [AWS managed policy: AmazonRDSBetaServiceRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSBetaServiceRolePolicy).

**Note**  
You must configure permissions to allow an IAM entity (such as a user, group, or role) to create, edit, or delete a service-linked role. If you encounter the following error message:  
**Unable to create the resource. Verify that you have permission to create service linked role. Otherwise wait and try again later.**  
Then make sure that you have the following permissions enabled:  

```
{
    "Action": "iam:CreateServiceLinkedRole",
    "Effect": "Allow",
    "Resource": "arn:aws:iam::*:role/aws-service-role/custom.rds.amazonaws.com/AmazonRDSBetaServiceRolePolicy",
    "Condition": {
        "StringLike": {
            "iam:AWSServiceName":"custom.rds.amazonaws.com"
        }
    }
}
```
 For more information, see [Service-linked role permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#service-linked-role-permissions) in the *IAM User Guide*.

## Service-linked role permissions for Amazon RDS Preview
<a name="slr-permissions-rdspreview"></a>

Amazon Aurora uses the service-linked role named `AWSServiceRoleForRDSPreview` to allow Amazon Aurora to call AWS services on behalf of your RDS DB resources.

The AWSServiceRoleForRDSPreview service-linked role trusts the following services to assume the role:
+ `rds.amazonaws.com`

This service-linked role has a permissions policy attached to it called `AmazonRDSPreviewServiceRolePolicy` that grants it permissions to operate in your account. For more information, see [AWS managed policy: AmazonRDSPreviewServiceRolePolicy](rds-security-iam-awsmanpol.md#rds-security-iam-awsmanpol-AmazonRDSPreviewServiceRolePolicy).

**Note**  
You must configure permissions to allow an IAM entity (such as a user, group, or role) to create, edit, or delete a service-linked role. If you encounter the following error message:  
**Unable to create the resource. Verify that you have permission to create service linked role. Otherwise wait and try again later.**  
Then make sure that you have the following permissions enabled:  

```
{
    "Action": "iam:CreateServiceLinkedRole",
    "Effect": "Allow",
    "Resource": "arn:aws:iam::*:role/aws-service-role/custom.rds.amazonaws.com/AmazonRDSPreviewServiceRolePolicy",
    "Condition": {
        "StringLike": {
            "iam:AWSServiceName":"custom.rds.amazonaws.com"
        }
    }
}
```
 For more information, see [Service-linked role permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#service-linked-role-permissions) in the *IAM User Guide*.

# Amazon VPC and Amazon Aurora
<a name="USER_VPC"></a>

Amazon Virtual Private Cloud (Amazon VPC) makes it possible for you to launch AWS resources, such as Aurora DB clusters, into a virtual private cloud (VPC). 

When you use a VPC, you have control over your virtual networking environment. You can choose your own IP address range, create subnets, and configure routing and access control lists. There is no additional cost to run your DB cluster in a VPC. 

Accounts have a default VPC. All new DB clusters are created in the default VPC unless you specify otherwise.

**Topics**
+ [Working with a DB cluster in a VPC](USER_VPC.WorkingWithRDSInstanceinaVPC.md)
+ [Scenarios for accessing a DB cluster in a VPC](USER_VPC.Scenarios.md)
+ [Tutorial: Create a VPC for use with a DB cluster (IPv4 only)](CHAP_Tutorials.WebServerDB.CreateVPC.md)
+ [Tutorial: Create a VPC for use with a DB cluster (dual-stack mode)](CHAP_Tutorials.CreateVPCDualStack.md)

Following, you can find a discussion about VPC functionality relevant to Amazon Aurora DB clusters. For more information about Amazon VPC, see [Amazon VPC Getting Started Guide](https://docs.aws.amazon.com/AmazonVPC/latest/GettingStartedGuide/) and [Amazon VPC User Guide](https://docs.aws.amazon.com/vpc/latest/userguide/).

# Working with a DB cluster in a VPC
<a name="USER_VPC.WorkingWithRDSInstanceinaVPC"></a>

Your DB cluster is in a virtual private cloud (VPC). A VPC is a virtual network that is logically isolated from other virtual networks in the AWS Cloud. Amazon VPC makes it possible for you to launch AWS resources, such as an Amazon Aurora DB cluster or Amazon EC2 instance, into a VPC. The VPC can either be a default VPC that comes with your account or one that you create. All VPCs are associated with your AWS account. 

Your default VPC has three subnets that you can use to isolate resources inside the VPC. The default VPC also has an internet gateway that can be used to provide access to resources inside the VPC from outside the VPC. 

For a list of scenarios involving Amazon Aurora DB clusters in a VPC, see [Scenarios for accessing a DB cluster in a VPC](USER_VPC.Scenarios.md). 

**Topics**
+ [Working with a DB cluster in a VPC](#Overview.RDSVPC.Create)
+ [VPC encryption control](#USER_VPC.EncryptionControl)
+ [Working with DB subnet groups](#USER_VPC.Subnets)
+ [Shared subnets](#USER_VPC.Shared_subnets)
+ [Amazon Aurora IP addressing](#USER_VPC.IP_addressing)
+ [Hiding a DB cluster in a VPC from the internet](#USER_VPC.Hiding)
+ [Creating a DB cluster in a VPC](#USER_VPC.InstanceInVPC)

In the following tutorials, you can learn to create a VPC that you can use for a common Amazon Aurora scenario:
+ [Tutorial: Create a VPC for use with a DB cluster (IPv4 only)](CHAP_Tutorials.WebServerDB.CreateVPC.md)
+ [Tutorial: Create a VPC for use with a DB cluster (dual-stack mode)](CHAP_Tutorials.CreateVPCDualStack.md)

## Working with a DB cluster in a VPC
<a name="Overview.RDSVPC.Create"></a>

Here are some tips on working with a DB cluster in a VPC:
+ Your VPC must have at least two subnets. These subnets must be in two different Availability Zones in the AWS Region where you want to deploy your DB cluster. A *subnet* is a segment of a VPC's IP address range that you can specify and that you can use to group DB clusters based on your security and operational needs. 
+ If you want your DB cluster in the VPC to be publicly accessible, make sure to turn on the VPC attributes *DNS hostnames* and *DNS resolution*. 
+ Your VPC must have a DB subnet group that you create. You create a DB subnet group by specifying the subnets you created. Amazon Aurora chooses a subnet and an IP address within that subnet to associate with the primary DB instance in your DB cluster. The primary DB instance uses the Availability Zone that contains the subnet.
+ Your VPC must have a VPC security group that allows access to the DB cluster.

  For more information, see [Scenarios for accessing a DB cluster in a VPC](USER_VPC.Scenarios.md).
+ The CIDR blocks in each of your subnets must be large enough to accommodate spare IP addresses for Amazon Aurora to use during maintenance activities, including failover and compute scaling. For example, ranges such as 10.0.0.0/24 and 10.0.1.0/24 are typically large enough.
+ A VPC can have an *instance tenancy* attribute of either *default* or *dedicated*. All default VPCs have the instance tenancy attribute set to default, and a default VPC can support any DB instance class.

  If you choose to have your DB cluster in a dedicated VPC where the instance tenancy attribute is set to dedicated, the DB instance class of your DB cluster must be one of the approved Amazon EC2 dedicated instance types. For example, the r5.large EC2 dedicated instance corresponds to the db.r5.large DB instance class. For information about instance tenancy in a VPC, see [Dedicated instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-instance.html) in the *Amazon Elastic Compute Cloud User Guide*.

  For more information about the instance types that can be in a dedicated instance, see [Amazon EC2 dedicated instances](https://aws.amazon.com/ec2/purchasing-options/dedicated-instances/) on the Amazon EC2 pricing page. 
**Note**  
When you set the instance tenancy attribute to dedicated for a DB cluster, it doesn't guarantee that the DB cluster will run on a dedicated host.
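The subnet-sizing tip above can be sanity-checked offline. The following sketch uses Python's standard `ipaddress` module; the constant of five reserved addresses per subnet is the standard Amazon VPC reservation (network address, VPC router, DNS, one reserved for future use, and broadcast) and is an assumption layered on top of this section's advice to keep spare capacity for failover and compute scaling:

```python
import ipaddress

# Amazon VPC reserves 5 addresses in every subnet: the network address,
# the VPC router, DNS, one address reserved for future use, and broadcast.
AWS_RESERVED_PER_SUBNET = 5

def usable_addresses(cidr: str) -> int:
    """Return how many IP addresses remain usable in a VPC subnet."""
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED_PER_SUBNET

# A /24 subnet, as suggested above, leaves plenty of headroom.
print(usable_addresses("10.0.0.0/24"))   # 251
# A /28 subnet leaves very little room for maintenance activities.
print(usable_addresses("10.0.0.0/28"))   # 11
```

Remember that Aurora also needs free addresses in each subnet during failover and scaling, so the usable count should comfortably exceed the number of DB instances you plan to run.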

## VPC encryption control
<a name="USER_VPC.EncryptionControl"></a>

VPC encryption controls allow you to enforce encryption-in-transit for all network traffic within your VPCs. Use encryption control to meet regulatory compliance requirements by ensuring that only encryption-capable Nitro-based hardware can be provisioned in designated VPCs. Encryption control also catches compatibility issues at API request time rather than during provisioning. Your existing workloads continue operating and only new incompatible requests are blocked.

Set your VPC encryption control by setting the VPC control mode to one of the following:
+ *disabled* (default)
+ *monitor*
+ *enforced*

To check the current control mode for your VPC, use the AWS Management Console or the [DescribeVpcs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpcs.html) CLI or API command.

If your VPC enforces encryption, you can only provision Nitro-based DB clusters that support encryption in transit in that VPC. For more information, see [DB instance class types](Concepts.DBInstanceClass.Types.md). For information about Nitro instances, see [Instances built on the AWS Nitro System](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html) in the *Amazon EC2 User Guide*.

**Note**  
If you try to provision incompatible DB clusters in an encryption-enforced VPC, Aurora returns a `VpcEncryptionControlViolationException` error.

For Aurora Serverless for MySQL and PostgreSQL, encryption control requires platform version 3 or higher.

## Working with DB subnet groups
<a name="USER_VPC.Subnets"></a>

*Subnets* are segments of a VPC's IP address range that you designate to group your resources based on security and operational needs. A *DB subnet group* is a collection of subnets (typically private) that you create in a VPC and that you then designate for your DB clusters. By using a DB subnet group, you can specify a particular VPC when creating DB clusters using the AWS CLI or RDS API. If you use the console, you can choose the VPC and subnet groups you want to use.

Each DB subnet group should have subnets in at least two Availability Zones in a given AWS Region. When creating a DB cluster in a VPC, you choose a DB subnet group for it. From the DB subnet group, Amazon Aurora chooses a subnet and an IP address within that subnet to associate with the primary DB instance in your DB cluster. The DB instance uses the Availability Zone that contains the subnet. Aurora always assigns an IP address from a subnet that has free IP address space.

The subnets in a DB subnet group are either public or private, depending on the configuration that you set for their network access control lists (network ACLs) and route tables. For a DB cluster to be publicly accessible, all of the subnets in its DB subnet group must be public. If a subnet that's associated with a publicly accessible DB cluster changes from public to private, it can affect DB cluster availability.

To create a DB subnet group that supports dual-stack mode, make sure that each subnet that you add to the DB subnet group has an Internet Protocol version 6 (IPv6) CIDR block associated with it. For more information, see [Amazon Aurora IP addressing](#USER_VPC.IP_addressing) and [Migrating to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) in the *Amazon VPC User Guide.*

When Amazon Aurora creates a DB cluster in a VPC, it assigns a network interface to your DB cluster by using an IP address from your DB subnet group. However, we strongly recommend that you use the Domain Name System (DNS) name to connect to your DB cluster. We recommend this because the underlying IP address changes during failover. 

**Note**  
For each DB cluster that you run in a VPC, make sure to reserve at least one address in each subnet in the DB subnet group for use by Amazon Aurora for recovery actions. 

## Shared subnets
<a name="USER_VPC.Shared_subnets"></a>

You can create a DB cluster in a shared VPC.

Some considerations to keep in mind while using shared VPCs:
+ You can move a DB cluster from a shared VPC subnet to a non-shared VPC subnet and vice-versa.
+ Participants in a shared VPC must create a security group in the shared VPC before they can create a DB cluster.
+ Owners and participants in a shared VPC can access the database by using SQL queries. However, only the creator of a resource can make any API calls on the resource.
## Amazon Aurora IP addressing
<a name="USER_VPC.IP_addressing"></a>

IP addresses enable resources in your VPC to communicate with each other, and with resources over the internet. Amazon Aurora supports both IPv4 and IPv6 addressing protocols. By default, Amazon Aurora and Amazon VPC use the IPv4 addressing protocol. You can't turn off this behavior. When you create a VPC, make sure to specify an IPv4 CIDR block (a range of private IPv4 addresses). You can optionally assign an IPv6 CIDR block to your VPC and subnets, and assign IPv6 addresses from that block to DB clusters in your subnet.

Support for the IPv6 protocol expands the number of supported IP addresses. By using the IPv6 protocol, you ensure that you have sufficient available addresses for the future growth of the internet. New and existing RDS resources can use IPv4 and IPv6 addresses within your VPC. Configuring, securing, and translating network traffic between the two protocols used in different parts of an application can cause operational overhead. You can standardize on the IPv6 protocol for Amazon RDS resources to simplify your network configuration.

**Topics**
+ [IPv4 addresses](#USER_VPC.IP_addressing.IPv4)
+ [IPv6 addresses](#USER_VPC.IP_addressing.IPv6)
+ [Dual-stack mode](#USER_VPC.IP_addressing.dual-stack-mode)

### IPv4 addresses
<a name="USER_VPC.IP_addressing.IPv4"></a>

When you create a VPC, you must specify a range of IPv4 addresses for the VPC in the form of a CIDR block, such as `10.0.0.0/16`. A *DB subnet group* defines the range of IP addresses in this CIDR block that a DB cluster can use. These IP addresses can be private or public.

A private IPv4 address is an IP address that's not reachable over the internet. You can use private IPv4 addresses for communication between your DB cluster and other resources, such as Amazon EC2 instances, in the same VPC. Each DB cluster has a private IP address for communication in the VPC.

A public IP address is an IPv4 address that's reachable from the internet. You can use public addresses for communication between your DB cluster and resources on the internet, such as a SQL client. You control whether your DB cluster receives a public IP address.

Amazon RDS uses public Elastic IPv4 addresses from the Amazon EC2 public IPv4 address pool for publicly accessible database instances. These IP addresses are visible in your AWS account when you use the `describe-addresses` CLI command or API operation, or when you view the Elastic IPs (EIP) section in the AWS Management Console. Each RDS-managed IP address is marked with a `service_managed` attribute set to `"rds"`.

While these IPs are visible in your account, they remain fully managed by Amazon RDS and cannot be modified or released. Amazon RDS releases IPs back into the public IPv4 address pool when no longer in use.
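To see which Elastic IP addresses in your account are RDS-managed, you might filter the `describe-addresses` output. This is a sketch: the `ServiceManaged` field name in the query below is an assumption based on the attribute described above, so verify it against your CLI output before relying on it:

```shell
# List Elastic IPs that carry the RDS service-managed attribute.
# The ServiceManaged output field is assumed from the attribute above.
aws ec2 describe-addresses \
    --query 'Addresses[?ServiceManaged==`rds`].[PublicIp,AllocationId]' \
    --output table
```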

CloudTrail logs API calls related to these RDS-managed Elastic IP addresses, such as `AllocateAddress`. These API calls are invoked by the service principal `rds.amazonaws.com`.

**Note**  
IPs allocated by Amazon RDS do not count against your account's EIP limits.

For a tutorial that shows you how to create a VPC with only private IPv4 addresses that you can use for a common Amazon Aurora scenario, see [Tutorial: Create a VPC for use with a DB cluster (IPv4 only)](CHAP_Tutorials.WebServerDB.CreateVPC.md). 

### IPv6 addresses
<a name="USER_VPC.IP_addressing.IPv6"></a>

You can optionally associate an IPv6 CIDR block with your VPC and subnets, and assign IPv6 addresses from that block to the resources in your VPC. Each IPv6 address is globally unique. 

The IPv6 CIDR block for your VPC is automatically assigned from Amazon's pool of IPv6 addresses. You can't choose the range yourself.

When connecting to an IPv6 address, make sure that the following conditions are met:
+ The client is configured so that client to database traffic over IPv6 is allowed.
+ RDS security groups used by the DB instance are configured correctly so that client to database traffic over IPv6 is allowed.
+ The client operating system stack allows traffic on the IPv6 address, and operating system drivers and libraries are configured to choose the correct default DB instance endpoint (either IPv4 or IPv6).

For more information about IPv6, see [IP addressing](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html) in the *Amazon VPC User Guide*.

### Dual-stack mode
<a name="USER_VPC.IP_addressing.dual-stack-mode"></a>

A DB cluster runs in dual-stack mode when it can communicate over both IPv4 and IPv6 addressing protocols. Resources can then communicate with the DB cluster using either IPv4, IPv6, or both protocols. Private dual-stack mode DB instances have IPv6 endpoints that RDS restricts to VPC access only, ensuring your IPv6 endpoints remain private. Public dual-stack mode DB instances provide both IPv4 and IPv6 endpoints that you can access from the internet.

**Topics**
+ [Dual-stack mode and DB subnet groups](#USER_VPC.IP_addressing.dual-stack-db-subnet-groups)
+ [Working with dual-stack mode DB instances](#USER_VPC.IP_addressing.dual-stack-working-with)
+ [Modifying IPv4-only DB clusters to use dual-stack mode](#USER_VPC.IP_addressing.dual-stack-modifying-ipv4)
+ [Availability of dual-stack network DB clusters](#USER_VPC.IP_addressing.dual-stack-availability)
+ [Limitations for dual-stack network DB clusters](#USER_VPC.IP_addressing.dual-stack-limitations)

For a tutorial that shows you how to create a VPC with both IPv4 and IPv6 addresses that you can use for a common Amazon Aurora scenario, see [Tutorial: Create a VPC for use with a DB cluster (dual-stack mode)](CHAP_Tutorials.CreateVPCDualStack.md). 

#### Dual-stack mode and DB subnet groups
<a name="USER_VPC.IP_addressing.dual-stack-db-subnet-groups"></a>

To use dual-stack mode, make sure that each subnet in the DB subnet group that you associate with the DB cluster has an IPv6 CIDR block associated with it. You can create a new DB subnet group or modify an existing DB subnet group to meet this requirement. After a DB cluster is in dual-stack mode, clients can connect to it normally. Make sure that client security firewalls and RDS DB instance security groups are accurately configured to allow traffic over IPv6. To connect, clients use the DB cluster primary instance's endpoint. Client applications can specify which protocol is preferred when connecting to a database. In dual-stack mode, the DB cluster detects the client's preferred network protocol, either IPv4 or IPv6, and uses that protocol for the connection.

If a DB subnet group stops supporting dual-stack mode because of subnet deletion or CIDR disassociation, there's a risk of an incompatible network state for DB instances that are associated with the DB subnet group. Also, you can't use the DB subnet group when you create a new dual-stack mode DB cluster.

To determine whether a DB subnet group supports dual-stack mode by using the AWS Management Console, view the **Network type** on the details page of the DB subnet group. To determine whether a DB subnet group supports dual-stack mode by using the AWS CLI, run the [describe-db-subnet-groups](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-subnet-groups.html) command and view `SupportedNetworkTypes` in the output.
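For example, a quick CLI check might look like the following (the subnet group name is a placeholder):

```shell
# Show which network types a DB subnet group supports.
# A dual-stack-capable group lists DUAL in addition to IPV4.
aws rds describe-db-subnet-groups \
    --db-subnet-group-name my-db-subnet-group \
    --query 'DBSubnetGroups[*].SupportedNetworkTypes'
```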

Read replicas are treated as independent DB instances and can have a network type that's different from the primary DB instance. If you change the network type of a read replica's primary DB instance, the read replica isn't affected. When you are restoring a DB instance, you can restore it to any network type that's supported.

#### Working with dual-stack mode DB instances
<a name="USER_VPC.IP_addressing.dual-stack-working-with"></a>

When you create or modify a DB cluster, you can specify dual-stack mode to allow your resources to communicate with your DB cluster over IPv4, IPv6, or both.

When you use the AWS Management Console to create or modify a DB instance, you can specify dual-stack mode in the **Network type** section. The following image shows the **Network type** section in the console.

![\[Network type section in the console with Dual-stack mode selected.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/dual-stack-mode.png)


When you use the AWS CLI to create or modify a DB cluster, set the `--network-type` option to `DUAL` to use dual-stack mode. When you use the RDS API to create or modify a DB cluster, set the `NetworkType` parameter to `DUAL` to use dual-stack mode. When you are modifying the network type of a DB instance, downtime is possible. If dual-stack mode isn't supported by the specified DB engine version or DB subnet group, the `NetworkTypeNotSupported` error is returned.
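As a hedged sketch of the create path described above, a dual-stack Aurora MySQL cluster might be created as follows. The cluster identifier and subnet group name are placeholders, and the DB subnet group must already support dual-stack mode:

```shell
# Create an Aurora MySQL DB cluster in dual-stack mode.
# Identifiers below are placeholders for illustration; the master user
# password is generated and stored by AWS Secrets Manager.
aws rds create-db-cluster \
    --db-cluster-identifier my-dual-stack-cluster \
    --engine aurora-mysql \
    --master-username admin \
    --manage-master-user-password \
    --db-subnet-group-name my-dual-stack-subnet-group \
    --network-type DUAL
```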

For more information about creating a DB cluster, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md). For more information about modifying a DB cluster, see [Modifying an Amazon Aurora DB cluster](Aurora.Modifying.md).

To determine whether a DB cluster is in dual-stack mode by using the console, view the **Network type** on the **Connectivity & security** tab for the DB cluster.

#### Modifying IPv4-only DB clusters to use dual-stack mode
<a name="USER_VPC.IP_addressing.dual-stack-modifying-ipv4"></a>

You can modify an IPv4-only DB cluster to use dual-stack mode. To do so, change the network type of the DB cluster. The modification might result in downtime.

We recommend that you change the network type of your Amazon Aurora DB clusters during a maintenance window. Currently, setting the network type of new instances to dual-stack mode isn't supported. You can set the network type manually by using the `modify-db-cluster` command. 

Before modifying a DB cluster to use dual-stack mode, make sure that its DB subnet group supports dual-stack mode. If the DB subnet group associated with the DB cluster doesn't support dual-stack mode, specify a different DB subnet group that supports it when you modify the DB cluster. Modifying the DB subnet group of a DB cluster can cause downtime.

If you modify the DB subnet group of a DB cluster before you change the DB cluster to use dual-stack mode, make sure that the DB subnet group is valid for the DB cluster before and after the change. 

We recommend that you run the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) command with only the `--network-type` parameter set to `DUAL` to change the network of an Amazon Aurora cluster to dual-stack mode. Adding other parameters along with `--network-type` in the same call could result in downtime.

If you can't connect to the DB cluster after the change, make sure that the client and database security firewalls and route tables are accurately configured to allow traffic to the database on the selected network (either IPv4 or IPv6). You might also need to modify operating system parameters, libraries, or drivers to connect using an IPv6 address.

**To modify an IPv4-only DB cluster to use dual-stack mode**

1. Modify a DB subnet group to support dual-stack mode, or create a DB subnet group that supports dual-stack mode:

   1. Associate an IPv6 CIDR block with your VPC.

      For instructions, see [Add an IPv6 CIDR block to your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/modify-vpcs.html#vpc-associate-ipv6-cidr) in the *Amazon VPC User Guide*.

   1. Attach the IPv6 CIDR block to all of the subnets in your DB subnet group.

      For instructions, see [Add an IPv6 CIDR block to your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/modify-subnets.html#subnet-associate-ipv6-cidr) in the *Amazon VPC User Guide*.

   1. Confirm that the DB subnet group supports dual-stack mode.

      If you are using the AWS Management Console, select the DB subnet group, and make sure that the **Supported network types** value is **Dual, IPv4**.

      If you are using the AWS CLI, run the [describe-db-subnet-groups](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-subnet-groups.html) command, and make sure that the `SupportedNetworkTypes` value for the DB subnet group is `Dual, IPv4`.

1. Modify the security group associated with the DB cluster to allow IPv6 connections to the database, or create a new security group that allows IPv6 connections.

   For instructions, see [Security group rules](https://docs.aws.amazon.com/vpc/latest/userguide/security-group-rules.html) in the *Amazon VPC User Guide*.

1. Modify the DB cluster to support dual-stack mode. To do so, set the **Network type** to **Dual-stack mode**.

   If you are using the console, make sure that the following settings are correct:
   + **Network type** – **Dual-stack mode**  
![\[Network type section in the console with Dual-stack mode selected.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/dual-stack-mode.png)
   + **DB subnet group** – The DB subnet group that you configured in a previous step
   + **Security group** – The security group that you configured in a previous step

   If you are using the AWS CLI, make sure that the following settings are correct:
   + `--network-type` – `dual`
   + `--db-subnet-group-name` – The DB subnet group that you configured in a previous step
   + `--vpc-security-group-ids` – The VPC security group that you configured in a previous step

   For example: 

   ```
   aws rds modify-db-cluster --db-cluster-identifier my-cluster --network-type "DUAL"
   ```

1. Confirm that the DB cluster supports dual-stack mode.

   If you are using the console, choose the **Configuration** tab for the DB cluster. On that tab, make sure that the **Network type** value is **Dual-stack mode**.

   If you are using the AWS CLI, run the [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) command, and make sure that the `NetworkType` value for the DB cluster is `dual`.

   Run the `dig` command on the writer DB instance endpoint to identify the IPv6 address associated with it.

   ```
   dig db-instance-endpoint AAAA
   ```

   Use the writer DB instance endpoint, not the IPv6 address, to connect to the DB cluster.

#### Availability of dual-stack network DB clusters
<a name="USER_VPC.IP_addressing.dual-stack-availability"></a>

Dual-stack network DB clusters are available in all AWS Regions except for the following:
+ Asia Pacific (Hyderabad)
+ Asia Pacific (Malaysia)
+ Asia Pacific (Melbourne)
+ Asia Pacific (Thailand)
+ Canada West (Calgary)
+ Europe (Spain)
+ Europe (Zurich)
+ Israel (Tel Aviv)
+ Mexico (Central)
+ Middle East (UAE)

The following DB engine versions support dual-stack network DB clusters:
+ Aurora MySQL versions: 
  + 3.02 and higher 3.x versions
  + 2.09.1 and higher 2.x versions

  For more information about Aurora MySQL versions, see the [Release notes for Aurora MySQL](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/Welcome.html).
+ Aurora PostgreSQL versions:
  + 15.2 and higher versions
  + 14.3 and higher 14.x versions
  + 13.7 and higher 13.x versions

  For more information about Aurora PostgreSQL versions, see the [Release notes for Aurora PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraPostgreSQLReleaseNotes/Welcome.html).

#### Limitations for dual-stack network DB clusters
<a name="USER_VPC.IP_addressing.dual-stack-limitations"></a>

The following limitations apply to dual-stack network DB clusters:
+ DB clusters can't use the IPv6 protocol exclusively. They can use IPv4 exclusively, or they can use the IPv4 and IPv6 protocol (dual-stack mode).
+ Amazon RDS doesn't support native IPv6 subnets.
+ You can't use RDS Proxy with dual-stack mode DB clusters.

## Hiding a DB cluster in a VPC from the internet
<a name="USER_VPC.Hiding"></a>

One common Amazon Aurora scenario is to have a VPC in which you have an Amazon EC2 instance with a public-facing web application and a DB cluster with a database that isn't publicly accessible. For example, you can create a VPC that has a public subnet and a private subnet. EC2 instances that function as web servers can be deployed in the public subnet. The DB clusters are deployed in the private subnet. In such a deployment, only the web servers have access to the DB clusters. For an illustration of this scenario, see [A DB cluster in a VPC accessed by an Amazon EC2 instance in the same VPC](USER_VPC.Scenarios.md#USER_VPC.Scenario1). 

When you launch a DB cluster inside a VPC, the DB cluster has a private IP address for traffic inside the VPC. This private IP address isn't publicly accessible. You can use the **Public access** option to designate whether the DB cluster also has a public IP address in addition to the private IP address. If the DB cluster is designated as publicly accessible, its DNS endpoint resolves to the private IP address from within the VPC, and to the public IP address from outside of the VPC. Access to the DB cluster is ultimately controlled by the security group it uses. Public access isn't permitted unless the security group assigned to the DB cluster includes inbound rules that permit it. In addition, for a DB cluster to be publicly accessible, the subnets in its DB subnet group must have an internet gateway. For more information, see [Can't connect to Amazon RDS DB instance](CHAP_Troubleshooting.md#CHAP_Troubleshooting.Connecting).

You can modify a DB cluster to turn on or off public accessibility by modifying the **Public access** option. The following illustration shows the **Public access** option in the **Additional connectivity configuration** section. To set the option, open the **Additional connectivity configuration** section in the **Connectivity** section. 

![\[Set your database Public access option in the Additional connectivity configuration section to No.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/VPC-example4.png)


For information about modifying a DB instance to set the **Public access** option, see [Modifying a DB instance in a DB cluster](Aurora.Modifying.md#Aurora.Modifying.Instance).

## Creating a DB cluster in a VPC
<a name="USER_VPC.InstanceInVPC"></a>

The following procedures help you create a DB cluster in a VPC. To use the default VPC, you can begin with step 2 and use the VPC and DB subnet group that have already been created for you. If you want an additional VPC, you can create a new one. 

**Note**  
If you want your DB cluster in the VPC to be publicly accessible, you must update the DNS information for the VPC by enabling the VPC attributes *DNS hostnames* and *DNS resolution*. For information about updating the DNS information for a VPC instance, see [Updating DNS support for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html). 
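If you manage the VPC with the AWS CLI, you can turn on both attributes as follows. This is a sketch; the VPC ID is a placeholder.

```
# Enable DNS resolution and DNS hostnames for the VPC.
# The attributes must be modified one per call.
aws ec2 modify-vpc-attribute \
    --vpc-id vpc-1234567890abcdef0 \
    --enable-dns-support '{"Value":true}'

aws ec2 modify-vpc-attribute \
    --vpc-id vpc-1234567890abcdef0 \
    --enable-dns-hostnames '{"Value":true}'
```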

Follow these steps to create a DB cluster in a VPC:
+ [Step 1: Create a VPC](#USER_VPC.CreatingVPC)
+ [Step 2: Create a DB subnet group](#USER_VPC.CreateDBSubnetGroup)
+ [Step 3: Create a VPC security group](#USER_VPC.CreateVPCSecurityGroup)
+ [Step 4: Create a DB instance in the VPC](#USER_VPC.CreateDBInstanceInVPC)

### Step 1: Create a VPC
<a name="USER_VPC.CreatingVPC"></a>

Create a VPC with subnets in at least two Availability Zones. You use these subnets when you create a DB subnet group. If you have a default VPC, a subnet is automatically created for you in each Availability Zone in the AWS Region.

For more information, see [Create a VPC with private and public subnets](CHAP_Tutorials.WebServerDB.CreateVPC.md#CHAP_Tutorials.WebServerDB.CreateVPC.VPCAndSubnets), or see [Create a VPC](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-vpcs.html#Create-VPC) in the *Amazon VPC User Guide*. 

### Step 2: Create a DB subnet group
<a name="USER_VPC.CreateDBSubnetGroup"></a>

A DB subnet group is a collection of subnets (typically private) that you create in a VPC and that you then designate for your DB clusters. A DB subnet group allows you to specify a particular VPC when you create DB clusters using the AWS CLI or RDS API. If you use the console, you can simply choose the VPC and subnets that you want to use. Each DB subnet group must have subnets in at least two Availability Zones in the AWS Region. As a best practice, each DB subnet group should have at least one subnet for every Availability Zone in the AWS Region.

For a DB cluster to be publicly accessible, the subnets in the DB subnet group must have an internet gateway. For more information about internet gateways for subnets, see [Connect to the internet using an internet gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html) in the *Amazon VPC User Guide*. 

When you create a DB cluster in a VPC, you can choose a DB subnet group. Amazon Aurora chooses a subnet and an IP address within that subnet to associate with your DB cluster. If no DB subnet group exists, Amazon Aurora creates a default subnet group when you create the DB cluster. Amazon Aurora creates an elastic network interface with that IP address and associates it with your DB cluster. The DB cluster uses the Availability Zone that contains the subnet.

In this step, you create a DB subnet group and add the subnets that you created for your VPC.

**To create a DB subnet group**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Subnet groups**.

1. Choose **Create DB Subnet Group**.

1. For **Name**, type the name of your DB subnet group.

1. For **Description**, type a description for your DB subnet group. 

1. For **VPC**, choose the default VPC or the VPC that you created.

1. In the **Add subnets** section, choose the Availability Zones that include the subnets from **Availability Zones**, and then choose the subnets from **Subnets**.  
![\[Create a DB subnet group.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/RDSVPC101.png)

1. Choose **Create**. 

   Your new DB subnet group appears in the DB subnet groups list on the RDS console. You can choose the DB subnet group to see details, including all of the subnets associated with the group, in the details pane at the bottom of the window. 
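The console steps above can also be performed with the AWS CLI. The following sketch assumes two private subnets in different Availability Zones; the group name and subnet IDs are placeholders.

```
# Create a DB subnet group from two private subnets in different
# Availability Zones. The name and subnet IDs are placeholders.
aws rds create-db-subnet-group \
    --db-subnet-group-name my-db-subnet-group \
    --db-subnet-group-description "Subnets for my Aurora DB cluster" \
    --subnet-ids subnet-0abc1de2f3456789a subnet-0bcd2ef3a4567890b
```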

### Step 3: Create a VPC security group
<a name="USER_VPC.CreateVPCSecurityGroup"></a>

Before you create your DB cluster, you can create a VPC security group to associate with your DB cluster. If you don't create a VPC security group, you can use the default security group when you create a DB cluster. For instructions on how to create a security group for your DB cluster, see [Create a VPC security group for a private DB cluster](CHAP_Tutorials.WebServerDB.CreateVPC.md#CHAP_Tutorials.WebServerDB.CreateVPC.SecurityGroupDB), or see [Control traffic to resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) in the *Amazon VPC User Guide*. 
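As a CLI alternative, you can create the security group and add an inbound rule in two calls. This is a sketch; the VPC ID, group ID, and CIDR range are placeholders, and the port assumes an Aurora MySQL cluster listening on 3306.

```
# Create a VPC security group for the DB cluster.
aws ec2 create-security-group \
    --group-name my-db-securitygroup \
    --description "Security group for Aurora DB cluster" \
    --vpc-id vpc-1234567890abcdef0

# Allow inbound MySQL/Aurora traffic (port 3306) from an
# application subnet. Use the group ID returned by the previous call.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0abc1de2f3456789a \
    --protocol tcp \
    --port 3306 \
    --cidr 10.0.0.0/24
```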

### Step 4: Create a DB instance in the VPC
<a name="USER_VPC.CreateDBInstanceInVPC"></a>

In this step, you create a DB cluster and use the VPC name, the DB subnet group, and the VPC security group you created in the previous steps.

**Note**  
If you want your DB cluster in the VPC to be publicly accessible, you must enable the VPC attributes *DNS hostnames* and *DNS resolution*. For more information, see [DNS attributes for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html) in the *Amazon VPC User Guide*.

For details on how to create a DB cluster, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md).

When prompted in the **Connectivity** section, enter the VPC name, the DB subnet group, and the VPC security group.
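With the AWS CLI, the same connectivity settings are passed as parameters. The following sketch creates an Aurora MySQL cluster and a writer instance; all identifiers, the instance class, and the credentials handling are illustrative only.

```
# Create an Aurora MySQL DB cluster in the VPC, referencing the DB
# subnet group and VPC security group created in the previous steps.
aws rds create-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --engine aurora-mysql \
    --master-username admin \
    --manage-master-user-password \
    --db-subnet-group-name my-db-subnet-group \
    --vpc-security-group-ids sg-0abc1de2f3456789a

# Add a writer DB instance to the cluster.
aws rds create-db-instance \
    --db-instance-identifier my-aurora-instance \
    --db-cluster-identifier my-aurora-cluster \
    --db-instance-class db.r6g.large \
    --engine aurora-mysql
```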

**Note**  
Updating VPCs isn't currently supported for Aurora DB clusters.

# Scenarios for accessing a DB cluster in a VPC
<a name="USER_VPC.Scenarios"></a>

Amazon Aurora supports the following scenarios for accessing a DB cluster in a VPC:
+ [An Amazon EC2 instance in the same VPC](#USER_VPC.Scenario1)
+ [An EC2 instance in a different VPC](#USER_VPC.Scenario3)
+ [A client application through the internet](#USER_VPC.Scenario4)
+ [A private network](#USER_VPC.NotPublic)

## A DB cluster in a VPC accessed by an Amazon EC2 instance in the same VPC
<a name="USER_VPC.Scenario1"></a>

A common use of a DB cluster in a VPC is to share data with an application server that is running in an Amazon EC2 instance in the same VPC.

The following diagram shows this scenario.

![\[VPC scenario with a public web server and a private database.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/con-VPC-sec-grp-aurora.png)


The simplest way to manage access between EC2 instances and DB clusters in the same VPC is to do the following:
+ Create a VPC security group for your DB clusters to be in. You can use this security group to restrict access to the DB clusters. For example, you can create a custom rule that allows TCP access on the port that you assigned to the DB cluster when you created it, from an IP address that you use to access the DB cluster for development or other purposes.
+ Create a VPC security group for your EC2 instances (web servers and clients) to be in. This security group can, if needed, allow access to the EC2 instance from the internet by using the VPC's routing table. For example, you can set rules on this security group to allow TCP access to the EC2 instance over port 22.
+ Create custom rules in the security group for your DB clusters that allow connections from the security group you created for your EC2 instances. These rules might allow any member of the security group to access the DB clusters.

The scenario includes an additional public and private subnet in a separate Availability Zone. An RDS DB subnet group requires subnets in at least two Availability Zones. The additional subnet also makes it easy to switch to a Multi-AZ DB instance deployment in the future.

For a tutorial that shows you how to create a VPC with both public and private subnets for this scenario, see [Tutorial: Create a VPC for use with a DB cluster (IPv4 only)](CHAP_Tutorials.WebServerDB.CreateVPC.md). 

**Tip**  
You can set up network connectivity between an Amazon EC2 instance and a DB cluster automatically when you create the DB cluster. For more information, see [Configure automatic network connectivity with an EC2 instance](Aurora.CreateInstance.md#Aurora.CreateInstance.Prerequisites.VPC.Automatic).

**To create a rule in a VPC security group that allows connections from another security group, do the following:**

1.  Sign in to the AWS Management Console and open the Amazon VPC console at [https://console.aws.amazon.com/vpc](https://console.aws.amazon.com/vpc).

1.  In the navigation pane, choose **Security groups**.

1. Choose or create a security group for which you want to allow access to members of another security group. In the preceding scenario, this is the security group that you use for your DB clusters. Choose the **Inbound rules** tab, and then choose **Edit inbound rules**.

1. On the **Edit inbound rules** page, choose **Add rule**.

1. For **Type**, choose the entry that corresponds to the port you used when you created your DB cluster, such as **MYSQL/Aurora**.

1. In the **Source** box, start typing the ID of the security group. Matching security groups appear in a list. Choose the security group whose members you want to have access to the resources protected by this security group. In the preceding scenario, this is the security group that you use for your EC2 instance.

1. If required, repeat the steps for the TCP protocol by creating a rule with **All TCP** as the **Type** and your security group in the **Source** box. If you intend to use the UDP protocol, create a rule with **All UDP** as the **Type** and your security group in **Source**.

1. Choose **Save rules**.

The following screen shows an inbound rule with a security group for its source.

![\[Adding a security group to another security group's rules.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/con-vpc-add-sg-rule.png)


For more information about connecting to the DB cluster from your EC2 instance, see [Connecting to an Amazon Aurora DB cluster](Aurora.Connecting.md).
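The rule created in the procedure above has a CLI equivalent. The following sketch allows inbound MySQL/Aurora traffic (port 3306) to the DB cluster's security group from the EC2 instances' security group; both group IDs are placeholders.

```
# Allow port 3306 inbound to the DB security group, with the EC2
# instances' security group as the source. IDs are placeholders.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0db0db0db0db0db0d \
    --protocol tcp \
    --port 3306 \
    --source-group sg-0ec20ec20ec20ec20
```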

## A DB cluster in a VPC accessed by an EC2 instance in a different VPC
<a name="USER_VPC.Scenario3"></a>

When your DB cluster is in a different VPC from the EC2 instance that you are using to access it, you can use VPC peering to access the DB cluster.

The following diagram shows this scenario. 

![\[A DB instance in a VPC accessed by an Amazon EC2 instance in a different VPC.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/RDSVPC2EC2VPC-aurora.png)


A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IP addresses. Resources in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region. To learn more about VPC peering, see [VPC peering](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-peering.html) in the *Amazon Virtual Private Cloud User Guide*.
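Setting up peering between two VPCs in the same account might look like the following sketch. The VPC, peering connection, and route table IDs, and the peer CIDR range, are all placeholders.

```
# Request a peering connection between the EC2 instance's VPC and
# the DB cluster's VPC, then accept it.
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-1111aaaa2222bbbb3 \
    --peer-vpc-id vpc-4444cccc5555dddd6

aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-0abc1de2f3456789a

# Each VPC's route table also needs a route to the peer's CIDR range.
aws ec2 create-route \
    --route-table-id rtb-0abc1de2f3456789a \
    --destination-cidr-block 172.31.0.0/16 \
    --vpc-peering-connection-id pcx-0abc1de2f3456789a
```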

## A DB cluster in a VPC accessed by a client application through the internet
<a name="USER_VPC.Scenario4"></a>

To access a DB cluster in a VPC from a client application through the internet, you configure a VPC with a single public subnet and an internet gateway to enable communication over the internet.

The following diagram shows this scenario.

![\[A DB cluster in a VPC accessed by a client application through the internet.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/GS-VPC-network-aurora.png)


We recommend the following configuration:

 
+ A VPC of size /16 (for example CIDR: 10.0.0.0/16). This size provides 65,536 private IP addresses.
+ A subnet of size /24 (for example CIDR: 10.0.0.0/24). This size provides 256 private IP addresses.
+ An Amazon Aurora DB cluster that is associated with the VPC and the subnet. Amazon RDS assigns an IP address within the subnet to your DB cluster.
+ An internet gateway that connects the VPC to the internet and to other AWS products.
+ A security group associated with the DB cluster. The security group's inbound rules allow your client application to access your DB cluster.

For information about creating a DB cluster in a VPC, see [Creating a DB cluster in a VPC](USER_VPC.WorkingWithRDSInstanceinaVPC.md#USER_VPC.InstanceInVPC).
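The recommended VPC layout above can be sketched with the AWS CLI as follows. Substitute the IDs that each command returns; those shown here are placeholders.

```
# Create the recommended /16 VPC and /24 subnet.
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet \
    --vpc-id vpc-1234567890abcdef0 \
    --cidr-block 10.0.0.0/24

# Create an internet gateway and attach it to the VPC.
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway \
    --internet-gateway-id igw-0abc1de2f3456789a \
    --vpc-id vpc-1234567890abcdef0
```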

## A DB cluster in a VPC accessed by a private network
<a name="USER_VPC.NotPublic"></a>

If your DB cluster isn't publicly accessible, you have the following options for accessing it from a private network:
+ An AWS Site-to-Site VPN connection. For more information, see [What is AWS Site-to-Site VPN?](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html)
+ An AWS Direct Connect connection. For more information, see [What is AWS Direct Connect?](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html)
+ An AWS Client VPN connection. For more information, see [What is AWS Client VPN?](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html)

The following diagram shows a scenario with an AWS Site-to-Site VPN connection. 

![\[DB clusters in a VPC accessed by a private network.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/site-to-site-vpn-connection-aurora.png)


For more information, see [Internetwork traffic privacy](inter-network-traffic-privacy.md).

# Tutorial: Create a VPC for use with a DB cluster (IPv4 only)
<a name="CHAP_Tutorials.WebServerDB.CreateVPC"></a>

A common scenario includes a DB cluster in a virtual private cloud (VPC) based on the Amazon VPC service. This VPC shares data with a web server that is running in the same VPC. In this tutorial, you create the VPC for this scenario.

The following diagram shows this scenario. For information about other scenarios, see [Scenarios for accessing a DB cluster in a VPC](USER_VPC.Scenarios.md). 

![\[Single VPC scenario\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/con-VPC-sec-grp-aurora.png)


Your DB cluster needs to be available only to your web server, and not to the public internet. Thus, you create a VPC with both public and private subnets. The web server is hosted in the public subnet, so that it can reach the public internet. The DB cluster is hosted in a private subnet. The web server can connect to the DB cluster because it is hosted within the same VPC. But the DB cluster isn't available to the public internet, providing greater security.

This tutorial configures an additional public and private subnet in a separate Availability Zone. These subnets aren't used by the tutorial. An RDS DB subnet group requires a subnet in at least two Availability Zones. The additional subnet makes it easier to configure more than one Aurora DB instance.

This tutorial describes configuring a VPC for Amazon Aurora DB clusters. For a tutorial that shows you how to create a web server for this VPC scenario, see [Tutorial: Create a web server and an Amazon Aurora DB cluster](TUT_WebAppWithRDS.md). For more information about Amazon VPC, see [Amazon VPC Getting Started Guide](https://docs.aws.amazon.com/AmazonVPC/latest/GettingStartedGuide/) and [Amazon VPC User Guide](https://docs.aws.amazon.com/vpc/latest/userguide/). 

**Tip**  
You can set up network connectivity between an Amazon EC2 instance and a DB cluster automatically when you create the DB cluster. The network configuration is similar to the one described in this tutorial. For more information, see [Configure automatic network connectivity with an EC2 instance](Aurora.CreateInstance.md#Aurora.CreateInstance.Prerequisites.VPC.Automatic).

## Create a VPC with private and public subnets
<a name="CHAP_Tutorials.WebServerDB.CreateVPC.VPCAndSubnets"></a>

Use the following procedure to create a VPC with both public and private subnets. 

**To create a VPC and subnets**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the top-right corner of the AWS Management Console, choose the Region to create your VPC in. This example uses the US West (Oregon) Region.

1. In the upper-left corner, choose **VPC dashboard**. To begin creating a VPC, choose **Create VPC**.

1. For **Resources to create** under **VPC settings**, choose **VPC and more**.

1. For the **VPC settings**, set these values:
   + **Name tag auto-generation** – **tutorial**
   + **IPv4 CIDR block** – **10.0.0.0/16**
   + **IPv6 CIDR block** – **No IPv6 CIDR block**
   + **Tenancy** – **Default**
   + **Number of Availability Zones (AZs)** – **2**
   + **Customize AZs** – Keep the default values.
   + **Number of public subnets** – **2**
   + **Number of private subnets** – **2**
   + **Customize subnets CIDR blocks** – Keep the default values.
   + **NAT gateways (\$1)** – **None**
   + **VPC endpoints** – **None**
   + **DNS options** – Keep the default values.

1. Choose **Create VPC**.

## Create a VPC security group for a public web server
<a name="CHAP_Tutorials.WebServerDB.CreateVPC.SecurityGroupEC2"></a>

Next, you create a security group for public access. To connect to public EC2 instances in your VPC, you add inbound rules to your VPC security group that allow traffic from the internet.

**To create a VPC security group**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. Choose **VPC Dashboard**, choose **Security Groups**, and then choose **Create security group**. 

1. On the **Create security group** page, set these values: 
   + **Security group name:** **tutorial-securitygroup**
   + **Description:** **Tutorial Security Group**
   + **VPC:** Choose the VPC that you created earlier, for example: **vpc-*identifier* (tutorial-vpc)** 

1. Add inbound rules to the security group.

   1. Determine the IP address to use to connect to EC2 instances in your VPC using Secure Shell (SSH). To determine your public IP address, in a different browser window or tab, you can use the service at [https://checkip.amazonaws.com](https://checkip.amazonaws.com). An example of an IP address is `203.0.113.25/32`.

      In many cases, you might connect through an internet service provider (ISP) or from behind your firewall without a static IP address. If so, find the range of IP addresses used by client computers.
**Warning**  
If you use `0.0.0.0/0` for SSH access, you make it possible for all IP addresses to access your public instances using SSH. This approach is acceptable for a short time in a test environment, but it's unsafe for production environments. In production, authorize only a specific IP address or range of addresses to access your instances using SSH.

   1. In the **Inbound rules** section, choose **Add rule**.

   1. Set the following values for your new inbound rule to allow SSH access to your Amazon EC2 instance. If you do this, you can connect to your Amazon EC2 instance to install the web server and other utilities. You also connect to your EC2 instance to upload content for your web server. 
      + **Type:** **SSH**
      + **Source:** The IP address or range from Step a, for example: **203.0.113.25/32**.

   1. Choose **Add rule**.

   1. Set the following values for your new inbound rule to allow HTTP access to your web server:
      + **Type:** **HTTP**
      + **Source:** **0.0.0.0/0**

1. Choose **Create security group** to create the security group.

   Note the security group ID because you need it later in this tutorial.
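The two inbound rules can also be added with the AWS CLI once the security group exists. In the following sketch, the group ID is a placeholder, and you would replace `203.0.113.25/32` with your own IP address or range.

```
# Allow SSH (port 22) only from your own address or range.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0abc1de2f3456789a \
    --protocol tcp --port 22 \
    --cidr 203.0.113.25/32

# Allow HTTP (port 80) from anywhere so the web server is reachable.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0abc1de2f3456789a \
    --protocol tcp --port 80 \
    --cidr 0.0.0.0/0
```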

## Create a VPC security group for a private DB cluster
<a name="CHAP_Tutorials.WebServerDB.CreateVPC.SecurityGroupDB"></a>

To keep your DB cluster private, create a second security group for private access. To connect to private DB clusters in your VPC, you add inbound rules to your VPC security group that allow traffic from your web server only.

**To create a VPC security group**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. Choose **VPC Dashboard**, choose **Security Groups**, and then choose **Create security group**.

1. On the **Create security group** page, set these values:
   + **Security group name:** **tutorial-db-securitygroup**
   + **Description:** **Tutorial DB Instance Security Group**
   + **VPC:** Choose the VPC that you created earlier, for example: **vpc-*identifier* (tutorial-vpc)**

1. Add inbound rules to the security group.

   1. In the **Inbound rules** section, choose **Add rule**.

   1. Set the following values for your new inbound rule to allow MySQL traffic on port 3306 from your Amazon EC2 instance. If you do this, you can connect from your web server to your DB cluster. By doing so, you can store and retrieve data from your web application to your database. 
      + **Type:** **MySQL/Aurora**
      + **Source:** The identifier of the **tutorial-securitygroup** security group that you created previously in this tutorial, for example: **sg-9edd5cfb**.

1. Choose **Create security group** to create the security group.

## Create a DB subnet group
<a name="CHAP_Tutorials.WebServerDB.CreateVPC.DBSubnetGroup"></a>

A *DB subnet group* is a collection of subnets that you create in a VPC and that you then designate for your DB clusters. A DB subnet group makes it possible for you to specify a particular VPC when creating DB clusters.

**To create a DB subnet group**

1. Identify the private subnets for your database in the VPC.

   1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

   1. Choose **VPC Dashboard**, and then choose **Subnets**.

   1. Note the subnet IDs of the subnets named **tutorial-subnet-private1-us-west-2a** and **tutorial-subnet-private2-us-west-2b**.

      You need the subnet IDs when you create your DB subnet group.

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

   Make sure that you connect to the Amazon RDS console, not to the Amazon VPC console.

1. In the navigation pane, choose **Subnet groups**.

1. Choose **Create DB subnet group**.

1. On the **Create DB subnet group** page, set these values in **Subnet group details**:
   + **Name:** **tutorial-db-subnet-group**
   + **Description:** **Tutorial DB Subnet Group**
   + **VPC:** **tutorial-vpc (vpc-*identifier*)** 

1. In the **Add subnets** section, choose the **Availability Zones** and **Subnets**.

   For this tutorial, choose **us-west-2a** and **us-west-2b** for the **Availability Zones**. For **Subnets**, choose the private subnets you identified in the previous step.

1. Choose **Create**. 

   Your new DB subnet group appears in the DB subnet groups list on the RDS console. You can choose the DB subnet group to see details in the details pane at the bottom of the window. These details include all of the subnets associated with the group.

**Note**  
If you created this VPC to complete [Tutorial: Create a web server and an Amazon Aurora DB cluster](TUT_WebAppWithRDS.md), create the DB cluster by following the instructions in [Create an Amazon Aurora DB cluster](CHAP_Tutorials.WebServerDB.CreateDBCluster.md).

## Deleting the VPC
<a name="CHAP_Tutorials.WebServerDB.CreateVPC.Delete"></a>

After you create the VPC and other resources for this tutorial, you can delete them if they are no longer needed.

**Note**  
If you added resources in the VPC that you created for this tutorial, you might need to delete these before you can delete the VPC. For example, these resources might include Amazon EC2 instances or Amazon RDS DB clusters. For more information, see [Delete your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-vpcs.html#VPC_Deleting) in the *Amazon VPC User Guide*.

**To delete a VPC and related resources**

1. Delete the DB subnet group.

   1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

   1. In the navigation pane, choose **Subnet groups**.

   1. Select the DB subnet group you want to delete, such as **tutorial-db-subnet-group**.

   1. Choose **Delete**, and then choose **Delete** in the confirmation window.

1. Note the VPC ID.

   1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

   1. Choose **VPC Dashboard**, and then choose **VPCs**.

   1. In the list, identify the VPC that you created, such as **tutorial-vpc**.

   1. Note the **VPC ID** of the VPC that you created. You need the VPC ID in later steps.

1. Delete the security groups.

   1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

   1. Choose **VPC Dashboard**, and then choose **Security Groups**.

   1. Select the security group for the Amazon RDS DB instance, such as **tutorial-db-securitygroup**.

   1. For **Actions**, choose **Delete security groups**, and then choose **Delete** on the confirmation page.

   1. On the **Security Groups** page, select the security group for the Amazon EC2 instance, such as **tutorial-securitygroup**.

   1. For **Actions**, choose **Delete security groups**, and then choose **Delete** on the confirmation page.

1. Delete the VPC.

   1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

   1. Choose **VPC Dashboard**, and then choose **VPCs**.

   1. Select the VPC you want to delete, such as **tutorial-vpc**.

   1. For **Actions**, choose **Delete VPC**.

      The confirmation page shows other resources that are associated with the VPC that will also be deleted, including the subnets associated with it.

   1. On the confirmation page, enter **delete**, and then choose **Delete**.
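The same cleanup can be scripted with the AWS CLI. The resource IDs below are placeholders. Note that unlike the console, `delete-vpc` doesn't remove dependent resources such as subnets and internet gateways for you; delete those first.

```
# Delete the tutorial resources in dependency order.
aws rds delete-db-subnet-group \
    --db-subnet-group-name tutorial-db-subnet-group

aws ec2 delete-security-group --group-id sg-0db0db0db0db0db0d
aws ec2 delete-security-group --group-id sg-0ec20ec20ec20ec20

# Delete the VPC after its dependent resources are gone.
aws ec2 delete-vpc --vpc-id vpc-1234567890abcdef0
```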

# Tutorial: Create a VPC for use with a DB cluster (dual-stack mode)
<a name="CHAP_Tutorials.CreateVPCDualStack"></a>

A common scenario includes a DB cluster in a virtual private cloud (VPC) based on the Amazon VPC service. This VPC shares data with a public Amazon EC2 instance that is running in the same VPC.

In this tutorial, you create the VPC for this scenario with a database running in dual-stack mode. Dual-stack mode enables connections over both the IPv4 and IPv6 addressing protocols. For more information about IP addressing, see [Amazon Aurora IP addressing](USER_VPC.WorkingWithRDSInstanceinaVPC.md#USER_VPC.IP_addressing).

Dual-stack network DB clusters are supported in most AWS Regions. For more information, see [Availability of dual-stack network DB clusters](USER_VPC.WorkingWithRDSInstanceinaVPC.md#USER_VPC.IP_addressing.dual-stack-availability). To see the limitations of dual-stack mode, see [Limitations for dual-stack network DB clusters](USER_VPC.WorkingWithRDSInstanceinaVPC.md#USER_VPC.IP_addressing.dual-stack-limitations).

The following diagram shows this scenario.

 

![\[VPC scenario for dual-stack mode\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/con-VPC-sec-grp-dual-stack-aurora.png)


For information about other scenarios, see [Scenarios for accessing a DB cluster in a VPC](USER_VPC.Scenarios.md).

Your DB cluster needs to be available only to your Amazon EC2 instance, and not to the public internet. Thus, you create a VPC with both public and private subnets. The Amazon EC2 instance is hosted in the public subnet, so that it can reach the public internet. The DB cluster is hosted in a private subnet. The Amazon EC2 instance can connect to the DB cluster because it's hosted within the same VPC. However, the DB cluster is not available to the public internet, providing greater security.

This tutorial configures an additional public and private subnet in a separate Availability Zone. These subnets aren't used by the tutorial. An RDS DB subnet group requires a subnet in at least two Availability Zones. The additional subnet makes it easy to configure more than one Aurora DB instance.

To create a DB cluster that uses dual-stack mode, specify **Dual-stack mode** for the **Network type** setting. You can also modify a DB cluster with the same setting. For more information about creating a DB cluster, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md). For more information about modifying a DB cluster, see [Modifying an Amazon Aurora DB cluster](Aurora.Modifying.md).
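In the AWS CLI, the network type is a parameter on both create and modify operations. The identifiers, engine, and subnet group in this sketch are placeholders.

```
# Specify dual-stack networking when creating a DB cluster.
aws rds create-db-cluster \
    --db-cluster-identifier my-dual-stack-cluster \
    --engine aurora-postgresql \
    --network-type DUAL \
    --db-subnet-group-name my-dual-stack-subnet-group \
    --master-username postgres \
    --manage-master-user-password

# Or switch an existing cluster to dual-stack mode.
aws rds modify-db-cluster \
    --db-cluster-identifier my-existing-cluster \
    --network-type DUAL
```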

This tutorial describes configuring a VPC for Amazon Aurora DB clusters. For more information about Amazon VPC, see [Amazon VPC User Guide](https://docs.aws.amazon.com/vpc/latest/userguide/). 

## Create a VPC with private and public subnets
<a name="CHAP_Tutorials.CreateVPCDualStack.VPCAndSubnets"></a>

Use the following procedure to create a VPC with both public and private subnets. 

**To create a VPC and subnets**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the upper-right corner of the AWS Management Console, choose the Region to create your VPC in. This example uses the US East (Ohio) Region.

1. In the upper-left corner, choose **VPC dashboard**. To begin creating a VPC, choose **Create VPC**.

1. For **Resources to create** under **VPC settings**, choose **VPC and more**.

1. For the remaining **VPC settings**, set these values:
   + **Name tag auto-generation** – **tutorial-dual-stack**
   + **IPv4 CIDR block** – **10.0.0.0/16**
   + **IPv6 CIDR block** – **Amazon-provided IPv6 CIDR block**
   + **Tenancy** – **Default**
   + **Number of Availability Zones (AZs)** – **2**
   + **Customize AZs** – Keep the default values.
   + **Number of public subnets** – **2**
   + **Number of private subnets** – **2**
   + **Customize subnets CIDR blocks** – Keep the default values.
   + **NAT gateways ($)** – **None**
   + **Egress only internet gateway** – **No**
   + **VPC endpoints** – **None**
   + **DNS options** – Keep the default values.
**Note**  
Amazon RDS requires at least two subnets in two different Availability Zones to support Multi-AZ DB instance deployments. This tutorial creates a Single-AZ deployment, but the requirement makes it easy to convert to a Multi-AZ DB instance deployment in the future.

1. Choose **Create VPC**.
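The console steps above can also be sketched with the AWS CLI. The subnet CIDR blocks below are illustrative assumptions; `--amazon-provided-ipv6-cidr-block` requests the IPv6 range that dual-stack mode requires:

```shell
# Create the VPC with an IPv4 CIDR and an Amazon-provided IPv6 CIDR block.
aws ec2 create-vpc \
    --cidr-block 10.0.0.0/16 \
    --amazon-provided-ipv6-cidr-block \
    --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=tutorial-dual-stack-vpc}]'

# Create one subnet; repeat for the other public and private subnets,
# varying the CIDR blocks and Availability Zones. Replace vpc-EXAMPLE and
# the IPv6 block with values from the create-vpc output.
aws ec2 create-subnet \
    --vpc-id vpc-EXAMPLE \
    --availability-zone us-east-2a \
    --cidr-block 10.0.0.0/20 \
    --ipv6-cidr-block 2600:1f16:EXAMPLE::/64
```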

## Create a VPC security group for a public Amazon EC2 instance
<a name="CHAP_Tutorials.CreateVPCDualStack.SecurityGroupEC2"></a>

Next, you create a security group for public access. To connect to public EC2 instances in your VPC, add inbound rules to your VPC security group that allow traffic from the internet.

**To create a VPC security group**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. Choose **VPC Dashboard**, choose **Security Groups**, and then choose **Create security group**. 

1. On the **Create security group** page, set these values:
   + **Security group name:** **tutorial-dual-stack-securitygroup**
   + **Description:** **Tutorial Dual-Stack Security Group**
   + **VPC:** Choose the VPC that you created earlier, for example: **vpc-*identifier* (tutorial-dual-stack-vpc)** 

1. Add inbound rules to the security group.

   1. Determine the IP address to use to connect to EC2 instances in your VPC using Secure Shell (SSH).

      An example of an Internet Protocol version 4 (IPv4) address is `203.0.113.25/32`. An example of an Internet Protocol version 6 (IPv6) address range is `2001:db8:1234:1a00::/64`.

      In many cases, you might connect through an internet service provider (ISP) or from behind your firewall without a static IP address. If so, find the range of IP addresses used by client computers.
**Warning**  
If you use `0.0.0.0/0` for IPv4 or `::/0` for IPv6, you make it possible for all IP addresses to access your public instances using SSH. This approach is acceptable for a short time in a test environment, but it's unsafe for production environments. In production, authorize only a specific IP address or range of addresses to access your instances.

   1. In the **Inbound rules** section, choose **Add rule**.

   1. Set the following values for your new inbound rule to allow Secure Shell (SSH) access to your Amazon EC2 instance. If you do this, you can connect to your EC2 instance to install SQL clients and other applications. Specify an IP address so you can access your EC2 instance:
      + **Type:** **SSH**
      + **Source:** The IP address or range from step a. An example of an IPv4 address is **203.0.113.25/32**. An example of an IPv6 address range is **2001:DB8::/32**.

1. Choose **Create security group** to create the security group.

   Note the security group ID because you need it later in this tutorial.
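The same security group can be sketched with the AWS CLI. The VPC ID, group IDs, and source addresses below are placeholders:

```shell
# Create the security group for the public EC2 instance.
aws ec2 create-security-group \
    --group-name tutorial-dual-stack-securitygroup \
    --description "Tutorial Dual-Stack Security Group" \
    --vpc-id vpc-EXAMPLE

# Allow SSH from a specific IPv4 address
# (replace sg-EXAMPLE with the ID returned by create-security-group).
aws ec2 authorize-security-group-ingress \
    --group-id sg-EXAMPLE \
    --protocol tcp --port 22 \
    --cidr 203.0.113.25/32

# Allow SSH from a specific IPv6 range.
aws ec2 authorize-security-group-ingress \
    --group-id sg-EXAMPLE \
    --ip-permissions 'IpProtocol=tcp,FromPort=22,ToPort=22,Ipv6Ranges=[{CidrIpv6=2001:db8:1234:1a00::/64}]'
```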

## Create a VPC security group for a private DB cluster
<a name="CHAP_Tutorials.CreateVPCDualStack.SecurityGroupDB"></a>

To keep your DB cluster private, create a second security group for private access. To connect to private DB clusters in your VPC, add inbound rules to your VPC security group. These allow traffic from your Amazon EC2 instance only.

**To create a VPC security group**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. Choose **VPC Dashboard**, choose **Security Groups**, and then choose **Create security group**.

1. On the **Create security group** page, set these values:
   + **Security group name:** **tutorial-dual-stack-db-securitygroup**
   + **Description:** **Tutorial Dual-Stack DB Instance Security Group**
   + **VPC:** Choose the VPC that you created earlier, for example: **vpc-*identifier* (tutorial-dual-stack-vpc)**

1. Add inbound rules to the security group:

   1. In the **Inbound rules** section, choose **Add rule**.

   1. Set the following values for your new inbound rule to allow MySQL traffic on port 3306 from your Amazon EC2 instance. This rule lets you connect from your EC2 instance to your DB cluster and send data to your database.
      + **Type:** **MySQL/Aurora**
      + **Source:** The identifier of the **tutorial-dual-stack-securitygroup** security group that you created previously in this tutorial, for example **sg-9edd5cfb**.

1. To create the security group, choose **Create security group**.
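With the AWS CLI, the key difference from the public security group is that the inbound rule references the EC2 instance's security group instead of an IP range. The IDs below are placeholders:

```shell
# Create the private DB security group.
aws ec2 create-security-group \
    --group-name tutorial-dual-stack-db-securitygroup \
    --description "Tutorial Dual-Stack DB Instance Security Group" \
    --vpc-id vpc-EXAMPLE

# Allow MySQL traffic (port 3306) only from the EC2 instance's security group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-DB-EXAMPLE \
    --protocol tcp --port 3306 \
    --source-group sg-EC2-EXAMPLE
```

Referencing a security group as the source means the rule follows the EC2 instance even if its IP addresses change.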

## Create a DB subnet group
<a name="CHAP_Tutorials.CreateVPCDualStack.DBSubnetGroup"></a>

A *DB subnet group* is a collection of subnets that you create in a VPC and that you then designate for your DB clusters. By using a DB subnet group, you can specify a particular VPC when creating DB clusters. To create a DB subnet group that is `DUAL` compatible, all subnets must be `DUAL` compatible. To be `DUAL` compatible, a subnet must have an IPv6 CIDR associated with it.

**To create a DB subnet group**

1. Identify the private subnets for your database in the VPC.

   1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

   1. Choose **VPC Dashboard**, and then choose **Subnets**.

   1. Note the subnet IDs of the subnets named **tutorial-dual-stack-subnet-private1-us-east-2a** and **tutorial-dual-stack-subnet-private2-us-east-2b**.

      You will need the subnet IDs when you create your DB subnet group.

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

   Make sure that you connect to the Amazon RDS console, not to the Amazon VPC console.

1. In the navigation pane, choose **Subnet groups**.

1. Choose **Create DB subnet group**.

1. On the **Create DB subnet group** page, set these values in **Subnet group details**:
   + **Name:** **tutorial-dual-stack-db-subnet-group**
   + **Description:** **Tutorial Dual-Stack DB Subnet Group**
   + **VPC:** **tutorial-dual-stack-vpc (vpc-*identifier*)** 

1. In the **Add subnets** section, choose values for the **Availability Zones** and **Subnets** options.

   For this tutorial, choose **us-east-2a** and **us-east-2b** for the **Availability Zones**. For **Subnets**, choose the private subnets you identified in the previous step.

1. Choose **Create**. 

Your new DB subnet group appears in the DB subnet groups list on the RDS console. You can choose the DB subnet group to see its details, including the network types (addressing protocols) that the group supports and all of the subnets associated with it.
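You can also create the DB subnet group with the AWS CLI and confirm its network-type support. The subnet IDs are placeholders for the two private subnets you identified:

```shell
# Create the DB subnet group from the two private subnets.
aws rds create-db-subnet-group \
    --db-subnet-group-name tutorial-dual-stack-db-subnet-group \
    --db-subnet-group-description "Tutorial Dual-Stack DB Subnet Group" \
    --subnet-ids subnet-EXAMPLE1 subnet-EXAMPLE2

# Confirm dual-stack compatibility: look for DUAL in SupportedNetworkTypes.
aws rds describe-db-subnet-groups \
    --db-subnet-group-name tutorial-dual-stack-db-subnet-group \
    --query 'DBSubnetGroups[0].SupportedNetworkTypes'
```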

## Create an Amazon EC2 instance in dual-stack mode
<a name="CHAP_Tutorials.CreateVPCDualStack.CreateEC2Instance"></a>

To create an Amazon EC2 instance, follow the instructions in [Launch an instance using the new launch instance wizard](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-instance-wizard.html) in the *Amazon EC2 User Guide*.

On the **Configure Instance Details** page, set these values and keep the other values as their defaults:
+ **Network** – Choose an existing VPC with both public and private subnets, such as **tutorial-dual-stack-vpc** (vpc-*identifier*) created in [Create a VPC with private and public subnets](#CHAP_Tutorials.CreateVPCDualStack.VPCAndSubnets).
+ **Subnet** – Choose an existing public subnet, such as **subnet-*identifier* | tutorial-dual-stack-subnet-public1-us-east-2a | us-east-2a** created in [Create a VPC with private and public subnets](#CHAP_Tutorials.CreateVPCDualStack.VPCAndSubnets).
+ **Auto-assign Public IP** – Choose **Enable**.
+ **Auto-assign IPv6 IP** – Choose **Enable**.
+ **Firewall (security groups)** – Choose **Select an existing security group**.
+ **Common security groups** – Choose an existing security group, such as the **tutorial-dual-stack-securitygroup** created in [Create a VPC security group for a public Amazon EC2 instance](#CHAP_Tutorials.CreateVPCDualStack.SecurityGroupEC2). Make sure that the security group that you choose includes an inbound rule for Secure Shell (SSH) access.
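For a CLI-based launch, the wizard settings above map roughly to the following `run-instances` parameters. The AMI, key pair, subnet, and security group IDs are placeholders:

```shell
# Launch the instance into the public subnet with a public IPv4 address
# and one auto-assigned IPv6 address.
aws ec2 run-instances \
    --image-id ami-EXAMPLE \
    --instance-type t3.micro \
    --key-name my-key-pair \
    --subnet-id subnet-PUBLIC-EXAMPLE \
    --security-group-ids sg-EC2-EXAMPLE \
    --associate-public-ip-address \
    --ipv6-address-count 1
```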

## Create a DB cluster in dual-stack mode
<a name="CHAP_Tutorials.CreateVPCDualStack.CreateDBInstance"></a>

In this step, you create a DB cluster that runs in dual-stack mode.

**To create a DB instance**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the upper-right corner of the console, choose the AWS Region where you want to create the DB cluster. This example uses the US East (Ohio) Region.

1. In the navigation pane, choose **Databases**.

1. Choose **Create database**.

1. On the **Create database** page, make sure that the **Standard create** option is chosen, and then choose the Aurora MySQL DB engine type.

1. In the **Connectivity** section, set these values:
   + **Network type** – Choose **Dual-stack mode**.  
![\[Network type section in the console with Dual-stack mode selected\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/dual-stack-mode.png)
   + **Virtual private cloud (VPC)** – Choose an existing VPC with both public and private subnets, such as **tutorial-dual-stack-vpc** (vpc-*identifier*) created in [Create a VPC with private and public subnets](#CHAP_Tutorials.CreateVPCDualStack.VPCAndSubnets).

     The VPC must have subnets in different Availability Zones.
   + **DB subnet group** – Choose a DB subnet group for the VPC, such as **tutorial-dual-stack-db-subnet-group** created in [Create a DB subnet group](#CHAP_Tutorials.CreateVPCDualStack.DBSubnetGroup).
   + **Public access** – Choose **No**.
   + **VPC security group (firewall)** – Select **Choose existing**.
   + **Existing VPC security groups** – Choose an existing VPC security group that is configured for private access, such as **tutorial-dual-stack-db-securitygroup** created in [Create a VPC security group for a private DB cluster](#CHAP_Tutorials.CreateVPCDualStack.SecurityGroupDB).

     Remove other security groups, such as the default security group, by choosing the **X** associated with each.
   + **Availability Zone** – Choose **us-east-2a**.

     To avoid cross-AZ traffic, make sure the DB instance and the EC2 instance are in the same Availability Zone.

1. For the remaining sections, specify your DB cluster settings. For information about each setting, see [Settings for Aurora DB clusters](Aurora.CreateInstance.md#Aurora.CreateInstance.Settings).

## Connect to your Amazon EC2 instance and DB cluster
<a name="CHAP_Tutorials.CreateVPCDualStack.Connect"></a>

After you create your Amazon EC2 instance and DB cluster in dual-stack mode, you can connect to each one using the IPv6 protocol. To connect to an Amazon EC2 instance using the IPv6 protocol, follow the instructions in [Connect to your Linux instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html) in the *Amazon EC2 User Guide*.

To connect to your Aurora MySQL DB cluster from the Amazon EC2 instance, follow the instructions in [Connect to an Aurora MySQL DB cluster](CHAP_GettingStartedAurora.CreatingConnecting.Aurora.md#CHAP_GettingStartedAurora.Aurora.Connect).
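For example, from an SSH session on the EC2 instance, you might install a MySQL-compatible client and connect to the cluster's writer endpoint. The package name, endpoint, and user name below are placeholders that vary by operating system and cluster:

```shell
# Install a MySQL-compatible client (Amazon Linux 2023; package name varies by OS).
sudo dnf install -y mariadb105

# Connect to the cluster's writer endpoint on the standard MySQL port.
mysql -h tutorial-dual-stack-cluster.cluster-EXAMPLE.us-east-2.rds.amazonaws.com \
      -P 3306 -u admin -p
```

Because both the EC2 instance and the DB cluster run in dual-stack mode, the connection can resolve and use the endpoint over either IPv4 or IPv6.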

## Deleting the VPC
<a name="CHAP_Tutorials.CreateVPCDualStack.Delete"></a>

After you create the VPC and other resources for this tutorial, you can delete them if they are no longer needed.

If you added resources in the VPC that you created for this tutorial, you might need to delete these before you can delete the VPC. Examples of resources are Amazon EC2 instances or DB clusters. For more information, see [Delete your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-vpcs.html#VPC_Deleting) in the *Amazon VPC User Guide*.

**To delete a VPC and related resources**

1. Delete the DB subnet group:

   1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

   1. In the navigation pane, choose **Subnet groups**.

   1. Select the DB subnet group to delete, such as **tutorial-dual-stack-db-subnet-group**.

   1. Choose **Delete**, and then choose **Delete** in the confirmation window.

1. Note the VPC ID:

   1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

   1. Choose **VPC Dashboard**, and then choose **VPCs**.

   1. In the list, identify the VPC you created, such as **tutorial-dual-stack-vpc**.

   1. Note the **VPC ID** value of the VPC that you created. You need this VPC ID in subsequent steps.

1. Delete the security groups:

   1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

   1. Choose **VPC Dashboard**, and then choose **Security Groups**.

   1. Select the security group for the Amazon RDS DB instance, such as **tutorial-dual-stack-db-securitygroup**.

   1. For **Actions**, choose **Delete security groups**, and then choose **Delete** on the confirmation page.

   1. On the **Security Groups** page, select the security group for the Amazon EC2 instance, such as **tutorial-dual-stack-securitygroup**.

   1. For **Actions**, choose **Delete security groups**, and then choose **Delete** on the confirmation page.

1. Delete the NAT gateway, if you created one. (This tutorial sets **NAT gateways** to **None**, so your VPC might not have one.)

   1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

   1. Choose **VPC Dashboard**, and then choose **NAT Gateways**.

   1. Select the NAT gateway of the VPC that you created. Use the VPC ID to identify the correct NAT gateway.

   1. For **Actions**, choose **Delete NAT gateway**.

   1. On the confirmation page, enter **delete**, and then choose **Delete**.

1. Delete the VPC:

   1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

   1. Choose **VPC Dashboard**, and then choose **VPCs**.

   1. Select the VPC that you want to delete, such as **tutorial-dual-stack-vpc**.

   1. For **Actions**, choose **Delete VPC**.

      The confirmation page shows other resources that are associated with the VPC that will also be deleted, including the subnets associated with it.

   1. On the confirmation page, enter **delete**, and then choose **Delete**.

1. Release any Elastic IP addresses associated with the deleted resources:

   1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

   1. Choose **EC2 Dashboard**, and then choose **Elastic IPs**.

   1. Select the Elastic IP address that you want to release.

   1. For **Actions**, choose **Release Elastic IP addresses**.

   1. On the confirmation page, choose **Release**.