

# Integrating Amazon Aurora MySQL with other AWS services
<a name="AuroraMySQL.Integrating"></a>

Amazon Aurora MySQL integrates with other AWS services so that you can extend your Aurora MySQL DB cluster to use additional capabilities in the AWS Cloud. Your Aurora MySQL DB cluster can use AWS services to do the following:
+ Synchronously or asynchronously invoke an AWS Lambda function using the native functions `lambda_sync` or `lambda_async`. For more information, see [Invoking a Lambda function from an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Lambda.md).
+ Load data from text or XML files stored in an Amazon Simple Storage Service (Amazon S3) bucket into your DB cluster using the `LOAD DATA FROM S3` or `LOAD XML FROM S3` command. For more information, see [Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket](AuroraMySQL.Integrating.LoadFromS3.md).
+ Save data to text files stored in an Amazon S3 bucket from your DB cluster using the `SELECT INTO OUTFILE S3` command. For more information, see [Saving data from an Amazon Aurora MySQL DB cluster into text files in an Amazon S3 bucket](AuroraMySQL.Integrating.SaveIntoS3.md). 
+ Automatically add or remove Aurora Replicas with Application Auto Scaling. For more information, see [Amazon Aurora Auto Scaling with Aurora Replicas](Aurora.Integrating.AutoScaling.md).
+ Perform sentiment analysis with Amazon Comprehend, or run a wide variety of machine learning algorithms with SageMaker AI. For more information, see [Using Amazon Aurora machine learning](aurora-ml.md).

Aurora secures the ability to access other AWS services by using AWS Identity and Access Management (IAM). You grant permission to access other AWS services by creating an IAM role with the necessary permissions, and then associating the role with your DB cluster. For details and instructions on how to permit your Aurora MySQL DB cluster to access other AWS services on your behalf, see [Authorizing Amazon Aurora MySQL to access other AWS services on your behalf](AuroraMySQL.Integrating.Authorizing.md).

# Authorizing Amazon Aurora MySQL to access other AWS services on your behalf
<a name="AuroraMySQL.Integrating.Authorizing"></a>

For your Aurora MySQL DB cluster to access other services on your behalf, create and configure an AWS Identity and Access Management (IAM) role. This role authorizes database users in your DB cluster to access other AWS services. For more information, see [Setting up IAM roles to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.md).

You must also configure your Aurora DB cluster to allow outbound connections to the target AWS service. For more information, see [Enabling network communication from Amazon Aurora to other AWS services](AuroraMySQL.Integrating.Authorizing.Network.md).

If you do so, your database users can perform these actions using other AWS services:
+ Synchronously or asynchronously invoke an AWS Lambda function using the native functions `lambda_sync` or `lambda_async`. Or, asynchronously invoke an AWS Lambda function using the `mysql.lambda_async` procedure. For more information, see [Invoking a Lambda function with an Aurora MySQL native function](AuroraMySQL.Integrating.NativeLambda.md).
+ Load data from text or XML files stored in an Amazon S3 bucket into your DB cluster by using the `LOAD DATA FROM S3` or `LOAD XML FROM S3` statement. For more information, see [Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket](AuroraMySQL.Integrating.LoadFromS3.md).
+ Save data from your DB cluster into text files stored in an Amazon S3 bucket by using the `SELECT INTO OUTFILE S3` statement. For more information, see [Saving data from an Amazon Aurora MySQL DB cluster into text files in an Amazon S3 bucket](AuroraMySQL.Integrating.SaveIntoS3.md).
+ Export log data to Amazon CloudWatch Logs. For more information, see [Publishing Amazon Aurora MySQL logs to Amazon CloudWatch Logs](AuroraMySQL.Integrating.CloudWatch.md).
+ Automatically add or remove Aurora Replicas with Application Auto Scaling. For more information, see [Amazon Aurora Auto Scaling with Aurora Replicas](Aurora.Integrating.AutoScaling.md).

# Setting up IAM roles to access AWS services
<a name="AuroraMySQL.Integrating.Authorizing.IAM"></a>

To permit your Aurora DB cluster to access another AWS service, do the following:

1. Create an IAM policy that grants permission to the AWS service. For more information, see the following topics.
   + [Creating an IAM policy to access Amazon S3 resources](AuroraMySQL.Integrating.Authorizing.IAM.S3CreatePolicy.md)
   + [Creating an IAM policy to access AWS Lambda resources](AuroraMySQL.Integrating.Authorizing.IAM.LambdaCreatePolicy.md)
   + [Creating an IAM policy to access CloudWatch Logs resources](AuroraMySQL.Integrating.Authorizing.IAM.CWCreatePolicy.md)
   + [Creating an IAM policy to access AWS KMS resources](AuroraMySQL.Integrating.Authorizing.IAM.KMSCreatePolicy.md)

1. Create an IAM role and attach the policy that you created. For more information, see [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md).

1. Associate that IAM role with your Aurora DB cluster. For more information, see [Associating an IAM role with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.md).

# Creating an IAM policy to access Amazon S3 resources
<a name="AuroraMySQL.Integrating.Authorizing.IAM.S3CreatePolicy"></a>

Aurora can access Amazon S3 resources to either load data to or save data from an Aurora DB cluster. However, you must first create an IAM policy that provides the bucket and object permissions that allow Aurora to access Amazon S3.

The following table lists the Aurora features that can access an Amazon S3 bucket on your behalf, and the minimum bucket and object permissions required by each feature.


| Feature | Bucket permissions | Object permissions | 
| --- | --- | --- | 
|  `LOAD DATA FROM S3`  |  `ListBucket`  |  `GetObject` `GetObjectVersion`  | 
|  `LOAD XML FROM S3`  |  `ListBucket`  |  `GetObject` `GetObjectVersion`  | 
|  `SELECT INTO OUTFILE S3`  |  `ListBucket`  |  `AbortMultipartUpload` `DeleteObject` `GetObject` `ListMultipartUploadParts` `PutObject`  | 
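These minimums can be captured as a small lookup table, for example in a deployment script that assembles least-privilege policies. This is an illustrative sketch; the dictionary and function names are not part of any AWS SDK:

```python
# Minimum S3 permissions required by each Aurora MySQL feature,
# transcribed from the table above (illustrative only).
AURORA_S3_PERMISSIONS = {
    "LOAD DATA FROM S3": {
        "bucket": ["s3:ListBucket"],
        "object": ["s3:GetObject", "s3:GetObjectVersion"],
    },
    "LOAD XML FROM S3": {
        "bucket": ["s3:ListBucket"],
        "object": ["s3:GetObject", "s3:GetObjectVersion"],
    },
    "SELECT INTO OUTFILE S3": {
        "bucket": ["s3:ListBucket"],
        "object": [
            "s3:AbortMultipartUpload",
            "s3:DeleteObject",
            "s3:GetObject",
            "s3:ListMultipartUploadParts",
            "s3:PutObject",
        ],
    },
}

def actions_for(features):
    """Return the deduplicated, sorted list of s3:* actions for the given features."""
    actions = set()
    for feature in features:
        spec = AURORA_S3_PERMISSIONS[feature]
        actions.update(spec["bucket"])
        actions.update(spec["object"])
    return sorted(actions)
```

For example, a cluster that only loads data needs just the three `LOAD DATA FROM S3` actions, while one that also exports needs the union with the `SELECT INTO OUTFILE S3` set.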

The following policy adds the permissions that might be required by Aurora to access an Amazon S3 bucket on your behalf. 

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AllowAuroraToExampleBucket",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:AbortMultipartUpload",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:GetObjectVersion",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket/*",
                "arn:aws:s3:::amzn-s3-demo-bucket"
            ]
        }
    ]
}
```


**Note**  
 Make sure to include both entries for the `Resource` value. Aurora needs the permissions on both the bucket itself and all the objects inside the bucket.   
Based on your use case, you might not need to add all of the permissions in the sample policy. Also, other permissions might be required. For example, if your Amazon S3 bucket is encrypted, you need to add `kms:Decrypt` permissions.
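As a sketch of how the note above might be applied in automation, the following hypothetical helper builds this policy document for a named bucket, always emitting both `Resource` entries, and optionally appending a `kms:Decrypt` statement for encrypted buckets. The function name and structure are illustrative, not an AWS API:

```python
import json

def aurora_s3_policy(bucket, actions, encrypted=False):
    """Build an IAM policy document granting Aurora access to one S3 bucket.

    Both Resource entries are required: the bucket ARN for bucket-level
    actions (ListBucket) and the /* ARN for object-level actions.
    """
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowAuroraToExampleBucket",
                "Effect": "Allow",
                "Action": sorted(actions),
                "Resource": [
                    f"arn:aws:s3:::{bucket}/*",
                    f"arn:aws:s3:::{bucket}",
                ],
            }
        ],
    }
    if encrypted:
        # Encrypted buckets additionally need kms:Decrypt on the bucket's key.
        policy["Statement"].append({
            "Sid": "AllowDecryptForEncryptedBucket",
            "Effect": "Allow",
            "Action": ["kms:Decrypt"],
            "Resource": "*",  # narrow this to the specific key ARN in practice
        })
    return json.dumps(policy, indent=4)
```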

You can use the following steps to create an IAM policy that provides the minimum required permissions for Aurora to access an Amazon S3 bucket on your behalf. To allow Aurora to access all of your Amazon S3 buckets, you can skip these steps and use either the `AmazonS3ReadOnlyAccess` or `AmazonS3FullAccess` predefined IAM policy instead of creating your own.

**To create an IAM policy to grant access to your Amazon S3 resources**

1. Open the [IAM console](https://console.aws.amazon.com/iam/home?#home).

1. In the navigation pane, choose **Policies**.

1. Choose **Create policy**.

1. On the **Visual editor** tab, choose **Choose a service**, and then choose **S3**.

1. For **Actions**, choose **Expand all**, and then choose the bucket permissions and object permissions needed for the IAM policy.

   Object permissions are permissions for object operations in Amazon S3, and need to be granted for objects in a bucket, not the bucket itself. For more information about permissions for object operations in Amazon S3, see [Permissions for object operations](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-actions.html#using-with-s3-actions-related-to-objects).

1. Choose **Resources**, and choose **Add ARN** for **bucket**.

1. In the **Add ARN(s)** dialog box, provide the details about your resource, and choose **Add**.

   Specify the Amazon S3 bucket to allow access to. For instance, if you want to allow Aurora to access the Amazon S3 bucket named *amzn-s3-demo-bucket*, then set the Amazon Resource Name (ARN) value to `arn:aws:s3:::amzn-s3-demo-bucket`.

1. If the **object** resource is listed, choose **Add ARN** for **object**.

1. In the **Add ARN(s)** dialog box, provide the details about your resource.

   For the Amazon S3 bucket, specify the Amazon S3 bucket to allow access to. For the object, you can choose **Any** to grant permissions to any object in the bucket.
**Note**  
You can set **Amazon Resource Name (ARN)** to a more specific ARN value in order to allow Aurora to access only specific files or folders in an Amazon S3 bucket. For more information about how to define an access policy for Amazon S3, see [Managing access permissions to your Amazon S3 resources](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-access-control.html).

1. (Optional) Choose **Add ARN** for **bucket** to add another Amazon S3 bucket to the policy, and repeat the previous steps for the bucket.
**Note**  
You can repeat this to add corresponding bucket permission statements to your policy for each Amazon S3 bucket that you want Aurora to access. Optionally, you can also grant access to all buckets and objects in Amazon S3.

1. Choose **Review policy**.

1. For **Name**, enter a name for your IAM policy, for example `AllowAuroraToExampleBucket`. You use this name when you create an IAM role to associate with your Aurora DB cluster. You can also add an optional **Description** value.

1. Choose **Create policy**.

1. Complete the steps in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md).

# Creating an IAM policy to access AWS Lambda resources
<a name="AuroraMySQL.Integrating.Authorizing.IAM.LambdaCreatePolicy"></a>

You can create an IAM policy that provides the minimum required permissions for Aurora to invoke an AWS Lambda function on your behalf.

The following policy adds the permissions required by Aurora to invoke an AWS Lambda function on your behalf.

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "AllowAuroraToExampleFunction",
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:example_function"
    }
  ]
}
```


You can use the following steps to create an IAM policy that provides the minimum required permissions for Aurora to invoke an AWS Lambda function on your behalf. To allow Aurora to invoke all of your AWS Lambda functions, you can skip these steps and use the predefined `AWSLambdaRole` policy instead of creating your own.
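If you script policy creation, the function ARN and policy document can be assembled as follows. This is a hedged sketch; the helper name is hypothetical, but the ARN format is the standard `arn:aws:lambda:region:account-id:function:function-name` form:

```python
import json

def lambda_invoke_policy(region, account_id, function_name):
    """Build the minimal policy allowing Aurora to invoke one Lambda function."""
    arn = f"arn:aws:lambda:{region}:{account_id}:function:{function_name}"
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowAuroraToExampleFunction",
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",  # minimum permission Aurora needs
            "Resource": arn,
        }],
    }, indent=2)
```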

**To create an IAM policy to grant permission to invoke your AWS Lambda functions**

1. Open the [IAM console](https://console.aws.amazon.com/iam/home?#home).

1. In the navigation pane, choose **Policies**.

1. Choose **Create policy**.

1. On the **Visual editor** tab, choose **Choose a service**, and then choose **Lambda**.

1. For **Actions**, choose **Expand all**, and then choose the AWS Lambda permissions needed for the IAM policy.

   Ensure that `InvokeFunction` is selected. It is the minimum required permission to enable Amazon Aurora to invoke an AWS Lambda function.

1. Choose **Resources** and choose **Add ARN** for **function**.

1. In the **Add ARN(s)** dialog box, provide the details about your resource.

   Specify the Lambda function to allow access to. For instance, if you want to allow Aurora to access a Lambda function named `example_function`, then set the ARN value to `arn:aws:lambda:us-east-1:123456789012:function:example_function`.

   For more information on how to define an access policy for AWS Lambda, see [Authentication and access control for AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/lambda-auth-and-access-control.html).

1. Optionally, choose **Add additional permissions** to add another AWS Lambda function to the policy, and repeat the previous steps for the function.
**Note**  
You can repeat this to add corresponding function permission statements to your policy for each AWS Lambda function that you want Aurora to access.

1. Choose **Review policy**.

1. Set **Name** to a name for your IAM policy, for example `AllowAuroraToExampleFunction`. You use this name when you create an IAM role to associate with your Aurora DB cluster. You can also add an optional **Description** value.

1. Choose **Create policy**.

1. Complete the steps in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md).

# Creating an IAM policy to access CloudWatch Logs resources
<a name="AuroraMySQL.Integrating.Authorizing.IAM.CWCreatePolicy"></a>

Aurora can access CloudWatch Logs to export audit log data from an Aurora DB cluster. However, you must first create an IAM policy that provides the log group and log stream permissions that allow Aurora to access CloudWatch Logs. 

The following policy adds the permissions required by Aurora to access Amazon CloudWatch Logs on your behalf, and the minimum required permissions to create log groups and export data. 

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "EnableCreationAndManagementOfRDSCloudwatchLogEvents",
            "Effect": "Allow",
            "Action": [
                "logs:GetLogEvents",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:log-group:/aws/rds/*:log-stream:*"
        },
        {
            "Sid": "EnableCreationAndManagementOfRDSCloudwatchLogGroupsAndStreams",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:DescribeLogStreams",
                "logs:PutRetentionPolicy",
                "logs:CreateLogGroup"
            ],
            "Resource": "arn:aws:logs:*:*:log-group:/aws/rds/*"
        }
    ]
}
```


You can modify the ARNs in the policy to restrict access to a specific AWS Region and account.
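As an illustration of that restriction, the following hypothetical helper substitutes a Region and account into the wildcard ARNs used in the policy above:

```python
def scoped_log_arns(region="*", account="*"):
    """Return the (log-stream ARN, log-group ARN) pair from the policy above,
    optionally restricted to one AWS Region and account."""
    prefix = f"arn:aws:logs:{region}:{account}"
    return (
        f"{prefix}:log-group:/aws/rds/*:log-stream:*",  # for log-event actions
        f"{prefix}:log-group:/aws/rds/*",               # for group/stream actions
    )
```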

You can use the following steps to create an IAM policy that provides the minimum required permissions for Aurora to access CloudWatch Logs on your behalf. To allow Aurora full access to CloudWatch Logs, you can skip these steps and use the `CloudWatchLogsFullAccess` predefined IAM policy instead of creating your own. For more information, see [Using identity-based policies (IAM policies) for CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-identity-based-access-control-cwl.html#managed-policies-cwl) in the *Amazon CloudWatch User Guide*.

**To create an IAM policy to grant access to your CloudWatch Logs resources**

1. Open the [IAM console](https://console.aws.amazon.com/iam/home?#home).

1. In the navigation pane, choose **Policies**.

1. Choose **Create policy**.

1. On the **Visual editor** tab, choose **Choose a service**, and then choose **CloudWatch Logs**.

1. For **Actions**, choose **Expand all** (on the right), and then choose the Amazon CloudWatch Logs permissions needed for the IAM policy.

   Ensure that the following permissions are selected:
   + `CreateLogGroup`
   + `CreateLogStream`
   + `DescribeLogStreams`
   + `GetLogEvents`
   + `PutLogEvents`
   + `PutRetentionPolicy`

1. Choose **Resources** and choose **Add ARN** for **log-group**.

1. In the **Add ARN(s)** dialog box, enter the following values:
   + **Region** – An AWS Region or `*`
   + **Account** – An account number or `*`
   + **Log Group Name** – `/aws/rds/*`

1. In the **Add ARN(s)** dialog box, choose **Add**.

1. Choose **Add ARN** for **log-stream**.

1. In the **Add ARN(s)** dialog box, enter the following values:
   + **Region** – An AWS Region or `*`
   + **Account** – An account number or `*`
   + **Log Group Name** – `/aws/rds/*`
   + **Log Stream Name** – `*`

1. In the **Add ARN(s)** dialog box, choose **Add**.

1. Choose **Review policy**.

1. Set **Name** to a name for your IAM policy, for example `AmazonRDSCloudWatchLogs`. You use this name when you create an IAM role to associate with your Aurora DB cluster. You can also add an optional **Description** value.

1. Choose **Create policy**.

1. Complete the steps in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md).

# Creating an IAM policy to access AWS KMS resources
<a name="AuroraMySQL.Integrating.Authorizing.IAM.KMSCreatePolicy"></a>

Aurora can access the AWS KMS keys used for encrypting database backups. However, you must first create an IAM policy that provides the permissions that allow Aurora to access KMS keys.

The following policy adds the permissions required by Aurora to access KMS keys on your behalf.

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "AllowAuroraToAccessKey",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt"
      ],
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/key-ID"
    }
  ]
}
```


You can use the following steps to create an IAM policy that provides the minimum required permissions for Aurora to access KMS keys on your behalf.

**To create an IAM policy to grant access to your KMS keys**

1. Open the [IAM console](https://console.aws.amazon.com/iam/home?#home).

1. In the navigation pane, choose **Policies**.

1. Choose **Create policy**.

1. On the **Visual editor** tab, choose **Choose a service**, and then choose **KMS**.

1. For **Actions**, choose **Write**, and then choose **Decrypt**.

1. Choose **Resources**, and choose **Add ARN**.

1. In the **Add ARN(s)** dialog box, enter the following values:
   + **Region** – Type the AWS Region, such as `us-west-2`.
   + **Account** – Type the account number.
   + **Key id** – Type the KMS key identifier.

1. In the **Add ARN(s)** dialog box, choose **Add**.

1. Choose **Review policy**.

1. Set **Name** to a name for your IAM policy, for example `AmazonRDSKMSKey`. You use this name when you create an IAM role to associate with your Aurora DB cluster. You can also add an optional **Description** value.

1. Choose **Create policy**.

1. Complete the steps in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md).

# Creating an IAM role to allow Amazon Aurora to access AWS services
<a name="AuroraMySQL.Integrating.Authorizing.IAM.CreateRole"></a>

After creating an IAM policy to allow Aurora to access AWS resources, you must create an IAM role and attach the IAM policy to the new IAM role.

To create an IAM role to permit your Amazon RDS cluster to communicate with other AWS services on your behalf, take the following steps.<a name="Create.IAMRole.AWSServices"></a>

**To create an IAM role to allow Amazon RDS to access AWS services**

1. Open the [IAM console](https://console.aws.amazon.com/iam/home?#home).

1. In the navigation pane, choose **Roles**.

1. Choose **Create role**.

1. Under **AWS service**, choose **RDS**.

1. Under **Select your use case**, choose **RDS – Add Role to Database**.

1. Choose **Next**.

1. On the **Permissions policies** page, enter the name of your policy in the **Search** field.

1. When it appears in the list, select the policy that you defined earlier using the instructions in one of the following sections:
   + [Creating an IAM policy to access Amazon S3 resources](AuroraMySQL.Integrating.Authorizing.IAM.S3CreatePolicy.md)
   + [Creating an IAM policy to access AWS Lambda resources](AuroraMySQL.Integrating.Authorizing.IAM.LambdaCreatePolicy.md)
   + [Creating an IAM policy to access CloudWatch Logs resources](AuroraMySQL.Integrating.Authorizing.IAM.CWCreatePolicy.md)
   + [Creating an IAM policy to access AWS KMS resources](AuroraMySQL.Integrating.Authorizing.IAM.KMSCreatePolicy.md)

1. Choose **Next**.

1. In **Role name**, enter a name for your IAM role, for example `RDSLoadFromS3`. You can also add an optional **Description** value.

1. Choose **Create role**.

1. Complete the steps in [Associating an IAM role with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.md).

# Associating an IAM role with an Amazon Aurora MySQL DB cluster
<a name="AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster"></a>

To permit database users in an Amazon Aurora DB cluster to access other AWS services, you associate the IAM role that you created in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md) with that DB cluster. You can also have AWS create a new IAM role by associating the service directly.

To associate an IAM role with a DB cluster, you do two things:

1. Add the role to the list of associated roles for a DB cluster by using the RDS console, the [add-role-to-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/add-role-to-db-cluster.html) AWS CLI command, or the [AddRoleToDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_AddRoleToDBCluster.html) RDS API operation.

   You can add a maximum of five IAM roles for each Aurora DB cluster.

1. Set the cluster-level parameter for the related AWS service to the ARN for the associated IAM role.

   The following table describes the cluster-level parameter names for the IAM roles used to access other AWS services.    
<a name="aurora_cluster_params_iam_roles"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.html)

To associate an IAM role to permit your Amazon RDS cluster to communicate with other AWS services on your behalf, take the following steps.

## Console
<a name="AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.CON"></a>

**To associate an IAM role with an Aurora DB cluster using the console**

1. Open the RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose **Databases**.

1. Choose the name of the Aurora DB cluster that you want to associate an IAM role with to show its details.

1. On the **Connectivity & security** tab, in the **Manage IAM roles** section, do one of the following:
   + **Select IAM roles to add to this cluster** (default)
   + **Select a service to connect to this cluster**  
![\[Associate an IAM role with a DB cluster\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/AuroraAssociateIAMRole-02.png)

1. To use an existing IAM role, choose it from the menu, then choose **Add role**.

   If adding the role is successful, its status shows as `Pending`, then `Available`.

1. To connect a service directly:

   1. Choose **Select a service to connect to this cluster**.

   1. Choose the service from the menu, then choose **Connect service**.

   1. For **Connect cluster to *Service Name***, enter the Amazon Resource Name (ARN) to use to connect to the service, then choose **Connect service**.

   AWS creates a new IAM role for connecting to the service. Its status shows as `Pending`, then `Available`.

1. (Optional) To stop associating an IAM role with a DB cluster and remove the related permission, choose the role and then choose **Delete**.

**To set the cluster-level parameter for the associated IAM role**

1. In the RDS console, choose **Parameter groups** in the navigation pane.

1. If you are already using a custom DB parameter group, you can select that group to use instead of creating a new DB cluster parameter group. If you are using the default DB cluster parameter group, create a new DB cluster parameter group, as described in the following steps:

   1. Choose **Create parameter group**.

   1. For **Parameter group family**, choose `aurora-mysql8.0` for an Aurora MySQL 8.0-compatible DB cluster, or `aurora-mysql5.7` for an Aurora MySQL 5.7-compatible DB cluster.

   1. For **Type**, choose **DB Cluster Parameter Group**. 

   1. For **Group name**, type the name of your new DB cluster parameter group.

   1. For **Description**, type a description for your new DB cluster parameter group.  
![\[Create a DB cluster parameter group\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/AuroraAssociateIAMRole-03.png)

   1. Choose **Create**. 

1. On the **Parameter groups** page, select your DB cluster parameter group and choose **Edit** for **Parameter group actions**.

1. Set the appropriate cluster-level [parameters](#aurora_cluster_params_iam_roles) to the related IAM role ARN values.

   For example, you can set just the `aws_default_s3_role` parameter to `arn:aws:iam::123456789012:role/AllowS3Access`.

1. Choose **Save changes**.

1. To change the DB cluster parameter group for your DB cluster, complete the following steps:

   1. Choose **Databases**, and then choose your Aurora DB cluster.

   1. Choose **Modify**.

   1. Scroll to **Database options** and set **DB cluster parameter group** to the DB cluster parameter group.

   1. Choose **Continue**.

   1. Verify your changes and then choose **Apply immediately**.

   1. Choose **Modify cluster**.

   1. Choose **Databases**, and then choose the primary instance for your DB cluster.

   1. For **Actions**, choose **Reboot**.

      When the instance has rebooted, your IAM role is associated with your DB cluster.

      For more information about cluster parameter groups, see [Aurora MySQL configuration parameters](AuroraMySQL.Reference.ParameterGroups.md).

## CLI
<a name="AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.CLI"></a>

**To associate an IAM role with a DB cluster by using the AWS CLI**

1. Call the `add-role-to-db-cluster` command from the AWS CLI to add the ARNs for your IAM roles to the DB cluster, as shown following. 

   ```
   PROMPT> aws rds add-role-to-db-cluster --db-cluster-identifier my-cluster --role-arn arn:aws:iam::123456789012:role/AllowAuroraS3Role
   PROMPT> aws rds add-role-to-db-cluster --db-cluster-identifier my-cluster --role-arn arn:aws:iam::123456789012:role/AllowAuroraLambdaRole
   ```

1. If you are using the default DB cluster parameter group, create a new DB cluster parameter group. If you are already using a custom DB parameter group, you can use that group instead of creating a new DB cluster parameter group.

   To create a new DB cluster parameter group, call the `create-db-cluster-parameter-group` command from the AWS CLI, as shown following.

   ```
   PROMPT> aws rds create-db-cluster-parameter-group --db-cluster-parameter-group-name AllowAWSAccess \
        --db-parameter-group-family aurora-mysql5.7 --description "Allow access to Amazon S3 and AWS Lambda"
   ```

   For an Aurora MySQL 5.7-compatible DB cluster, specify `aurora-mysql5.7` for `--db-parameter-group-family`. For an Aurora MySQL 8.0-compatible DB cluster, specify `aurora-mysql8.0` for `--db-parameter-group-family`.

1. Set the appropriate cluster-level parameter or parameters and the related IAM role ARN values in your DB cluster parameter group, as shown following. 

   ```
   PROMPT> aws rds modify-db-cluster-parameter-group --db-cluster-parameter-group-name AllowAWSAccess \
       --parameters "ParameterName=aws_default_s3_role,ParameterValue=arn:aws:iam::123456789012:role/AllowAuroraS3Role,method=pending-reboot" \
       --parameters "ParameterName=aws_default_lambda_role,ParameterValue=arn:aws:iam::123456789012:role/AllowAuroraLambdaRole,method=pending-reboot"
   ```

1. Modify the DB cluster to use the new DB cluster parameter group and then reboot the cluster, as shown following.

   ```
   PROMPT> aws rds modify-db-cluster --db-cluster-identifier my-cluster --db-cluster-parameter-group-name AllowAWSAccess
   PROMPT> aws rds reboot-db-instance --db-instance-identifier my-cluster-primary
   ```

   When the instance has rebooted, your IAM roles are associated with your DB cluster.

   For more information about cluster parameter groups, see [Aurora MySQL configuration parameters](AuroraMySQL.Reference.ParameterGroups.md).
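The parameter settings in step 3 can also be expressed as the data structure that the AWS SDKs expect; for example, boto3's `modify_db_cluster_parameter_group` takes a `Parameters` list of this shape. The sketch below only builds the list and makes no API call:

```python
def iam_role_parameters(role_arns_by_parameter):
    """Build the Parameters list for modify-db-cluster-parameter-group,
    applying each IAM-role parameter at the next reboot."""
    return [
        {
            "ParameterName": name,
            "ParameterValue": arn,
            "ApplyMethod": "pending-reboot",
        }
        for name, arn in sorted(role_arns_by_parameter.items())
    ]

# The same two parameters set by the CLI commands above.
params = iam_role_parameters({
    "aws_default_s3_role": "arn:aws:iam::123456789012:role/AllowAuroraS3Role",
    "aws_default_lambda_role": "arn:aws:iam::123456789012:role/AllowAuroraLambdaRole",
})
```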

# Enabling network communication from Amazon Aurora to other AWS services
<a name="AuroraMySQL.Integrating.Authorizing.Network"></a>

To use certain other AWS services with Amazon Aurora, the network configuration of your Aurora DB cluster must allow outbound connections to endpoints for those services. The following operations require this network configuration.
+  Invoking AWS Lambda functions. To learn about this feature, see [Invoking a Lambda function with an Aurora MySQL native function](AuroraMySQL.Integrating.NativeLambda.md). 
+  Accessing files from Amazon S3. To learn about this feature, see [Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket](AuroraMySQL.Integrating.LoadFromS3.md) and [Saving data from an Amazon Aurora MySQL DB cluster into text files in an Amazon S3 bucket](AuroraMySQL.Integrating.SaveIntoS3.md). 
+ Accessing AWS KMS endpoints. AWS KMS access is required to use database activity streams with Aurora MySQL. To learn about this feature, see [Monitoring Amazon Aurora with Database Activity Streams](DBActivityStreams.md).
+ Accessing SageMaker AI endpoints. SageMaker AI access is required to use SageMaker AI machine learning with Aurora MySQL. To learn about this feature, see [Using Amazon Aurora machine learning with Aurora MySQL](mysql-ml.md).

Aurora returns the following error messages if it can't connect to a service endpoint.

```
ERROR 1871 (HY000): S3 API returned error: Network Connection
```

```
ERROR 1873 (HY000): Lambda API returned error: Network Connection. Unable to connect to endpoint
```

```
ERROR 1815 (HY000): Internal error: Unable to initialize S3Stream
```

For database activity streams using Aurora MySQL, the activity stream stops functioning if the DB cluster can't access the AWS KMS endpoint. Aurora notifies you about this issue using RDS Events.

If you encounter these messages while using the corresponding AWS services, check if your Aurora DB cluster is public or private. If your Aurora DB cluster is private, you must configure it to enable connections.

For an Aurora DB cluster to be public, it must be marked as publicly accessible. If you look at the details for the DB cluster in the AWS Management Console, **Publicly Accessible** is **Yes** if this is the case. The DB cluster must also be in an Amazon VPC public subnet. For more information about publicly accessible DB instances, see [Working with a DB cluster in a VPC](USER_VPC.WorkingWithRDSInstanceinaVPC.md). For more information about public Amazon VPC subnets, see [Your VPC and subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html).

If your Aurora DB cluster isn't both publicly accessible and in a VPC public subnet, it is private. You might have a DB cluster that is private and want to use one of the features that requires this network configuration. If so, configure the cluster so that it can connect to internet addresses through Network Address Translation (NAT). As an alternative for Amazon S3, Amazon SageMaker AI, and AWS Lambda, you can instead configure the VPC to have a VPC endpoint for the other service associated with the DB cluster's route table. For more information, see [Working with a DB cluster in a VPC](USER_VPC.WorkingWithRDSInstanceinaVPC.md). For more information about configuring NAT in your VPC, see [NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html). For more information about configuring VPC endpoints, see [VPC endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html). You can also create an S3 gateway endpoint to access your S3 bucket. For more information, see [Gateway endpoints for Amazon S3](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html).

You might also have to open the ephemeral ports in your network access control lists (ACLs) and in the outbound rules for your VPC security group. For more information on ephemeral ports for network ACLs, see [Ephemeral ports](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html#nacl-ephemeral-ports) in the *Amazon Virtual Private Cloud User Guide*.

## Related topics
<a name="AuroraMySQL.Integrating.Authorizing.RelatedTopics"></a>
+ [Integrating Aurora with other AWS services](Aurora.Integrating.md)
+ [Managing an Amazon Aurora DB cluster](CHAP_Aurora.md)

# Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket
<a name="AuroraMySQL.Integrating.LoadFromS3"></a><a name="load_from_s3"></a><a name="load_data"></a><a name="load_xml"></a>

You can use the `LOAD DATA FROM S3` or `LOAD XML FROM S3` statement to load data from files stored in an Amazon S3 bucket. In Aurora MySQL, the files are first stored on the local disk, and then imported into the database. After the import completes, the local files are deleted.

**Contents**
+ [Giving Aurora access to Amazon S3](#AuroraMySQL.Integrating.LoadFromS3.Authorize)
+ [Granting privileges to load data in Amazon Aurora MySQL](#AuroraMySQL.Integrating.LoadFromS3.Grant)
+ [Specifying the path (URI) to an Amazon S3 bucket](#AuroraMySQL.Integrating.LoadFromS3.URI)
+ [LOAD DATA FROM S3](#AuroraMySQL.Integrating.LoadFromS3.Text)
  + [Syntax](#AuroraMySQL.Integrating.LoadFromS3.Text.Syntax)
  + [Parameters](#AuroraMySQL.Integrating.LoadFromS3.Text.Parameters)
  + [Using a manifest to specify data files to load](#AuroraMySQL.Integrating.LoadFromS3.Manifest)
    + [Verifying loaded files using the aurora\_s3\_load\_history table](#AuroraMySQL.Integrating.LoadFromS3.Manifest.History)
  + [Examples](#AuroraMySQL.Integrating.LoadFromS3.Text.Examples)
+ [LOAD XML FROM S3](#AuroraMySQL.Integrating.LoadFromS3.XML)
  + [Syntax](#AuroraMySQL.Integrating.LoadFromS3.XML.Syntax)
  + [Parameters](#AuroraMySQL.Integrating.LoadFromS3.XML.Parameters)

## Giving Aurora access to Amazon S3
<a name="AuroraMySQL.Integrating.LoadFromS3.Authorize"></a>

Before you can load data from an Amazon S3 bucket, you must first give your Aurora MySQL DB cluster permission to access Amazon S3.

**To give Aurora MySQL access to Amazon S3**

1. Create an AWS Identity and Access Management (IAM) policy that provides the bucket and object permissions that allow your Aurora MySQL DB cluster to access Amazon S3. For instructions, see [Creating an IAM policy to access Amazon S3 resources](AuroraMySQL.Integrating.Authorizing.IAM.S3CreatePolicy.md).
**Note**  
In Aurora MySQL version 3.05 and higher, you can load objects that are encrypted using customer-managed AWS KMS keys. To do so, include the `kms:Decrypt` permission in your IAM policy. For more information, see [Creating an IAM policy to access AWS KMS resources](AuroraMySQL.Integrating.Authorizing.IAM.KMSCreatePolicy.md).  
You don't need this permission to load objects that are encrypted using AWS managed keys or Amazon S3 managed keys (SSE-S3).

1. Create an IAM role, and attach the IAM policy you created in [Creating an IAM policy to access Amazon S3 resources](AuroraMySQL.Integrating.Authorizing.IAM.S3CreatePolicy.md) to the new IAM role. For instructions, see [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md).

1. Make sure the DB cluster is using a custom DB cluster parameter group.

   For more information about creating a custom DB cluster parameter group, see [Creating a DB cluster parameter group in Amazon Aurora](USER_WorkingWithParamGroups.CreatingCluster.md).

1. For Aurora MySQL version 2, set either the `aurora_load_from_s3_role` or `aws_default_s3_role` DB cluster parameter to the Amazon Resource Name (ARN) of the new IAM role. If an IAM role isn't specified for `aurora_load_from_s3_role`, Aurora uses the IAM role specified in `aws_default_s3_role`.

   For Aurora MySQL version 3, use `aws_default_s3_role`.

   If the cluster is part of an Aurora global database, set this parameter for each Aurora cluster in the global database. Although only the primary cluster in an Aurora global database can load data, another cluster might be promoted by the failover mechanism and become the primary cluster.

   For more information about DB cluster parameters, see [Amazon Aurora DB cluster and DB instance parameters](USER_WorkingWithDBClusterParamGroups.md#Aurora.Managing.ParameterGroups).

1. To permit database users in an Aurora MySQL DB cluster to access Amazon S3, associate the role that you created in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md) with the DB cluster. For an Aurora global database, associate the role with each Aurora cluster in the global database. For information about associating an IAM role with a DB cluster, see [Associating an IAM role with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.md).

1. Configure your Aurora MySQL DB cluster to allow outbound connections to Amazon S3. For instructions, see [Enabling network communication from Amazon Aurora to other AWS services](AuroraMySQL.Integrating.Authorizing.Network.md). 

   If your DB cluster isn't both publicly accessible and in a VPC public subnet, it is private. You can create an S3 gateway endpoint to access your S3 bucket. For more information, see [Gateway endpoints for Amazon S3](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html).

    For an Aurora global database, enable outbound connections for each Aurora cluster in the global database. 
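
After you complete these steps, you might want to confirm the configuration from a database session. The following is a sketch; depending on your Aurora MySQL version, the DB cluster parameters may be visible as global variables:

```
-- Check which IAM role ARN the cluster uses for loading from Amazon S3
SHOW GLOBAL VARIABLES LIKE 'aws_default_s3_role';

-- Aurora MySQL version 2 only
SHOW GLOBAL VARIABLES LIKE 'aurora_load_from_s3_role';
```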

## Granting privileges to load data in Amazon Aurora MySQL
<a name="AuroraMySQL.Integrating.LoadFromS3.Grant"></a>

The database user that issues the `LOAD DATA FROM S3` or `LOAD XML FROM S3` statement must have a specific role or privilege to issue either statement. In Aurora MySQL version 3, you grant the `AWS_LOAD_S3_ACCESS` role. In Aurora MySQL version 2, you grant the `LOAD FROM S3` privilege. The administrative user for a DB cluster is granted the appropriate role or privilege by default. You can grant the privilege to another user by using one of the following statements.

 Use the following statement for Aurora MySQL version 3: 

```
GRANT AWS_LOAD_S3_ACCESS TO 'user'@'domain-or-ip-address'
```

**Tip**  
When you use the role technique in Aurora MySQL version 3, you can also activate the role by using the `SET ROLE role_name` or `SET ROLE ALL` statement. If you aren't familiar with the MySQL 8.0 role system, you can learn more in [Role-based privilege model](AuroraMySQL.Compare-80-v3.md#AuroraMySQL.privilege-model). For more details, see [Using roles](https://dev.mysql.com/doc/refman/8.0/en/roles.html) in the *MySQL Reference Manual*.  
This only applies to the current active session. When you reconnect, you must run the `SET ROLE` statement again to grant privileges. For more information, see [SET ROLE statement](https://dev.mysql.com/doc/refman/8.0/en/set-role.html) in the *MySQL Reference Manual*.  
You can use the `activate_all_roles_on_login` DB cluster parameter to automatically activate all roles when a user connects to a DB instance. When this parameter is set, you generally don't have to call the `SET ROLE` statement explicitly to activate a role. For more information, see [activate\_all\_roles\_on\_login](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_activate_all_roles_on_login) in the *MySQL Reference Manual*.  
However, you must call `SET ROLE ALL` explicitly at the beginning of a stored procedure to activate the role, when the stored procedure is called by a different user.
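
As a sketch of the points in this tip, the following statements activate the role for the current session, and explicitly at the start of a stored procedure that other users might call. The procedure, table, and bucket names are hypothetical.

```
-- Activate the granted role for the current session
SET ROLE AWS_LOAD_S3_ACCESS;

-- In a stored procedure called by a different user, activate
-- roles explicitly before running the load
DELIMITER //
CREATE PROCEDURE load_customers()
BEGIN
  SET ROLE ALL;
  LOAD DATA FROM S3 's3://amzn-s3-demo-bucket/customerdata.csv'
      INTO TABLE customer
      FIELDS TERMINATED BY ','
      (ID, FIRSTNAME, LASTNAME, EMAIL);
END //
DELIMITER ;
```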

Use the following statement for Aurora MySQL version 2:

```
GRANT LOAD FROM S3 ON *.* TO 'user'@'domain-or-ip-address'
```

The `AWS_LOAD_S3_ACCESS` role and `LOAD FROM S3` privilege are specific to Amazon Aurora and aren't available for external MySQL databases or RDS for MySQL DB instances. If you have set up replication between an Aurora DB cluster as the replication source and a MySQL database as the replica, then the `GRANT` statement for the role or privilege causes replication to stop with an error. You can safely skip the error to resume replication. To skip the error on an RDS for MySQL instance, use the [mysql\_rds\_skip\_repl\_error](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql_rds_skip_repl_error.html) procedure. To skip the error on an external MySQL database, use the [slave\_skip\_errors](https://dev.mysql.com/doc/refman/5.7/en/replication-options-replica.html#sysvar_slave_skip_errors) system variable (Aurora MySQL version 2) or [replica\_skip\_errors](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#sysvar_replica_skip_errors) system variable (Aurora MySQL version 3).

**Note**  
The database user must have `INSERT` privileges for the database into which it's loading data.

## Specifying the path (URI) to an Amazon S3 bucket
<a name="AuroraMySQL.Integrating.LoadFromS3.URI"></a>

The syntax for specifying the path (URI) to files stored on an Amazon S3 bucket is as follows.

```
s3-region://amzn-s3-demo-bucket/file-name-or-prefix
```

The path includes the following values:
+ `region` (optional) – The AWS Region that contains the Amazon S3 bucket to load from. If you don't specify a `region` value, Aurora loads your file from Amazon S3 in the same Region as your DB cluster.
+ `amzn-s3-demo-bucket` – The name of the Amazon S3 bucket that contains the data to load. Object prefixes that identify a virtual folder path are supported.
+ `file-name-or-prefix` – The name of the Amazon S3 text file or XML file, or a prefix that identifies one or more text or XML files to load. You can also specify a manifest file that identifies one or more text files to load. For more information about using a manifest file to load text files from Amazon S3, see [Using a manifest to specify data files to load](#AuroraMySQL.Integrating.LoadFromS3.Manifest).
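
For example, the following URIs are a sketch of the two forms: the first lets Aurora use the Region of the DB cluster, and the second names the `us-west-2` Region explicitly. The bucket and file names are placeholders.

```
s3://amzn-s3-demo-bucket/customerdata.csv
s3-us-west-2://amzn-s3-demo-bucket/customerdata.csv
```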

**To copy the URI for files in an S3 bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation pane, choose **Buckets**, and then choose the bucket whose URI you want to copy.

1. Select the prefix or file that you want to load from S3.

1. Choose **Copy S3 URI**.

## LOAD DATA FROM S3
<a name="AuroraMySQL.Integrating.LoadFromS3.Text"></a>

You can use the `LOAD DATA FROM S3` statement to load data from any text file format that is supported by the MySQL [LOAD DATA INFILE](https://dev.mysql.com/doc/refman/8.0/en/load-data.html) statement, such as text data that is comma-delimited. Compressed files are not supported.

**Note**  
Make sure that your Aurora MySQL DB cluster allows outbound connections to S3. For more information, see [Enabling network communication from Amazon Aurora to other AWS services](AuroraMySQL.Integrating.Authorizing.Network.md).

### Syntax
<a name="AuroraMySQL.Integrating.LoadFromS3.Text.Syntax"></a>

```
LOAD DATA [FROM] S3 [FILE | PREFIX | MANIFEST] 'S3-URI'
    [REPLACE | IGNORE]
    INTO TABLE tbl_name
    [PARTITION (partition_name,...)]
    [CHARACTER SET charset_name]
    [{FIELDS | COLUMNS}
        [TERMINATED BY 'string']
        [[OPTIONALLY] ENCLOSED BY 'char']
        [ESCAPED BY 'char']
    ]
    [LINES
        [STARTING BY 'string']
        [TERMINATED BY 'string']
    ]
    [IGNORE number {LINES | ROWS}]
    [(col_name_or_user_var,...)]
    [SET col_name = expr,...]
```

**Note**  
In Aurora MySQL version 3.05 and higher, the keyword `FROM` is optional.

### Parameters
<a name="AuroraMySQL.Integrating.LoadFromS3.Text.Parameters"></a>

The `LOAD DATA FROM S3` statement uses the following required and optional parameters. You can find more details about some of these parameters in [LOAD DATA Statement](https://dev.mysql.com/doc/refman/8.0/en/load-data.html) in the MySQL documentation.

**FILE \| PREFIX \| MANIFEST**  
Identifies whether to load the data from a single file, from all files that match a given prefix, or from all files in a specified manifest. `FILE` is the default.

**S3-URI**  
Specifies the URI for a text or manifest file to load, or an Amazon S3 prefix to use. Specify the URI using the syntax described in [Specifying the path (URI) to an Amazon S3 bucket](#AuroraMySQL.Integrating.LoadFromS3.URI).

**REPLACE \| IGNORE**  
Determines what action to take if an input row has the same unique key values as an existing row in the database table.  
+ Specify `REPLACE` if you want the input row to replace the existing row in the table.
+ Specify `IGNORE` if you want to discard the input row.

**INTO TABLE**  
Identifies the name of the database table to load the input rows into.

**PARTITION**  
Requires that all input rows be inserted into the partitions identified by the specified list of comma-separated partition names. If an input row cannot be inserted into one of the specified partitions, then the statement fails and an error is returned.

**CHARACTER SET**  
Identifies the character set of the data in the input file.

**FIELDS \$1 COLUMNS**  
Identifies how the fields or columns in the input file are delimited. Fields are tab-delimited by default.

**LINES**  
Identifies how the lines in the input file are delimited. Lines are delimited by a newline character (`'\n'`) by default.

**IGNORE *number* LINES \| ROWS**  
Specifies to ignore a certain number of lines or rows at the start of the input file. For example, you can use `IGNORE 1 LINES` to skip over an initial header line containing column names, or `IGNORE 2 ROWS` to skip over the first two rows of data in the input file. If you also use `PREFIX`, `IGNORE` skips a certain number of lines or rows at the start of the first input file.

**col\_name\_or\_user\_var, ...**  
Specifies a comma-separated list of one or more column names or user variables that identify which columns to load by name. The name of a user variable used for this purpose must match the name of an element from the text file, prefixed with @. You can employ user variables to store the corresponding field values for subsequent reuse.  
For example, the following statement loads the first column from the input file into the first column of `table1`, and sets the value of the `table_column2` column in `table1` to the input value of the second column divided by 100.  

```
LOAD DATA FROM S3 's3://amzn-s3-demo-bucket/data.txt'
    INTO TABLE table1
    (column1, @var1)
    SET table_column2 = @var1/100;
```

**SET**  
Specifies a comma-separated list of assignment operations that set the values of columns in the table to values not included in the input file.  
For example, the following statement sets the first two columns of `table1` to the values in the first two columns from the input file, and then sets the value of the `column3` in `table1` to the current time stamp.  

```
LOAD DATA FROM S3  's3://amzn-s3-demo-bucket/data.txt'
    INTO TABLE table1
    (column1, column2)
    SET column3 = CURRENT_TIMESTAMP;
```
You can use subqueries in the right side of `SET` assignments. For a subquery that returns a value to be assigned to a column, you can use only a scalar subquery. Also, you cannot use a subquery to select from the table that is being loaded. 
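
For example, the following sketch assigns the result of a scalar subquery to a column; the `regions` lookup table is hypothetical and is a different table from the one being loaded:

```
LOAD DATA FROM S3 's3://amzn-s3-demo-bucket/data.txt'
    INTO TABLE table1
    (column1, column2, @region_id)
    SET column3 = (SELECT region_name FROM regions
                   WHERE region_id = @region_id);
```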

The `LOCAL` keyword of the MySQL `LOAD DATA` statement isn't supported when you're loading data from an Amazon S3 bucket.

### Using a manifest to specify data files to load
<a name="AuroraMySQL.Integrating.LoadFromS3.Manifest"></a>

You can use the `LOAD DATA FROM S3` statement with the `MANIFEST` keyword to specify a manifest file in JSON format that lists the text files to be loaded into a table in your DB cluster.

The following JSON schema describes the format and content of a manifest file.

```
{
    "$schema": "http://json-schema.org/draft-04/schema#",
    "additionalProperties": false,
    "definitions": {},
    "id": "Aurora_LoadFromS3_Manifest",
    "properties": {
        "entries": {
            "additionalItems": false,
            "id": "/properties/entries",
            "items": {
                "additionalProperties": false,
                "id": "/properties/entries/items",
                "properties": {
                    "mandatory": {
                        "default": "false",
                        "id": "/properties/entries/items/properties/mandatory",
                        "type": "boolean"
                    },
                    "url": {
                        "id": "/properties/entries/items/properties/url",
                        "maxLength": 1024,
                        "minLength": 1,
                        "type": "string"
                    }
                },
                "required": [
                    "url"
                ],
                "type": "object"
            },
            "type": "array",
            "uniqueItems": true
        }
    },
    "required": [
        "entries"
    ],
    "type": "object"
}
```

Each `url` in the manifest must specify a URL with the bucket name and full object path for the file, not just a prefix. You can use a manifest to load files from different buckets or different AWS Regions, or files that don't share the same prefix. If a Region isn't specified in the URL, the Region of the target Aurora DB cluster is used. The following example shows a manifest file that loads four files from different buckets.

```
{
  "entries": [
    {
      "url":"s3://aurora-bucket/2013-10-04-customerdata", 
      "mandatory":true
    },
    {
      "url":"s3-us-west-2://aurora-bucket-usw2/2013-10-05-customerdata",
      "mandatory":true
    },
    {
      "url":"s3://aurora-bucket/2013-10-04-customerdata", 
      "mandatory":false
    },
    {
      "url":"s3://aurora-bucket/2013-10-05-customerdata"
    }
  ]
}
```

The optional `mandatory` flag specifies whether `LOAD DATA FROM S3` should return an error if the file is not found. The `mandatory` flag defaults to `false`. Regardless of how `mandatory` is set, `LOAD DATA FROM S3` terminates if no files are found.

Manifest files can have any extension. The following example runs the `LOAD DATA FROM S3` statement with the manifest in the previous example, which is named **customer.manifest**. 

```
LOAD DATA FROM S3 MANIFEST 's3-us-west-2://aurora-bucket/customer.manifest'
    INTO TABLE CUSTOMER
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    (ID, FIRSTNAME, LASTNAME, EMAIL);
```

After the statement completes, an entry for each successfully loaded file is written to the `aurora_s3_load_history` table. 

#### Verifying loaded files using the aurora\_s3\_load\_history table
<a name="AuroraMySQL.Integrating.LoadFromS3.Manifest.History"></a>

Every successful `LOAD DATA FROM S3` statement updates the `aurora_s3_load_history` table in the `mysql` schema with an entry for each file that was loaded.

After you run the `LOAD DATA FROM S3` statement, you can verify which files were loaded by querying the `aurora_s3_load_history` table. To see the files that were loaded from one iteration of the statement, use the `WHERE` clause to filter the records on the Amazon S3 URI for the manifest file used in the statement. If you have used the same manifest file before, filter the results using the `load_timestamp` field.

```
select * from mysql.aurora_s3_load_history where load_prefix = 'S3_URI';
```
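
If you have loaded the same manifest more than once, you can narrow the results by combining both filters. The following is a sketch; the cutoff value is illustrative.

```
select * from mysql.aurora_s3_load_history
    where load_prefix = 'S3_URI'
    and load_timestamp > '2024-01-01 00:00:00';
```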

The following table describes the fields in the `aurora_s3_load_history` table.


| Field | Description | 
| --- | --- | 
| `load_prefix` |  The URI that was specified in the load statement. This URI can map to a single data file, an Amazon S3 prefix that maps to multiple data files, or a manifest file that specifies one or more data files.  | 
|  `file_name`  |  The name of a file that was loaded into Aurora from Amazon S3 using the URI identified in the `load_prefix` field.  | 
| `version_number` |  The version number of the file identified by the `file_name` field that was loaded, if the Amazon S3 bucket has a version number.  | 
|  `bytes_loaded`  |  The size of the file loaded, in bytes.  | 
| `load_timestamp`  |  The timestamp when the `LOAD DATA FROM S3` statement completed.  | 

### Examples
<a name="AuroraMySQL.Integrating.LoadFromS3.Text.Examples"></a>

The following statement loads data from an Amazon S3 bucket that is in the same AWS Region as the Aurora DB cluster. The statement reads the comma-delimited data in the file `customerdata.csv` that is in the *amzn-s3-demo-bucket* Amazon S3 bucket, and then loads the data into the table `store-schema.customer-table`.

```
LOAD DATA FROM S3 's3://amzn-s3-demo-bucket/customerdata.csv' 
    INTO TABLE store-schema.customer-table
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    (ID, FIRSTNAME, LASTNAME, ADDRESS, EMAIL, PHONE);
```

The following statement loads data from an Amazon S3 bucket that is in a different AWS Region from the Aurora DB cluster. The statement reads the comma-delimited data from all files that match the `employee_data` object prefix in the *amzn-s3-demo-bucket* Amazon S3 bucket in the `us-west-2` Region, and then loads the data into the `employees` table.

```
LOAD DATA FROM S3 PREFIX 's3-us-west-2://amzn-s3-demo-bucket/employee_data'
    INTO TABLE employees
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    (ID, FIRSTNAME, LASTNAME, EMAIL, SALARY);
```

The following statement loads data from the files specified in a JSON manifest file named q1\_sales.json into the `sales` table. 

```
LOAD DATA FROM S3 MANIFEST 's3-us-west-2://amzn-s3-demo-bucket1/q1_sales.json'
    INTO TABLE sales
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    (MONTH, STORE, GROSS, NET);
```

## LOAD XML FROM S3
<a name="AuroraMySQL.Integrating.LoadFromS3.XML"></a>

You can use the `LOAD XML FROM S3` statement to load data from XML files stored on an Amazon S3 bucket in one of three different XML formats:
+ Column names as attributes of a `<row>` element. The attribute value identifies the contents of the table field.

  ```
  <row column1="value1" column2="value2" .../>
  ```
+ Column names as child elements of a `<row>` element. The value of the child element identifies the contents of the table field.

  ```
  <row>
    <column1>value1</column1>
    <column2>value2</column2>
  </row>
  ```
+ Column names in the `name` attribute of `<field>` elements in a `<row>` element. The value of the `<field>` element identifies the contents of the table field. 

  ```
  <row>
    <field name='column1'>value1</field>
    <field name='column2'>value2</field>
  </row>
  ```

### Syntax
<a name="AuroraMySQL.Integrating.LoadFromS3.XML.Syntax"></a>

```
LOAD XML FROM S3 'S3-URI'
    [REPLACE | IGNORE]
    INTO TABLE tbl_name
    [PARTITION (partition_name,...)]
    [CHARACTER SET charset_name]
    [ROWS IDENTIFIED BY '<element-name>']
    [IGNORE number {LINES | ROWS}]
    [(field_name_or_user_var,...)]
    [SET col_name = expr,...]
```

### Parameters
<a name="AuroraMySQL.Integrating.LoadFromS3.XML.Parameters"></a>

The `LOAD XML FROM S3` statement uses the following required and optional parameters. You can find more details about some of these parameters in [LOAD XML Statement](https://dev.mysql.com/doc/refman/8.0/en/load-xml.html) in the MySQL documentation.

**FILE \| PREFIX**  
Identifies whether to load the data from a single file, or from all files that match a given prefix. `FILE` is the default.

**REPLACE \| IGNORE**  
Determines what action to take if an input row has the same unique key values as an existing row in the database table.  
+ Specify `REPLACE` if you want the input row to replace the existing row in the table.
+ Specify `IGNORE` if you want to discard the input row. `IGNORE` is the default.

**INTO TABLE**  
Identifies the name of the database table to load the input rows into.

**PARTITION**  
Requires that all input rows be inserted into the partitions identified by the specified list of comma-separated partition names. If an input row cannot be inserted into one of the specified partitions, then the statement fails and an error is returned.

**CHARACTER SET**  
Identifies the character set of the data in the input file.

**ROWS IDENTIFIED BY**  
Identifies the element name that identifies a row in the input file. The default is `<row>`.

**IGNORE *number* LINES \| ROWS**  
Specifies to ignore a certain number of lines or rows at the start of the input file. For example, you can use `IGNORE 1 LINES` to skip over the first line in the text file, or `IGNORE 2 ROWS` to skip over the first two rows of data in the input XML.

**field\_name\_or\_user\_var, ...**  
Specifies a comma-separated list of one or more XML element names or user variables that identify which elements to load by name. The name of a user variable used for this purpose must match the name of an element from the XML file, prefixed with @. You can employ user variables to store the corresponding field values for subsequent reuse.  
For example, the following statement loads the first column from the input file into the first column of `table1`, and sets the value of the `table_column2` column in `table1` to the input value of the second column divided by 100.  

```
LOAD XML FROM S3 's3://amzn-s3-demo-bucket/data.xml'
   INTO TABLE table1
   (column1, @var1)
   SET table_column2 = @var1/100;
```

**SET**  
Specifies a comma-separated list of assignment operations that set the values of columns in the table to values not included in the input file.  
For example, the following statement sets the first two columns of `table1` to the values in the first two columns from the input file, and then sets the value of the `column3` in `table1` to the current time stamp.  

```
LOAD XML FROM S3 's3://amzn-s3-demo-bucket/data.xml'
   INTO TABLE table1
   (column1, column2)
   SET column3 = CURRENT_TIMESTAMP;
```
You can use subqueries in the right side of `SET` assignments. For a subquery that returns a value to be assigned to a column, you can use only a scalar subquery. Also, you can't use a subquery to select from the table that's being loaded.
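
For example, if each row in the input XML is wrapped in a custom element rather than `<row>`, you can name that element with `ROWS IDENTIFIED BY`. The following is a sketch with hypothetical bucket, table, and element names:

```
LOAD XML FROM S3 's3://amzn-s3-demo-bucket/customer.xml'
    INTO TABLE customer
    ROWS IDENTIFIED BY '<customer>'
    (ID, FIRSTNAME, LASTNAME);
```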

# Saving data from an Amazon Aurora MySQL DB cluster into text files in an Amazon S3 bucket
<a name="AuroraMySQL.Integrating.SaveIntoS3"></a><a name="save_into_s3"></a><a name="select_into_outfile"></a>

You can use the `SELECT INTO OUTFILE S3` statement to query data from an Amazon Aurora MySQL DB cluster and save it into text files stored in an Amazon S3 bucket. In Aurora MySQL, the files are first stored on the local disk, and then exported to Amazon S3. After the export completes, the local files are deleted.

You can encrypt the Amazon S3 bucket using an Amazon S3 managed key (SSE-S3) or AWS KMS key (SSE-KMS: AWS managed key or customer managed key).

The `LOAD DATA FROM S3` statement can use files created by the `SELECT INTO OUTFILE S3` statement to load data into an Aurora DB cluster. For more information, see [Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket](AuroraMySQL.Integrating.LoadFromS3.md).

**Note**  
You can also save DB cluster data and DB cluster snapshot data to Amazon S3 using the AWS Management Console, AWS CLI, or Amazon RDS API. For more information, see [Exporting DB cluster data to Amazon S3](export-cluster-data.md) and [Exporting DB cluster snapshot data to Amazon S3](aurora-export-snapshot.md).

**Contents**
+ [Giving Aurora MySQL access to Amazon S3](#AuroraMySQL.Integrating.SaveIntoS3.Authorize)
+ [Granting privileges to save data in Aurora MySQL](#AuroraMySQL.Integrating.SaveIntoS3.Grant)
+ [Specifying a path to an Amazon S3 bucket](#AuroraMySQL.Integrating.SaveIntoS3.URI)
+ [Creating a manifest to list data files](#AuroraMySQL.Integrating.SaveIntoS3.Manifest)
+ [SELECT INTO OUTFILE S3](#AuroraMySQL.Integrating.SaveIntoS3.Statement)
  + [Syntax](#AuroraMySQL.Integrating.SaveIntoS3.Statement.Syntax)
  + [Parameters](#AuroraMySQL.Integrating.SaveIntoS3.Statement.Parameters)
  + [Considerations](#AuroraMySQL.Integrating.SaveIntoS3.Considerations)
  + [Examples](#AuroraMySQL.Integrating.SaveIntoS3.Examples)

## Giving Aurora MySQL access to Amazon S3
<a name="AuroraMySQL.Integrating.SaveIntoS3.Authorize"></a>

Before you can save data into an Amazon S3 bucket, you must first give your Aurora MySQL DB cluster permission to access Amazon S3.

**To give Aurora MySQL access to Amazon S3**

1. Create an AWS Identity and Access Management (IAM) policy that provides the bucket and object permissions that allow your Aurora MySQL DB cluster to access Amazon S3. For instructions, see [Creating an IAM policy to access Amazon S3 resources](AuroraMySQL.Integrating.Authorizing.IAM.S3CreatePolicy.md).
**Note**  
In Aurora MySQL version 3.05 and higher, you can encrypt objects using AWS KMS customer managed keys. To do so, include the `kms:GenerateDataKey` permission in your IAM policy. For more information, see [Creating an IAM policy to access AWS KMS resources](AuroraMySQL.Integrating.Authorizing.IAM.KMSCreatePolicy.md).  
You don't need this permission to encrypt objects using AWS managed keys or Amazon S3 managed keys (SSE-S3).

1. Create an IAM role, and attach the IAM policy you created in [Creating an IAM policy to access Amazon S3 resources](AuroraMySQL.Integrating.Authorizing.IAM.S3CreatePolicy.md) to the new IAM role. For instructions, see [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md).

1. For Aurora MySQL version 2, set either the `aurora_select_into_s3_role` or `aws_default_s3_role` DB cluster parameter to the Amazon Resource Name (ARN) of the new IAM role. If an IAM role isn't specified for `aurora_select_into_s3_role`, Aurora uses the IAM role specified in `aws_default_s3_role`.

   For Aurora MySQL version 3, use `aws_default_s3_role`.

   If the cluster is part of an Aurora global database, set this parameter for each Aurora cluster in the global database.

   For more information about DB cluster parameters, see [Amazon Aurora DB cluster and DB instance parameters](USER_WorkingWithDBClusterParamGroups.md#Aurora.Managing.ParameterGroups).

1. To permit database users in an Aurora MySQL DB cluster to access Amazon S3, associate the role that you created in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md) with the DB cluster.

   For an Aurora global database, associate the role with each Aurora cluster in the global database.

   For information about associating an IAM role with a DB cluster, see [Associating an IAM role with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.md).

1. Configure your Aurora MySQL DB cluster to allow outbound connections to Amazon S3. For instructions, see [Enabling network communication from Amazon Aurora to other AWS services](AuroraMySQL.Integrating.Authorizing.Network.md). 

    For an Aurora global database, enable outbound connections for each Aurora cluster in the global database. 
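As an illustration of steps 3 and 4 above, the following Python sketch builds the equivalent AWS CLI invocations (`aws rds modify-db-cluster-parameter-group` and `aws rds add-role-to-db-cluster`). The cluster identifier, parameter group name, and role ARN are hypothetical placeholders.

```python
# Hypothetical names for illustration; substitute your own cluster,
# parameter group, and IAM role ARN.
CLUSTER_ID = "my-aurora-cluster"
PARAM_GROUP = "my-cluster-params"
ROLE_ARN = "arn:aws:iam::123456789012:role/aurora-s3-export"

def set_s3_role_parameter(param_group, role_arn):
    """Step 3: point aws_default_s3_role at the IAM role (AWS CLI form)."""
    return ["aws", "rds", "modify-db-cluster-parameter-group",
            "--db-cluster-parameter-group-name", param_group,
            "--parameters",
            f"ParameterName=aws_default_s3_role,"
            f"ParameterValue={role_arn},ApplyMethod=immediate"]

def add_role_command(cluster_id, role_arn):
    """Step 4: associate the IAM role with the DB cluster (AWS CLI form)."""
    return ["aws", "rds", "add-role-to-db-cluster",
            "--db-cluster-identifier", cluster_id,
            "--role-arn", role_arn]
```

For an Aurora global database, you would run both commands once per cluster in the global database.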

## Granting privileges to save data in Aurora MySQL
<a name="AuroraMySQL.Integrating.SaveIntoS3.Grant"></a>

The database user that issues the `SELECT INTO OUTFILE S3` statement must have a specific role or privilege. In Aurora MySQL version 3, you grant the `AWS_SELECT_S3_ACCESS` role. In Aurora MySQL version 2, you grant the `SELECT INTO S3` privilege. The administrative user for a DB cluster is granted the appropriate role or privilege by default. You can grant the privilege to another user by using one of the following statements.

 Use the following statement for Aurora MySQL version 3: 

```
GRANT AWS_SELECT_S3_ACCESS TO 'user'@'domain-or-ip-address'
```

**Tip**  
When you use the role technique in Aurora MySQL version 3, you can also activate the role by using the `SET ROLE role_name` or `SET ROLE ALL` statement. If you aren't familiar with the MySQL 8.0 role system, you can learn more in [Role-based privilege model](AuroraMySQL.Compare-80-v3.md#AuroraMySQL.privilege-model). For more details, see [Using roles](https://dev.mysql.com/doc/refman/8.0/en/roles.html) in the *MySQL Reference Manual*.  
This only applies to the current active session. When you reconnect, you must run the `SET ROLE` statement again to grant privileges. For more information, see [SET ROLE statement](https://dev.mysql.com/doc/refman/8.0/en/set-role.html) in the *MySQL Reference Manual*.  
You can use the `activate_all_roles_on_login` DB cluster parameter to automatically activate all roles when a user connects to a DB instance. When this parameter is set, you generally don't have to call the `SET ROLE` statement explicitly to activate a role. For more information, see [activate_all_roles_on_login](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_activate_all_roles_on_login) in the *MySQL Reference Manual*.  
However, you must call `SET ROLE ALL` explicitly at the beginning of a stored procedure to activate the role, when the stored procedure is called by a different user.

Use the following statement for Aurora MySQL version 2:

```
GRANT SELECT INTO S3 ON *.* TO 'user'@'domain-or-ip-address'
```

The `AWS_SELECT_S3_ACCESS` role and `SELECT INTO S3` privilege are specific to Amazon Aurora MySQL and are not available for MySQL databases or RDS for MySQL DB instances. If you have set up replication between an Aurora MySQL DB cluster as the replication source and a MySQL database as the replication client, then the `GRANT` statement for the role or privilege causes replication to stop with an error. You can safely skip the error to resume replication. To skip the error on an RDS for MySQL DB instance, use the [mysql_rds_skip_repl_error](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql_rds_skip_repl_error.html) procedure. To skip the error on an external MySQL database, use the [slave_skip_errors](https://dev.mysql.com/doc/refman/5.7/en/replication-options-replica.html#sysvar_slave_skip_errors) system variable (Aurora MySQL version 2) or [replica_skip_errors](https://dev.mysql.com/doc/refman/8.0/en/replication-options-replica.html#sysvar_replica_skip_errors) system variable (Aurora MySQL version 3).

## Specifying a path to an Amazon S3 bucket
<a name="AuroraMySQL.Integrating.SaveIntoS3.URI"></a>

The syntax for specifying a path to store the data and manifest files on an Amazon S3 bucket is similar to that used in the `LOAD DATA FROM S3 PREFIX` statement, as shown following.

```
s3-region://bucket-name/file-prefix
```

The path includes the following values:
+ `region` (optional) – The AWS Region that contains the Amazon S3 bucket to save the data into. If you don't specify a `region` value, then Aurora saves your files into Amazon S3 in the same Region as your DB cluster.
+ `bucket-name` – The name of the Amazon S3 bucket to save the data into. Object prefixes that identify a virtual folder path are supported.
+ `file-prefix` – The Amazon S3 object prefix that identifies the files to be saved in Amazon S3. 

The data files created by the `SELECT INTO OUTFILE S3` statement use the following path, in which *00000* represents a 5-digit, zero-based integer number.

```
s3-region://bucket-name/file-prefix.part_00000
```

For example, suppose that a `SELECT INTO OUTFILE S3` statement specifies `s3-us-west-2://bucket/prefix` as the path in which to store data files and creates three data files. The specified Amazon S3 bucket contains the following data files.
+ s3-us-west-2://bucket/prefix.part_00000
+ s3-us-west-2://bucket/prefix.part_00001
+ s3-us-west-2://bucket/prefix.part_00002
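The naming pattern above can be sketched in a few lines of Python; `part_file_names` is an illustrative helper, not an Aurora API:

```python
def part_file_names(path_prefix, file_count):
    """Names Aurora assigns to data files: prefix.part_00000, part_00001, ...
    using a 5-digit, zero-based, zero-padded integer suffix."""
    return [f"{path_prefix}.part_{i:05d}" for i in range(file_count)]

# Reproduces the three-file example above.
names = part_file_names("s3-us-west-2://bucket/prefix", 3)
```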

## Creating a manifest to list data files
<a name="AuroraMySQL.Integrating.SaveIntoS3.Manifest"></a>

You can use the `SELECT INTO OUTFILE S3` statement with the `MANIFEST ON` option to create a manifest file in JSON format that lists the text files created by the statement. The `LOAD DATA FROM S3` statement can use the manifest file to load the data files back into an Aurora MySQL DB cluster. For more information about using a manifest to load data files from Amazon S3 into an Aurora MySQL DB cluster, see [Using a manifest to specify data files to load](AuroraMySQL.Integrating.LoadFromS3.md#AuroraMySQL.Integrating.LoadFromS3.Manifest). 

The data files included in the manifest created by the `SELECT INTO OUTFILE S3` statement are listed in the order that they're created by the statement. For example, suppose that a `SELECT INTO OUTFILE S3` statement specified `s3-us-west-2://bucket/prefix` as the path in which to store data files and creates three data files and a manifest file. The specified Amazon S3 bucket contains a manifest file named `s3-us-west-2://bucket/prefix.manifest`, that contains the following information.

```
{
  "entries": [
    {
      "url":"s3-us-west-2://bucket/prefix.part_00000"
    },
    {
      "url":"s3-us-west-2://bucket/prefix.part_00001"
    },
    {
      "url":"s3-us-west-2://bucket/prefix.part_00002"
    }
  ]
}
```
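Because the manifest is plain JSON, a client can read back the list of data files it names. The following sketch (with a hypothetical helper) extracts the URLs in the order the statement created them:

```python
import json

# Manifest contents in the format shown above.
manifest_text = """
{ "entries": [ {"url": "s3-us-west-2://bucket/prefix.part_00000"},
               {"url": "s3-us-west-2://bucket/prefix.part_00001"},
               {"url": "s3-us-west-2://bucket/prefix.part_00002"} ] }
"""

def data_file_urls(manifest_json):
    """Return the data file URLs listed in a SELECT INTO OUTFILE S3 manifest."""
    return [entry["url"] for entry in json.loads(manifest_json)["entries"]]

urls = data_file_urls(manifest_text)
```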

## SELECT INTO OUTFILE S3
<a name="AuroraMySQL.Integrating.SaveIntoS3.Statement"></a>

You can use the `SELECT INTO OUTFILE S3` statement to query data from a DB cluster and save it directly into delimited text files stored in an Amazon S3 bucket.

Compressed files aren't supported. Encrypted files are supported starting in Aurora MySQL version 2.09.0.

### Syntax
<a name="AuroraMySQL.Integrating.SaveIntoS3.Statement.Syntax"></a>

```
SELECT
    [ALL | DISTINCT | DISTINCTROW ]
        [HIGH_PRIORITY]
        [STRAIGHT_JOIN]
        [SQL_SMALL_RESULT] [SQL_BIG_RESULT] [SQL_BUFFER_RESULT]
        [SQL_CACHE | SQL_NO_CACHE] [SQL_CALC_FOUND_ROWS]
    select_expr [, select_expr ...]
    [FROM table_references
        [PARTITION partition_list]
    [WHERE where_condition]
    [GROUP BY {col_name | expr | position}
        [ASC | DESC], ... [WITH ROLLUP]]
    [HAVING where_condition]
    [ORDER BY {col_name | expr | position}
         [ASC | DESC], ...]
    [LIMIT {[offset,] row_count | row_count OFFSET offset}]
INTO OUTFILE S3 's3_uri'
[CHARACTER SET charset_name]
    [export_options]
    [MANIFEST {ON | OFF}]
    [OVERWRITE {ON | OFF}]
    [ENCRYPTION {ON | OFF | SSE_S3 | SSE_KMS ['cmk_id']}]

export_options:
    [FORMAT {CSV|TEXT} [HEADER]]
    [{FIELDS | COLUMNS}
        [TERMINATED BY 'string']
        [[OPTIONALLY] ENCLOSED BY 'char']
        [ESCAPED BY 'char']
    ]
    [LINES
        [STARTING BY 'string']
        [TERMINATED BY 'string']
]
```

### Parameters
<a name="AuroraMySQL.Integrating.SaveIntoS3.Statement.Parameters"></a>

The `SELECT INTO OUTFILE S3` statement uses the following required and optional parameters that are specific to Aurora.

**s3-uri**  
Specifies the URI for an Amazon S3 prefix to use. Use the syntax described in [Specifying a path to an Amazon S3 bucket](#AuroraMySQL.Integrating.SaveIntoS3.URI).

**FORMAT {CSV|TEXT} [HEADER]**  
Optionally saves the data in CSV format.  
The `TEXT` option is the default and produces the existing MySQL export format.  
The `CSV` option produces comma-separated data values. The CSV format follows the specification in [RFC-4180](https://tools.ietf.org/html/rfc4180). If you specify the optional keyword `HEADER`, the output file contains one header line. The labels in the header line correspond to the column names from the `SELECT` statement. You can use the CSV files for training data models for use with AWS ML services. For more information about using exported Aurora data with AWS ML services, see [Exporting data to Amazon S3 for SageMaker AI model training (Advanced)](mysql-ml.md#exporting-data-to-s3-for-model-training).

**MANIFEST {ON | OFF}**  
Indicates whether a manifest file is created in Amazon S3. The manifest file is a JavaScript Object Notation (JSON) file that can be used to load data into an Aurora DB cluster with the `LOAD DATA FROM S3 MANIFEST` statement. For more information about `LOAD DATA FROM S3 MANIFEST`, see [Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket](AuroraMySQL.Integrating.LoadFromS3.md).  
If `MANIFEST ON` is specified in the query, the manifest file is created in Amazon S3 after all data files have been created and uploaded. The manifest file is created using the following path:  

```
s3-region://bucket-name/file-prefix.manifest
```
For more information about the format of the manifest file's contents, see [Creating a manifest to list data files](#AuroraMySQL.Integrating.SaveIntoS3.Manifest).

**OVERWRITE {ON | OFF}**  
Indicates whether existing files in the specified Amazon S3 bucket are overwritten. If `OVERWRITE ON` is specified, existing files that match the file prefix in the URI specified in `s3-uri` are overwritten. Otherwise, an error occurs.

**ENCRYPTION {ON | OFF | SSE_S3 | SSE_KMS ['*cmk_id*']}**  
Indicates whether to use server-side encryption with Amazon S3 managed keys (SSE-S3) or AWS KMS keys (SSE-KMS, including AWS managed keys and customer managed keys). The `SSE_S3` and `SSE_KMS` settings are available in Aurora MySQL version 3.05 and higher.  
You can also use the `aurora_select_into_s3_encryption_default` session variable instead of the `ENCRYPTION` clause, as shown in the following example. Use either the SQL clause or the session variable, but not both.  

```
set session aurora_select_into_s3_encryption_default={ON | OFF | SSE_S3 | SSE_KMS};
```
When you set `aurora_select_into_s3_encryption_default` to the following value:  
+ `OFF` – The default encryption policy of the S3 bucket is followed. The default value of `aurora_select_into_s3_encryption_default` is `OFF`.
+ `ON` or `SSE_S3` – The S3 object is encrypted using Amazon S3 managed keys (SSE-S3).
+ `SSE_KMS` – The S3 object is encrypted using an AWS KMS key.

  In this case, you also include the session variable `aurora_s3_default_cmk_id`, for example:

  ```
  set session aurora_select_into_s3_encryption_default={SSE_KMS};
  set session aurora_s3_default_cmk_id={NULL | 'cmk_id'};
  ```
  + When `aurora_s3_default_cmk_id` is `NULL`, the S3 object is encrypted using an AWS managed key.
  + When `aurora_s3_default_cmk_id` is a nonempty string `cmk_id`, the S3 object is encrypted using a customer managed key.

    The value of `cmk_id` can't be an empty string.
When you use the `SELECT INTO OUTFILE S3` command, Aurora determines the encryption as follows:  
+ If the `ENCRYPTION` clause is present in the SQL command, Aurora relies only on the value of `ENCRYPTION`, and doesn't use a session variable.
+ If the `ENCRYPTION` clause isn't present, Aurora relies on the value of the session variable.
For more information, see [Using server-side encryption with Amazon S3 managed keys (SSE-S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html) and [Using server-side encryption with AWS KMS keys (SSE-KMS)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html) in the *Amazon Simple Storage Service User Guide*.
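The precedence rules above can be summarized in a small sketch; `effective_encryption` is an illustrative helper, not an Aurora function:

```python
def effective_encryption(encryption_clause=None, session_default="OFF"):
    """Mirror Aurora's precedence for SELECT INTO OUTFILE S3 encryption:
    an explicit ENCRYPTION clause wins; otherwise the
    aurora_select_into_s3_encryption_default session value applies."""
    if encryption_clause is not None:
        return encryption_clause
    return session_default
```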

You can find more details about other parameters in [SELECT statement](https://dev.mysql.com/doc/refman/8.0/en/select.html) and [LOAD DATA statement](https://dev.mysql.com/doc/refman/8.0/en/load-data.html), in the MySQL documentation.

### Considerations
<a name="AuroraMySQL.Integrating.SaveIntoS3.Considerations"></a>

The number of files written to the Amazon S3 bucket depends on the amount of data selected by the `SELECT INTO OUTFILE S3` statement and the file size threshold for Aurora MySQL. The default file size threshold is 6 gigabytes (GB). If the data selected by the statement is less than the file size threshold, a single file is created; otherwise, multiple files are created. Other considerations for files created by this statement include the following:
+ Aurora MySQL guarantees that rows in data files are not split across file boundaries. For multiple files, the size of every data file except the last is typically close to the file size threshold. However, occasionally staying under the file size threshold would require splitting a row across two data files. In this case, Aurora MySQL creates a data file that keeps the row intact, but that might be larger than the file size threshold. 
+ Because each `SELECT` statement in Aurora MySQL runs as an atomic transaction, a `SELECT INTO OUTFILE S3` statement that selects a large data set might run for some time. If the statement fails for any reason, you might need to start over and issue the statement again. If the statement fails, however, files already uploaded to Amazon S3 remain in the specified Amazon S3 bucket. You can use another statement to upload the remaining data instead of starting over again.
+ If the amount of data to be selected is large (more than 25 GB), we recommend that you use multiple `SELECT INTO OUTFILE S3` statements to save the data to Amazon S3. Each statement should select a different portion of the data to be saved, and also specify a different `file_prefix` in the `s3-uri` parameter to use when saving the data files. Partitioning the data to be selected with multiple statements makes it easier to recover from an error in one statement. If an error occurs for one statement, only a portion of data needs to be re-selected and uploaded to Amazon S3. Using multiple statements also helps to avoid a single long-running transaction, which can improve performance.
+ If multiple `SELECT INTO OUTFILE S3` statements that use the same `file_prefix` in the `s3-uri` parameter run in parallel to select data into Amazon S3, the behavior is undefined.
+ Metadata, such as table schema or file metadata, is not uploaded by Aurora MySQL to Amazon S3.
+ In some cases, you might re-run a `SELECT INTO OUTFILE S3` query, such as to recover from a failure. In these cases, you must either remove any existing data files in the Amazon S3 bucket with the same file prefix specified in `s3-uri`, or include `OVERWRITE ON` in the `SELECT INTO OUTFILE S3` query.
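To illustrate the recommendation to partition large exports, the following sketch generates range-based `SELECT INTO OUTFILE S3` statements, each with its own file prefix so that a failed part can be rerun on its own. The table, key column, and bucket names are hypothetical:

```python
def partitioned_export_statements(table, key_column, boundaries, s3_prefix):
    """Split one large export into range-based SELECT INTO OUTFILE S3
    statements. Each statement writes to a distinct file prefix."""
    statements = []
    ranges = list(zip([None] + boundaries, boundaries + [None]))
    for i, (lo, hi) in enumerate(ranges):
        conds = []
        if lo is not None:
            conds.append(f"{key_column} >= {lo}")
        if hi is not None:
            conds.append(f"{key_column} < {hi}")
        where = " AND ".join(conds) or "TRUE"
        statements.append(
            f"SELECT * FROM {table} WHERE {where} "
            f"INTO OUTFILE S3 '{s3_prefix}_part{i}' OVERWRITE ON;"
        )
    return statements

# Three statements covering emp_no < 100000, 100000-199999, and >= 200000.
stmts = partitioned_export_statements(
    "employees", "emp_no", [100000, 200000], "s3://my-bucket/employees")
```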

The `SELECT INTO OUTFILE S3` statement returns a typical MySQL error number and response on success or failure. If you don't have access to the MySQL error number and response, the easiest way to determine when it's done is by specifying `MANIFEST ON` in the statement. The manifest file is the last file written by the statement. In other words, if you have a manifest file, the statement has completed.

Currently, there's no way to directly monitor the progress of the `SELECT INTO OUTFILE S3` statement while it runs. However, suppose that you're writing a large amount of data from Aurora MySQL to Amazon S3 using this statement, and you know the size of the data selected by the statement. In this case, you can estimate progress by monitoring the creation of data files in Amazon S3.

To do so, you can use the fact that a data file is created in the specified Amazon S3 bucket for about every 6 GB of data selected by the statement. Divide the size of the data selected by 6 GB to get the estimated number of data files to create. You can then estimate the progress of the statement by monitoring the number of files uploaded to Amazon S3 while the statement runs.
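The estimate described above amounts to a ceiling division by the 6 GB threshold, as in this sketch (the helper names are illustrative only):

```python
import math

DEFAULT_FILE_SIZE_THRESHOLD_GB = 6

def estimated_file_count(selected_data_gb,
                         threshold_gb=DEFAULT_FILE_SIZE_THRESHOLD_GB):
    """Rough number of data files a SELECT INTO OUTFILE S3 statement creates."""
    return max(1, math.ceil(selected_data_gb / threshold_gb))

def estimated_progress(files_uploaded, selected_data_gb):
    """Fraction complete, estimated from files already visible in Amazon S3."""
    return min(1.0, files_uploaded / estimated_file_count(selected_data_gb))
```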

### Examples
<a name="AuroraMySQL.Integrating.SaveIntoS3.Examples"></a>

The following statement selects all of the data in the `employees` table and saves the data into an Amazon S3 bucket that is in a different region from the Aurora MySQL DB cluster. The statement creates data files in which each field is terminated by a comma (`,`) character and each row is terminated by a newline (`\n`) character. The statement returns an error if files that match the `sample_employee_data` file prefix exist in the specified Amazon S3 bucket.

```
SELECT * FROM employees INTO OUTFILE S3 's3-us-west-2://aurora-select-into-s3-pdx/sample_employee_data'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n';
```

The following statement selects all of the data in the `employees` table and saves the data into an Amazon S3 bucket that is in the same region as the Aurora MySQL DB cluster. The statement creates data files in which each field is terminated by a comma (`,`) character and each row is terminated by a newline (`\n`) character, and also a manifest file. The statement returns an error if files that match the `sample_employee_data` file prefix exist in the specified Amazon S3 bucket.

```
SELECT * FROM employees INTO OUTFILE S3 's3://aurora-select-into-s3-pdx/sample_employee_data'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    MANIFEST ON;
```

The following statement selects all of the data in the `employees` table and saves the data into an Amazon S3 bucket that is in a different region from the Aurora DB cluster. The statement creates data files in which each field is terminated by a comma (`,`) character and each row is terminated by a newline (`\n`) character. The statement overwrites any existing files that match the `sample_employee_data` file prefix in the specified Amazon S3 bucket.

```
SELECT * FROM employees INTO OUTFILE S3 's3-us-west-2://aurora-select-into-s3-pdx/sample_employee_data'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    OVERWRITE ON;
```

The following statement selects all of the data in the `employees` table and saves the data into an Amazon S3 bucket that is in the same region as the Aurora MySQL DB cluster. The statement creates data files in which each field is terminated by a comma (`,`) character and each row is terminated by a newline (`\n`) character, and also a manifest file. The statement overwrites any existing files that match the `sample_employee_data` file prefix in the specified Amazon S3 bucket.

```
SELECT * FROM employees INTO OUTFILE S3 's3://aurora-select-into-s3-pdx/sample_employee_data'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    MANIFEST ON
    OVERWRITE ON;
```

# Invoking a Lambda function from an Amazon Aurora MySQL DB cluster
<a name="AuroraMySQL.Integrating.Lambda"></a><a name="lambda"></a>

You can invoke an AWS Lambda function from an Amazon Aurora MySQL-Compatible Edition DB cluster with the native function `lambda_sync` or `lambda_async`. Before invoking a Lambda function from Aurora MySQL, the Aurora DB cluster must have access to Lambda. For details about granting access to Aurora MySQL, see [Giving Aurora access to Lambda](AuroraMySQL.Integrating.LambdaAccess.md). For information about the `lambda_sync` and `lambda_async` stored functions, see [Invoking a Lambda function with an Aurora MySQL native function](AuroraMySQL.Integrating.NativeLambda.md). 

 You can also call an AWS Lambda function by using a stored procedure. However, using a stored procedure is deprecated. We strongly recommend using an Aurora MySQL native function if you are using one of the following Aurora MySQL versions: 
+ Aurora MySQL version 2, for MySQL 5.7-compatible clusters.
+ Aurora MySQL version 3.01 and higher, for MySQL 8.0-compatible clusters. The stored procedure isn't available in Aurora MySQL version 3.

For information about giving Aurora access to Lambda and invoking a Lambda function, see the following topics.

**Topics**
+ [Giving Aurora access to Lambda](AuroraMySQL.Integrating.LambdaAccess.md)
+ [Invoking a Lambda function with an Aurora MySQL native function](AuroraMySQL.Integrating.NativeLambda.md)
+ [Invoking a Lambda function with an Aurora MySQL stored procedure (deprecated)](AuroraMySQL.Integrating.ProcLambda.md)

# Giving Aurora access to Lambda
<a name="AuroraMySQL.Integrating.LambdaAccess"></a>

Before you can invoke Lambda functions from an Aurora MySQL DB cluster, make sure to first give your cluster permission to access Lambda.

**To give Aurora MySQL access to Lambda**

1. Create an AWS Identity and Access Management (IAM) policy that provides the permissions that allow your Aurora MySQL DB cluster to invoke Lambda functions. For instructions, see [Creating an IAM policy to access AWS Lambda resources](AuroraMySQL.Integrating.Authorizing.IAM.LambdaCreatePolicy.md).

1. Create an IAM role, and attach the IAM policy you created in [Creating an IAM policy to access AWS Lambda resources](AuroraMySQL.Integrating.Authorizing.IAM.LambdaCreatePolicy.md) to the new IAM role. For instructions, see [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md).

1. Set the `aws_default_lambda_role` DB cluster parameter to the Amazon Resource Name (ARN) of the new IAM role.

   If the cluster is part of an Aurora global database, apply the same setting for each Aurora cluster in the global database. 

   For more information about DB cluster parameters, see [Amazon Aurora DB cluster and DB instance parameters](USER_WorkingWithDBClusterParamGroups.md#Aurora.Managing.ParameterGroups).

1. To permit database users in an Aurora MySQL DB cluster to invoke Lambda functions, associate the role that you created in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md) with the DB cluster. For information about associating an IAM role with a DB cluster, see [Associating an IAM role with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.md).

    If the cluster is part of an Aurora global database, associate the role with each Aurora cluster in the global database. 

1. Configure your Aurora MySQL DB cluster to allow outbound connections to Lambda. For instructions, see [Enabling network communication from Amazon Aurora to other AWS services](AuroraMySQL.Integrating.Authorizing.Network.md).

    If the cluster is part of an Aurora global database, enable outbound connections for each Aurora cluster in the global database. 

# Invoking a Lambda function with an Aurora MySQL native function
<a name="AuroraMySQL.Integrating.NativeLambda"></a>

**Note**  
You can call the native functions `lambda_sync` and `lambda_async` when you use Aurora MySQL version 2, or Aurora MySQL version 3.01 and higher. For more information about Aurora MySQL versions, see [Database engine updates for Amazon Aurora MySQL](AuroraMySQL.Updates.md).

You can invoke an AWS Lambda function from an Aurora MySQL DB cluster by calling the native functions `lambda_sync` and `lambda_async`. This approach can be useful when you want to integrate your database running on Aurora MySQL with other AWS services. For example, you might want to send a notification using Amazon Simple Notification Service (Amazon SNS) whenever a row is inserted into a specific table in your database.

**Contents**
+ [Working with native functions to invoke a Lambda function](#AuroraMySQL.Integrating.NativeLambda.lambda_functions)
  + [Granting the role in Aurora MySQL version 3](#AuroraMySQL.Integrating.NativeLambda.lambda_functions.v3)
  + [Granting the privilege in Aurora MySQL version 2](#AuroraMySQL.Integrating.NativeLambda.lambda_functions.v2)
  + [Syntax for the lambda_sync function](#AuroraMySQL.Integrating.NativeLambda.lambda_functions.Sync.Syntax)
  + [Parameters for the lambda_sync function](#AuroraMySQL.Integrating.NativeLambda.lambda_functions.Sync.Parameters)
  + [Example for the lambda_sync function](#AuroraMySQL.Integrating.NativeLambda.lambda_functions.Sync.Example)
  + [Syntax for the lambda_async function](#AuroraMySQL.Integrating.NativeLambda.lambda_functions.Async.Syntax)
  + [Parameters for the lambda_async function](#AuroraMySQL.Integrating.NativeLambda.lambda_functions.Async.Parameters)
  + [Example for the lambda_async function](#AuroraMySQL.Integrating.NativeLambda.lambda_functions.Async.Example)
  + [Invoking a Lambda function within a trigger](#AuroraMySQL.Integrating.NativeLambda.lambda_functions.trigger)

## Working with native functions to invoke a Lambda function
<a name="AuroraMySQL.Integrating.NativeLambda.lambda_functions"></a>

The `lambda_sync` and `lambda_async` functions are built-in, native functions that invoke a Lambda function synchronously or asynchronously. When you must know the result of the Lambda function before moving on to another action, use the synchronous function `lambda_sync`. When you don't need to know the result of the Lambda function before moving on to another action, use the asynchronous function `lambda_async`.

### Granting the role in Aurora MySQL version 3
<a name="AuroraMySQL.Integrating.NativeLambda.lambda_functions.v3"></a>

In Aurora MySQL version 3, the user invoking a native function must be granted the `AWS_LAMBDA_ACCESS` role. To grant this role to a user, connect to the DB instance as the administrative user, and run the following statement.

```
GRANT AWS_LAMBDA_ACCESS TO user@domain-or-ip-address
```

You can revoke this role by running the following statement.

```
REVOKE AWS_LAMBDA_ACCESS FROM user@domain-or-ip-address
```

**Tip**  
When you use the role technique in Aurora MySQL version 3, you can also activate the role by using the `SET ROLE role_name` or `SET ROLE ALL` statement. If you aren't familiar with the MySQL 8.0 role system, you can learn more in [Role-based privilege model](AuroraMySQL.Compare-80-v3.md#AuroraMySQL.privilege-model). For more details, see [Using roles](https://dev.mysql.com/doc/refman/8.0/en/roles.html) in the *MySQL Reference Manual*.  
This only applies to the current active session. When you reconnect, you must run the `SET ROLE` statement again to grant privileges. For more information, see [SET ROLE statement](https://dev.mysql.com/doc/refman/8.0/en/set-role.html) in the *MySQL Reference Manual*.  
You can use the `activate_all_roles_on_login` DB cluster parameter to automatically activate all roles when a user connects to a DB instance. When this parameter is set, you generally don't have to call the `SET ROLE` statement explicitly to activate a role. For more information, see [activate_all_roles_on_login](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_activate_all_roles_on_login) in the *MySQL Reference Manual*.  
However, you must call `SET ROLE ALL` explicitly at the beginning of a stored procedure to activate the role, when the stored procedure is called by a different user.

If you get an error such as the following when you try to invoke a Lambda function, then run a `SET ROLE` statement.

```
SQL Error [1227] [42000]: Access denied; you need (at least one of) the Invoke Lambda privilege(s) for this operation
```

Make sure that you're granting the role to the correct user, as shown in the `mysql.user` table entries. There might be multiple users with the same name, but on different hosts. Depending on which application or host is invoking the `lambda_sync` function, MySQL selects the user with the best match according to the `host` column entries.

### Granting the privilege in Aurora MySQL version 2
<a name="AuroraMySQL.Integrating.NativeLambda.lambda_functions.v2"></a>

In Aurora MySQL version 2, the user invoking a native function must be granted the `INVOKE LAMBDA` privilege. To grant this privilege to a user, connect to the DB instance as the administrative user, and run the following statement.

```
GRANT INVOKE LAMBDA ON *.* TO user@domain-or-ip-address
```

You can revoke this privilege by running the following statement.

```
REVOKE INVOKE LAMBDA ON *.* FROM user@domain-or-ip-address
```

### Syntax for the lambda_sync function
<a name="AuroraMySQL.Integrating.NativeLambda.lambda_functions.Sync.Syntax"></a>

You invoke the `lambda_sync` function synchronously with the `RequestResponse` invocation type. The function returns the result of the Lambda invocation in a JSON payload. The function has the following syntax.

```
lambda_sync (
  lambda_function_ARN,
  JSON_payload
)
```

### Parameters for the lambda_sync function
<a name="AuroraMySQL.Integrating.NativeLambda.lambda_functions.Sync.Parameters"></a>

The `lambda_sync` function has the following parameters.

**lambda_function_ARN**  
The Amazon Resource Name (ARN) of the Lambda function to invoke.

**JSON_payload**  
The payload for the invoked Lambda function, in JSON format.

**Note**  
Aurora MySQL version 3 supports the JSON parsing functions from MySQL 8.0. However, Aurora MySQL version 2 doesn't include those functions. JSON parsing isn't required when a Lambda function returns an atomic value, such as a number or a string.

### Example for the lambda_sync function
<a name="AuroraMySQL.Integrating.NativeLambda.lambda_functions.Sync.Example"></a>

The following query based on `lambda_sync` invokes the Lambda function `BasicTestLambda` synchronously using the function ARN. The payload for the function is `{"operation": "ping"}`.

```
SELECT lambda_sync(
    'arn:aws:lambda:us-east-1:123456789012:function:BasicTestLambda',
    '{"operation": "ping"}');
```
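For context, a minimal Lambda handler that such a query could target might look like the following sketch. The `ping` operation and `pong` reply are purely illustrative, matching the example payload above; they aren't part of any Aurora contract.

```python
def lambda_handler(event, context):
    """Hypothetical handler for the BasicTestLambda example above.
    lambda_sync returns this handler's return value to the SQL caller
    as the JSON payload of the invocation result."""
    if event.get("operation") == "ping":
        return "pong"
    return {"error": f"unknown operation: {event.get('operation')}"}
```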

### Syntax for the lambda_async function
<a name="AuroraMySQL.Integrating.NativeLambda.lambda_functions.Async.Syntax"></a>

You invoke the `lambda_async` function asynchronously with the `Event` invocation type. The function returns the result of the Lambda invocation in a JSON payload. The function has the following syntax.

```
lambda_async (
  lambda_function_ARN,
  JSON_payload
)
```

### Parameters for the lambda_async function
<a name="AuroraMySQL.Integrating.NativeLambda.lambda_functions.Async.Parameters"></a>

The `lambda_async` function has the following parameters.

**lambda_function_ARN**  
The Amazon Resource Name (ARN) of the Lambda function to invoke.

**JSON_payload**  
The payload for the invoked Lambda function, in JSON format.

**Note**  
Aurora MySQL version 3 supports the JSON parsing functions from MySQL 8.0. However, Aurora MySQL version 2 doesn't include those functions. JSON parsing isn't required when a Lambda function returns an atomic value, such as a number or a string.

### Example for the lambda_async function
<a name="AuroraMySQL.Integrating.NativeLambda.lambda_functions.Async.Example"></a>

The following query based on `lambda_async` invokes the Lambda function `BasicTestLambda` asynchronously using the function ARN. The payload for the function is `{"operation": "ping"}`.

```
SELECT lambda_async(
    'arn:aws:lambda:us-east-1:123456789012:function:BasicTestLambda',
    '{"operation": "ping"}');
```

### Invoking a Lambda function within a trigger
<a name="AuroraMySQL.Integrating.NativeLambda.lambda_functions.trigger"></a>

You can use triggers to call Lambda on data-modifying statements. The following example uses the `lambda_async` native function and stores the result in a variable.

```
mysql> SET @result = 0;
mysql> DELIMITER //
mysql> CREATE TRIGGER myFirstTrigger
       AFTER INSERT
           ON Test_trigger FOR EACH ROW
       BEGIN
       SELECT lambda_async(
           'arn:aws:lambda:us-east-1:123456789012:function:BasicTestLambda',
           '{"operation": "ping"}')
           INTO @result;
       END; //
mysql> DELIMITER ;

**Note**  
Triggers aren't run once per SQL statement, but once per row modified, one row at a time. When a trigger runs, the process is synchronous. The data-modifying statement only returns when the trigger completes.  
Be careful when invoking an AWS Lambda function from triggers on tables that experience high write traffic. `INSERT`, `UPDATE`, and `DELETE` triggers are activated per row. A write-heavy workload on a table with `INSERT`, `UPDATE`, or `DELETE` triggers results in a large number of calls to your AWS Lambda function.
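Because triggers fire once per modified row, the invocation count scales with rows, not statements. A back-of-the-envelope sketch:

```python
def lambda_invocations(rows_modified_per_statement):
    """Each modified row fires the trigger once, so each row costs one
    Lambda invocation, regardless of how many statements ran."""
    return sum(rows_modified_per_statement)

# One 10,000-row INSERT plus 500 single-row UPDATEs on the table
# would invoke the function 10,500 times.
```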

# Invoking a Lambda function with an Aurora MySQL stored procedure (deprecated)
<a name="AuroraMySQL.Integrating.ProcLambda"></a>

You can invoke an AWS Lambda function from an Aurora MySQL DB cluster by calling the `mysql.lambda_async` procedure. This approach can be useful when you want to integrate your database running on Aurora MySQL with other AWS services. For example, you might want to send a notification using Amazon Simple Notification Service (Amazon SNS) whenever a row is inserted into a specific table in your database. 

**Contents**
+ [Aurora MySQL version considerations](#AuroraMySQL.Integrating.ProcLambda.caveats)
+ [Working with the mysql.lambda\_async procedure to invoke a Lambda function (deprecated)](#AuroraMySQL.Integrating.Lambda.mysql_lambda_async)
  + [Syntax](#AuroraMySQL.Integrating.Lambda.mysql_lambda_async.Syntax)
  + [Parameters](#AuroraMySQL.Integrating.Lambda.mysql_lambda_async.Parameters)
  + [Examples](#AuroraMySQL.Integrating.Lambda.mysql_lambda_async.Examples)

## Aurora MySQL version considerations
<a name="AuroraMySQL.Integrating.ProcLambda.caveats"></a>

Starting in Aurora MySQL version 2, you can use the native function method instead of these stored procedures to invoke a Lambda function. For more information about the native functions, see [Working with native functions to invoke a Lambda function](AuroraMySQL.Integrating.NativeLambda.md#AuroraMySQL.Integrating.NativeLambda.lambda_functions).

In Aurora MySQL version 2, the stored procedure `mysql.lambda_async` is deprecated. We strongly recommend that you work with native Lambda functions instead.

In Aurora MySQL version 3, the stored procedure isn't available.

## Working with the mysql.lambda\_async procedure to invoke a Lambda function (deprecated)
<a name="AuroraMySQL.Integrating.Lambda.mysql_lambda_async"></a>

The `mysql.lambda_async` procedure is a built-in stored procedure that invokes a Lambda function asynchronously. To use this procedure, your database user must have `EXECUTE` privilege on the `mysql.lambda_async` stored procedure.

### Syntax
<a name="AuroraMySQL.Integrating.Lambda.mysql_lambda_async.Syntax"></a>

The `mysql.lambda_async` procedure has the following syntax.

```
CALL mysql.lambda_async (
  lambda_function_ARN,
  lambda_function_input
)
```

### Parameters
<a name="AuroraMySQL.Integrating.Lambda.mysql_lambda_async.Parameters"></a>

The `mysql.lambda_async` procedure has the following parameters.

*lambda\_function\_ARN*  
The Amazon Resource Name (ARN) of the Lambda function to invoke.

*lambda\_function\_input*  
The input string, in JSON format, for the invoked Lambda function.

### Examples
<a name="AuroraMySQL.Integrating.Lambda.mysql_lambda_async.Examples"></a>

As a best practice, we recommend that you wrap calls to the `mysql.lambda_async` procedure in a stored procedure that can be called from different sources such as triggers or client code. This approach can help to avoid impedance mismatch issues and make it easier to invoke Lambda functions. 

**Note**  
Be careful when invoking an AWS Lambda function from triggers on tables that experience high write traffic. `INSERT`, `UPDATE`, and `DELETE` triggers are activated per row. A write-heavy workload on a table with `INSERT`, `UPDATE`, or `DELETE` triggers results in a large number of calls to your AWS Lambda function.   
Although calls to the `mysql.lambda_async` procedure are asynchronous, triggers are synchronous. A statement that results in a large number of trigger activations doesn't wait for the call to the AWS Lambda function to complete, but it does wait for the triggers to complete before returning control to the client.

**Example: Invoke an AWS Lambda function to send email**  
The following example creates a stored procedure that you can call in your database code to send an email using a Lambda function.  
**AWS Lambda Function**  

```
import boto3

ses = boto3.client('ses')

def SES_send_email(event, context):

    return ses.send_email(
        Source=event['email_from'],
        Destination={
            'ToAddresses': [
            event['email_to'],
            ]
        },

        Message={
            'Subject': {
            'Data': event['email_subject']
            },
            'Body': {
                'Text': {
                    'Data': event['email_body']
                }
            }
        }
    )
```
**Stored Procedure**  

```
DROP PROCEDURE IF EXISTS SES_send_email;
DELIMITER ;;
  CREATE PROCEDURE SES_send_email(IN email_from VARCHAR(255),
                                  IN email_to VARCHAR(255),
                                  IN subject VARCHAR(255),
                                  IN body TEXT) LANGUAGE SQL
  BEGIN
    CALL mysql.lambda_async(
         'arn:aws:lambda:us-west-2:123456789012:function:SES_send_email',
         CONCAT('{"email_to" : "', email_to,
             '", "email_from" : "', email_from,
             '", "email_subject" : "', subject,
             '", "email_body" : "', body, '"}')
     );
  END
  ;;
DELIMITER ;
```
**Call the Stored Procedure to Invoke the AWS Lambda Function**  

```
mysql> call SES_send_email('example_from@amazon.com', 'example_to@amazon.com', 'Email subject', 'Email content');
```
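One caveat with the `CONCAT`-based payload above: if the subject or body contains a double quote or backslash, the assembled string is no longer valid JSON and the Lambda invocation fails to parse it. When you build payloads outside SQL, a JSON serializer avoids this; a sketch in Python (the key names match the Lambda function above):

```python
import json

def build_email_payload(email_from, email_to, subject, body):
    # json.dumps escapes embedded quotes, backslashes, and newlines,
    # which naive string concatenation does not.
    return json.dumps({
        "email_from": email_from,
        "email_to": email_to,
        "email_subject": subject,
        "email_body": body,
    })
```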

**Example: Invoke an AWS Lambda function to publish an event from a trigger**  
The following example creates a stored procedure that publishes an event by using Amazon SNS. The code calls the procedure from a trigger when a row is added to a table.  
**AWS Lambda Function**  

```
import boto3

sns = boto3.client('sns')

def SNS_publish_message(event, context):

    return sns.publish(
        TopicArn='arn:aws:sns:us-west-2:123456789012:Sample_Topic',
        Message=event['message'],
        Subject=event['subject'],
        MessageStructure='string'
    )
```
**Stored Procedure**  

```
DROP PROCEDURE IF EXISTS SNS_Publish_Message;
DELIMITER ;;
CREATE PROCEDURE SNS_Publish_Message (IN subject VARCHAR(255),
                                      IN message TEXT) LANGUAGE SQL
BEGIN
  CALL mysql.lambda_async('arn:aws:lambda:us-west-2:123456789012:function:SNS_publish_message',
     CONCAT('{ "subject" : "', subject,
            '", "message" : "', message, '" }')
     );
END
;;
DELIMITER ;
```
**Table**  

```
CREATE TABLE `Customer_Feedback` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `customer_name` varchar(255) NOT NULL,
  `customer_feedback` varchar(1024) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```
**Trigger**  

```
DELIMITER ;;
CREATE TRIGGER TR_Customer_Feedback_AI
  AFTER INSERT ON Customer_Feedback
  FOR EACH ROW
BEGIN
  SELECT CONCAT('New customer feedback from ', NEW.customer_name), NEW.customer_feedback INTO @subject, @feedback;
  CALL SNS_Publish_Message(@subject, @feedback);
END
;;
DELIMITER ;
```
**Insert a Row into the Table to Trigger the Notification**  

```
mysql> insert into Customer_Feedback (customer_name, customer_feedback) VALUES ('Sample Customer', 'Good job guys!');
```

# Publishing Amazon Aurora MySQL logs to Amazon CloudWatch Logs
<a name="AuroraMySQL.Integrating.CloudWatch"></a>

You can configure your Aurora MySQL DB cluster to publish general, slow, audit, and error log data to a log group in Amazon CloudWatch Logs. With CloudWatch Logs, you can perform real-time analysis of the log data, and use CloudWatch to create alarms and view metrics. You can use CloudWatch Logs to store your log records in highly durable storage.

To publish logs to CloudWatch Logs, the respective logs must be enabled. Error logs are enabled by default, but you must enable the other types of logs explicitly. For information about enabling logs in MySQL, see [Selecting general query and slow query log output destinations](https://dev.mysql.com/doc/refman/8.0/en/log-destinations.html) in the MySQL documentation. For more information about enabling Aurora MySQL audit logs, see [Enabling Advanced Auditing](AuroraMySQL.Auditing.md#AuroraMySQL.Auditing.Enable).

**Note**  
If you disable log export, Aurora doesn't delete the existing log groups or log streams. The existing log data remains available in CloudWatch Logs, depending on log retention, and you still incur charges for stored audit log data. You can delete log streams and log groups using the CloudWatch Logs console, the AWS CLI, or the CloudWatch Logs API.
An alternative way to publish audit logs to CloudWatch Logs is by enabling Advanced Auditing, then creating a custom DB cluster parameter group and setting the `server_audit_logs_upload` parameter to `1`. The default for the `server_audit_logs_upload` DB cluster parameter is `0`. For information on enabling Advanced Auditing, see [Using Advanced Auditing with an Amazon Aurora MySQL DB cluster](AuroraMySQL.Auditing.md).  
If you use this alternative method, you must have an IAM role to access CloudWatch Logs and set the `aws_default_logs_role` cluster-level parameter to the ARN for this role. For information about creating the role, see [Setting up IAM roles to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.md). However, if you have the `AWSServiceRoleForRDS` service-linked role, it provides access to CloudWatch Logs and overrides any custom-defined roles. For information about service-linked roles for Amazon RDS, see [Using service-linked roles for Amazon Aurora](UsingWithRDS.IAM.ServiceLinkedRoles.md).
If you don't want to export audit logs to CloudWatch Logs, make sure that all methods of exporting audit logs are disabled. These methods are the AWS Management Console, the AWS CLI, the RDS API, and the `server_audit_logs_upload` parameter.

## Console
<a name="AuroraMySQL.Integrating.CloudWatch.Console"></a>

You can publish Aurora MySQL logs for provisioned clusters to CloudWatch Logs with the console.

**To publish Aurora MySQL logs from the console**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the Aurora MySQL DB cluster that you want to publish the log data for.

1. Choose **Modify**.

1. In the **Log exports** section, choose the logs that you want to start publishing to CloudWatch Logs.

1. Choose **Continue**, and then choose **Modify DB Cluster** on the summary page.

## AWS CLI
<a name="AuroraMySQL.Integrating.CloudWatch.CLI"></a>

You can publish Aurora MySQL logs for provisioned clusters with the AWS CLI. To do so, you run the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) AWS CLI command with the following options:
+ `--db-cluster-identifier`—The DB cluster identifier.
+ `--cloudwatch-logs-export-configuration`—The configuration setting for the log types to be enabled for export to CloudWatch Logs for the DB cluster.

You can also publish Aurora MySQL logs by running one of the following AWS CLI commands: 
+ [create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html)
+ [restore-db-cluster-from-s3](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-from-s3.html)
+ [restore-db-cluster-from-snapshot](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-from-snapshot.html)
+ [restore-db-cluster-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-to-point-in-time.html)

Run one of these AWS CLI commands with the following options:
+ `--db-cluster-identifier`—The DB cluster identifier.
+ `--engine`—The database engine.
+ `--enable-cloudwatch-logs-exports`—The configuration setting for the log types to be enabled for export to CloudWatch Logs for the DB cluster.

Other options might be required depending on the AWS CLI command that you run.

**Example**  
The following command modifies an existing Aurora MySQL DB cluster to publish log files to CloudWatch Logs.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-cluster \
    --db-cluster-identifier mydbcluster \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["error","general","slowquery","audit","instance"]}'
```
For Windows:  

```
aws rds modify-db-cluster ^
    --db-cluster-identifier mydbcluster ^
    --cloudwatch-logs-export-configuration "{\"EnableLogTypes\":[\"error\",\"general\",\"slowquery\",\"audit\",\"instance\"]}"
```

**Example**  
The following command creates an Aurora MySQL DB cluster to publish log files to CloudWatch Logs.  
For Linux, macOS, or Unix:  

```
aws rds create-db-cluster \
    --db-cluster-identifier mydbcluster \
    --engine aurora \
    --enable-cloudwatch-logs-exports '["error","general","slowquery","audit","instance"]'
```
For Windows:  

```
aws rds create-db-cluster ^
    --db-cluster-identifier mydbcluster ^
    --engine aurora ^
    --enable-cloudwatch-logs-exports "[\"error\",\"general\",\"slowquery\",\"audit\",\"instance\"]"
```

## RDS API
<a name="AuroraMySQL.Integrating.CloudWatch.API"></a>

You can publish Aurora MySQL logs for provisioned clusters with the RDS API. To do so, you run the [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) operation with the following options:
+ `DBClusterIdentifier`—The DB cluster identifier.
+ `CloudwatchLogsExportConfiguration`—The configuration setting for the log types to be enabled for export to CloudWatch Logs for the DB cluster.

You can also publish Aurora MySQL logs with the RDS API by running one of the following RDS API operations: 
+ [CreateDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html)
+ [RestoreDBClusterFromS3](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromS3.html)
+ [RestoreDBClusterFromSnapshot](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromSnapshot.html)
+ [RestoreDBClusterToPointInTime](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterToPointInTime.html)

Run the RDS API operation with the following parameters: 
+ `DBClusterIdentifier`—The DB cluster identifier.
+ `Engine`—The database engine.
+ `EnableCloudwatchLogsExports`—The configuration setting for the log types to be enabled for export to CloudWatch Logs for the DB cluster.

Other parameters might be required depending on the RDS API operation that you run.
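These operations can also be scripted. The following is a sketch using the AWS SDK for Python (Boto3); the cluster identifier `mydbcluster` matches the earlier examples, and running the guarded call requires credentials with permission for `rds:ModifyDBCluster`.

```python
def export_config(enable=(), disable=()):
    """Build the CloudwatchLogsExportConfiguration argument for
    ModifyDBCluster from lists of log types to enable and disable."""
    cfg = {}
    if enable:
        cfg["EnableLogTypes"] = list(enable)
    if disable:
        cfg["DisableLogTypes"] = list(disable)
    return cfg

if __name__ == "__main__":
    import boto3  # requires AWS credentials and rds:ModifyDBCluster
    rds = boto3.client("rds")
    rds.modify_db_cluster(
        DBClusterIdentifier="mydbcluster",
        CloudwatchLogsExportConfiguration=export_config(
            enable=["error", "slowquery"]),
    )
```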

## Monitoring log events in Amazon CloudWatch
<a name="AuroraMySQL.Integrating.CloudWatch.Monitor"></a>

After enabling Aurora MySQL log events, you can monitor the events in Amazon CloudWatch Logs. A new log group is automatically created for the Aurora DB cluster under the following prefix, in which `cluster-name` represents the DB cluster name, and `log_type` represents the log type.

```
/aws/rds/cluster/cluster-name/log_type
```

For example, if you configure the export function to include the slow query log for a DB cluster named `mydbcluster`, slow query data is stored in the `/aws/rds/cluster/mydbcluster/slowquery` log group.

The events from all instances in your cluster are pushed to a log group using different log streams. The behavior depends on which of the following conditions is true:
+ A log group with the specified name exists.

  Aurora uses the existing log group to export log data for the cluster. To create log groups with predefined log retention periods, metric filters, and customer access, you can use automated configuration, such as AWS CloudFormation.
+ A log group with the specified name doesn't exist.

  When a matching log entry is detected in the log file for the instance, Aurora MySQL creates a new log group in CloudWatch Logs automatically. The log group uses the default log retention period of **Never Expire**.

  To change the log retention period, use the CloudWatch Logs console, the AWS CLI, or the CloudWatch Logs API. For more information about changing log retention periods in CloudWatch Logs, see [Change log data retention in CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SettingLogRetention.html).
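For example, a retention period can be set programmatically with Boto3. This is a sketch: the cluster name `mydbcluster` and log type `slowquery` follow the naming convention described above, and the guarded call requires `logs:PutRetentionPolicy` permission.

```python
def rds_log_group(cluster_name, log_type):
    # Log group naming convention for Aurora exports:
    # /aws/rds/cluster/cluster-name/log_type
    return f"/aws/rds/cluster/{cluster_name}/{log_type}"

if __name__ == "__main__":
    import boto3  # requires logs:PutRetentionPolicy
    logs = boto3.client("logs")
    # Keep slow query logs for 30 days instead of the default Never Expire.
    logs.put_retention_policy(
        logGroupName=rds_log_group("mydbcluster", "slowquery"),
        retentionInDays=30,
    )
```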

To search for information within the log events for a DB cluster, use the CloudWatch Logs console, the AWS CLI, or the CloudWatch Logs API. For more information about searching and filtering log data, see [Searching and filtering log data](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html).
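As one scripted alternative to the console, the CloudWatch Logs `FilterLogEvents` API can pull matching events from an exported log group. The sketch below separates a trivial client-side match (so the logic is testable) from the guarded API call; the log group name assumes the `mydbcluster` example above, and `"Query_time"` is just an illustrative pattern for slow query entries.

```python
def matching_events(events, pattern):
    """Minimal client-side filter: keep events whose message contains
    pattern. (CloudWatch Logs filter patterns are richer than this;
    it's only an illustration of the result shape.)"""
    return [e for e in events if pattern in e["message"]]

if __name__ == "__main__":
    import boto3  # requires logs:FilterLogEvents
    logs = boto3.client("logs")
    resp = logs.filter_log_events(
        logGroupName="/aws/rds/cluster/mydbcluster/slowquery",
        filterPattern='"Query_time"',
    )
    for event in resp["events"]:
        print(event["message"])
```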