Export log data to Amazon S3 using the console
In the following examples, you use the Amazon CloudWatch console to export all data from an Amazon CloudWatch Logs log group named my-log-group to an Amazon S3 bucket named my-exported-logs.
Exporting log data to S3 buckets that are encrypted by SSE-KMS is supported. Exporting to buckets encrypted with DSSE-KMS is not supported.
The details of how you set up the export depend on whether the Amazon S3 bucket that you want to export to is in the same account as the logs that are being exported, or in a different account.
Same-account export
If the Amazon S3 bucket is in the same account as the logs that are being exported, use the instructions in this section.
Step 1: Create an Amazon S3 bucket
We recommend that you use a bucket that was created specifically for CloudWatch Logs. However, if you want to use an existing bucket, you can skip to step 2.
Note
The S3 bucket must reside in the same Region as the log data to export. CloudWatch Logs doesn't support exporting data to S3 buckets in a different Region.
To create an S3 bucket
- Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
- If necessary, change the Region. From the navigation bar, choose the Region where your CloudWatch Logs reside.
- Choose Create Bucket.
- For Bucket Name, enter a name for the bucket.
- For Region, select the Region where your CloudWatch Logs data resides.
- Choose Create.
Step 2: Set up access permissions
To create the export task in step 5, you must be signed in with the AmazonS3ReadOnlyAccess IAM policy and with the following permissions (a scripted sketch of these permissions follows this list):
- logs:CreateExportTask
- logs:CancelExportTask
- logs:DescribeExportTasks
- logs:DescribeLogStreams
- logs:DescribeLogGroups
To provide access, add permissions to your users, groups, or roles:
- Users and groups in AWS IAM Identity Center: Create a permission set. Follow the instructions in Create a permission set in the AWS IAM Identity Center User Guide.
- Users managed in IAM through an identity provider: Create a role for identity federation. Follow the instructions in Create a role for a third-party identity provider (federation) in the IAM User Guide.
- IAM users:
  - Create a role that your user can assume. Follow the instructions in Create a role for an IAM user in the IAM User Guide.
  - (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in Adding permissions to a user (console) in the IAM User Guide.
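If you prefer to grant these permissions with a script instead of the console, the following Python (boto3) sketch creates a customer managed policy containing the actions listed above. The policy name is a hypothetical placeholder; attach the resulting policy to the user or role that will create the export task.

    import json
    import boto3

    iam = boto3.client("iam")

    # Permissions required to create and monitor CloudWatch Logs export tasks.
    export_permissions = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "logs:CreateExportTask",
                    "logs:CancelExportTask",
                    "logs:DescribeExportTasks",
                    "logs:DescribeLogStreams",
                    "logs:DescribeLogGroups",
                ],
                "Resource": "*",
            }
        ],
    }

    # "CloudWatchLogsExportPermissions" is a hypothetical name; choose your own.
    iam.create_policy(
        PolicyName="CloudWatchLogsExportPermissions",
        PolicyDocument=json.dumps(export_permissions),
    )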
Step 3: Set permissions on an S3 bucket
By default, all S3 buckets and objects are private. Only the resource owner, the AWS account that created the bucket, can access the bucket and any objects that it contains. However, the resource owner can choose to grant access permissions to other resources and users by writing an access policy.
When you set the policy, we recommend that you include a randomly generated string as the prefix for the bucket, so that only intended log streams are exported to the bucket.
Important
To make exports to S3 buckets more secure, we now require you to specify the list of source accounts that are allowed to export log data to your S3 bucket.
In the following example, the list of account IDs in the aws:SourceAccount key would be the accounts from which a user can export log data to your S3 bucket. The aws:SourceArn key would be the resource for which the action is being taken. You can restrict this to a specific log group, or use a wildcard as shown in this example.
We recommend that you also include the account ID of the account where the S3 bucket is created, to allow export within the same account.
To set permissions on an Amazon S3 bucket
- In the Amazon S3 console, choose the bucket that you created in step 1.
- Choose Permissions, Bucket policy.
- In the Bucket Policy Editor, add the following policy. Change my-exported-logs to the name of your S3 bucket. Be sure to specify the correct Region endpoint, such as us-west-1, for Principal.

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Action": "s3:GetBucketAcl",
                  "Effect": "Allow",
                  "Resource": "arn:aws:s3:::my-exported-logs",
                  "Principal": { "Service": "logs.Region.amazonaws.com" },
                  "Condition": {
                      "StringEquals": {
                          "aws:SourceAccount": [ "AccountId1", "AccountId2", ... ]
                      },
                      "ArnLike": {
                          "aws:SourceArn": [
                              "arn:aws:logs:Region:AccountId1:log-group:*",
                              "arn:aws:logs:Region:AccountId2:log-group:*",
                              ...
                          ]
                      }
                  }
              },
              {
                  "Action": "s3:PutObject",
                  "Effect": "Allow",
                  "Resource": "arn:aws:s3:::my-exported-logs/*",
                  "Principal": { "Service": "logs.Region.amazonaws.com" },
                  "Condition": {
                      "StringEquals": {
                          "s3:x-amz-acl": "bucket-owner-full-control",
                          "aws:SourceAccount": [ "AccountId1", "AccountId2", ... ]
                      },
                      "ArnLike": {
                          "aws:SourceArn": [
                              "arn:aws:logs:Region:AccountId1:log-group:*",
                              "arn:aws:logs:Region:AccountId2:log-group:*",
                              ...
                          ]
                      }
                  }
              }
          ]
      }

- Choose Save to set the policy that you just added as the access policy on your bucket. This policy enables CloudWatch Logs to export log data to your S3 bucket. The bucket owner has full permissions on all of the exported objects.
Warning
If the existing bucket already has one or more policies attached to it, add the statements for CloudWatch Logs access to that policy or policies. We recommend that you evaluate the resulting set of permissions to be sure that they're appropriate for the users who will access the bucket.
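If you manage bucket policies programmatically, the following Python (boto3) sketch applies a policy equivalent to the example above. It assumes a single source account; the bucket name, Region, and account ID are placeholder values, and the call replaces any existing bucket policy, so merge statements first if the bucket already has one.

    import json
    import boto3

    s3 = boto3.client("s3")

    bucket = "my-exported-logs"          # placeholder bucket name from this example
    region = "us-west-1"                 # placeholder Region of the log data
    source_account = "111111111111"      # hypothetical source account ID

    bucket_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": "s3:GetBucketAcl",
                "Effect": "Allow",
                "Resource": f"arn:aws:s3:::{bucket}",
                "Principal": {"Service": f"logs.{region}.amazonaws.com"},
                "Condition": {
                    "StringEquals": {"aws:SourceAccount": [source_account]},
                    "ArnLike": {
                        "aws:SourceArn": [f"arn:aws:logs:{region}:{source_account}:log-group:*"]
                    },
                },
            },
            {
                "Action": "s3:PutObject",
                "Effect": "Allow",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                "Principal": {"Service": f"logs.{region}.amazonaws.com"},
                "Condition": {
                    "StringEquals": {
                        "s3:x-amz-acl": "bucket-owner-full-control",
                        "aws:SourceAccount": [source_account],
                    },
                    "ArnLike": {
                        "aws:SourceArn": [f"arn:aws:logs:{region}:{source_account}:log-group:*"]
                    },
                },
            },
        ],
    }

    # Overwrites any existing bucket policy, so merge statements first if one exists.
    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(bucket_policy))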
(Optional) Step 4: Exporting to a bucket encrypted with SSE-KMS
This step is necessary only if you are exporting to an S3 bucket that uses server-side encryption with AWS KMS keys. This encryption is known as SSE-KMS.
To export to a bucket encrypted with SSE-KMS
- Open the AWS KMS console at https://console.aws.amazon.com/kms.
- To change the AWS Region, use the Region selector in the upper-right corner of the page.
- In the left navigation bar, choose Customer managed keys.
- Choose Create Key.
- For Key type, choose Symmetric.
- For Key usage, choose Encrypt and decrypt and then choose Next.
- Under Add labels, enter an alias for the key and optionally add a description or tags. Then choose Next.
- Under Key administrators, select who can administer this key, and then choose Next.
- Under Define key usage permissions, make no changes and choose Next.
- Review the settings and choose Finish.
- Back at the Customer managed keys page, choose the name of the key that you just created.
- Choose the Key policy tab and choose Switch to policy view.
- In the Key policy section, choose Edit.
- Add the following statement to the key policy statement list. When you do, replace Region with the Region of your logs and replace account-ARN with the ARN of the account that owns the KMS key.

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "Allow CWL Service Principal usage",
                  "Effect": "Allow",
                  "Principal": { "Service": "logs.Region.amazonaws.com" },
                  "Action": [
                      "kms:GenerateDataKey",
                      "kms:Decrypt"
                  ],
                  "Resource": "*"
              },
              {
                  "Sid": "Enable IAM User Permissions",
                  "Effect": "Allow",
                  "Principal": { "AWS": "account-ARN" },
                  "Action": [
                      "kms:GetKeyPolicy*",
                      "kms:PutKeyPolicy*",
                      "kms:DescribeKey*",
                      "kms:CreateAlias*",
                      "kms:ScheduleKeyDeletion*",
                      "kms:Decrypt"
                  ],
                  "Resource": "*"
              }
          ]
      }

- Choose Save changes.
- Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
- Find the bucket that you created in Step 1: Create an Amazon S3 bucket and choose the bucket name.
- Choose the Properties tab. Then, under Default Encryption, choose Edit.
- Under Server-side Encryption, choose Enable.
- Under Encryption type, choose AWS Key Management Service key (SSE-KMS).
- Choose Choose from your AWS KMS keys and find the key that you created.
- For Bucket key, choose Enable.
- Choose Save changes.
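The default-encryption settings from the last few console steps can also be applied with the S3 API. Here is a minimal Python (boto3) sketch; the bucket name and KMS key ARN are placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Placeholders: substitute your bucket name and the ARN of the KMS key you created.
    bucket = "my-exported-logs"
    kms_key_arn = "arn:aws:kms:us-west-1:111111111111:key/EXAMPLE-KEY-ID"

    # Enable SSE-KMS as the default encryption for the bucket, with Bucket Key on.
    s3.put_bucket_encryption(
        Bucket=bucket,
        ServerSideEncryptionConfiguration={
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": kms_key_arn,
                    },
                    "BucketKeyEnabled": True,
                }
            ]
        },
    )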
Step 5: Create an export task
In this step, you create the export task for exporting logs from a log group.
To export data to Amazon S3 using the CloudWatch console
- Sign in with sufficient permissions as documented in Step 2: Set up access permissions.
- Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
- In the navigation pane, choose Log groups.
- On the Log Groups screen, choose the name of the log group.
- Choose Actions, Export data to Amazon S3.
- On the Export data to Amazon S3 screen, under Define data export, set the time range for the data to export using From and To.
- If your log group has multiple log streams, you can provide a log stream prefix to limit the log group data to a specific stream. Choose Advanced, and then for Stream prefix, enter the log stream prefix.
- Under Choose S3 bucket, choose the account associated with the S3 bucket.
- For S3 bucket name, choose an S3 bucket.
- For S3 Bucket prefix, enter the randomly generated string that you specified in the bucket policy.
- Choose Export to export your log data to Amazon S3.
- To view the status of the log data that you exported to Amazon S3, choose Actions and then View all exports to Amazon S3.
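The same export can also be started programmatically with the CreateExportTask API. The following Python (boto3) sketch uses the example log group and bucket names from this section; the task name, time range, and prefix are placeholder values.

    from datetime import datetime, timezone
    import boto3

    logs = boto3.client("logs")

    def to_millis(dt):
        """CloudWatch Logs export tasks take timestamps in milliseconds since the epoch."""
        return int(dt.timestamp() * 1000)

    response = logs.create_export_task(
        taskName="my-log-group-export",                  # hypothetical task name
        logGroupName="my-log-group",                     # log group from this example
        fromTime=to_millis(datetime(2024, 1, 1, tzinfo=timezone.utc)),
        to=to_millis(datetime(2024, 1, 2, tzinfo=timezone.utc)),
        destination="my-exported-logs",                  # S3 bucket from this example
        destinationPrefix="random-string-prefix",        # prefix from your bucket policy
    )
    print("Export task ID:", response["taskId"])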
Cross-account export
If the Amazon S3 bucket is in a different account than the logs that are being exported, use the instructions in this section.
Step 1: Create an Amazon S3 bucket
We recommend that you use a bucket that was created specifically for CloudWatch Logs. However, if you want to use an existing bucket, you can skip to step 2.
Note
The S3 bucket must reside in the same Region as the log data to export. CloudWatch Logs doesn't support exporting data to S3 buckets in a different Region.
To create an S3 bucket
- Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
- If necessary, change the Region. From the navigation bar, choose the Region where your CloudWatch Logs reside.
- Choose Create Bucket.
- For Bucket Name, enter a name for the bucket.
- For Region, select the Region where your CloudWatch Logs data resides.
- Choose Create.
Step 2: Set up access permissions
First, you must create a new IAM policy to enable CloudWatch Logs to have the s3:PutObject permission for the destination Amazon S3 bucket in the destination account.
The policy that you create depends on whether the destination bucket uses AWS KMS encryption.
To create an IAM policy to export logs to an Amazon S3 bucket
- Open the IAM console at https://console.aws.amazon.com/iam/.
- In the navigation pane on the left, choose Policies.
- Choose Create policy.
- In the Policy editor section, choose JSON.
- If the destination bucket does not use AWS KMS encryption, paste the following policy into the editor.

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": "s3:PutObject",
                  "Resource": "arn:aws:s3:::my-exported-logs/*"
              }
          ]
      }

- If the destination bucket does use AWS KMS encryption, paste the following policy into the editor.

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": "s3:PutObject",
                  "Resource": "arn:aws:s3:::my-exported-logs/*"
              },
              {
                  "Effect": "Allow",
                  "Action": [
                      "kms:GenerateDataKey",
                      "kms:Decrypt"
                  ],
                  "Resource": "ARN_OF_KMS_KEY"
              }
          ]
      }

- Choose Next.
- Enter a policy name. You will use this name to attach the policy to your IAM role.
- Choose Create policy to save the new policy.
To create the export task in step 5, you must be signed in with the AmazonS3ReadOnlyAccess IAM policy, with the IAM policy that you just created, and with the following permissions (see the sketch after this list for attaching the new policy to a role):
- logs:CreateExportTask
- logs:CancelExportTask
- logs:DescribeExportTasks
- logs:DescribeLogStreams
- logs:DescribeLogGroups
To provide access, add permissions to your users, groups, or roles:
- Users and groups in AWS IAM Identity Center: Create a permission set. Follow the instructions in Create a permission set in the AWS IAM Identity Center User Guide.
- Users managed in IAM through an identity provider: Create a role for identity federation. Follow the instructions in Create a role for a third-party identity provider (federation) in the IAM User Guide.
- IAM users:
  - Create a role that your user can assume. Follow the instructions in Create a role for an IAM user in the IAM User Guide.
  - (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in Adding permissions to a user (console) in the IAM User Guide.
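If you script this setup, the following Python (boto3) sketch attaches the policy you created above to the role that will call CreateExportTask. The role name and policy ARN are hypothetical placeholders.

    import boto3

    iam = boto3.client("iam")

    # Hypothetical values: the role that will call CreateExportTask, and the
    # ARN of the customer managed policy created earlier in this step.
    role_name = "CloudWatchLogsExportRole"
    policy_arn = "arn:aws:iam::111111111111:policy/CloudWatchLogsExportPolicy"

    # Attach the policy (s3:PutObject, plus KMS permissions if the destination
    # bucket is encrypted) to the export role.
    iam.attach_role_policy(RoleName=role_name, PolicyArn=policy_arn)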
Step 3: Set permissions on an S3 bucket
By default, all S3 buckets and objects are private. Only the resource owner, the AWS account that created the bucket, can access the bucket and any objects that it contains. However, the resource owner can choose to grant access permissions to other resources and users by writing an access policy.
When you set the policy, we recommend that you include a randomly generated string as the prefix for the bucket, so that only intended log streams are exported to the bucket.
Important
To make exports to S3 buckets more secure, we now require you to specify the list of source accounts that are allowed to export log data to your S3 bucket.
In the following example, the list of account IDs in the aws:SourceAccount key would be the accounts from which a user can export log data to your S3 bucket. The aws:SourceArn key would be the resource for which the action is being taken. You can restrict this to a specific log group, or use a wildcard as shown in this example.
We recommend that you also include the account ID of the account where the S3 bucket is created, to allow export within the same account.
To set permissions on an Amazon S3 bucket
- In the Amazon S3 console, choose the bucket that you created in step 1.
- Choose Permissions, Bucket policy.
- In the Bucket Policy Editor, add the following policy. Change my-exported-logs to the name of your S3 bucket. Be sure to specify the correct Region endpoint, such as us-west-1, for Principal.

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Action": "s3:GetBucketAcl",
                  "Effect": "Allow",
                  "Resource": "arn:aws:s3:::my-exported-logs",
                  "Principal": { "Service": "logs.Region.amazonaws.com" },
                  "Condition": {
                      "StringEquals": {
                          "aws:SourceAccount": [ "AccountId1", "AccountId2", ... ]
                      },
                      "ArnLike": {
                          "aws:SourceArn": [
                              "arn:aws:logs:Region:AccountId1:log-group:*",
                              "arn:aws:logs:Region:AccountId2:log-group:*",
                              ...
                          ]
                      }
                  }
              },
              {
                  "Action": "s3:PutObject",
                  "Effect": "Allow",
                  "Resource": "arn:aws:s3:::my-exported-logs/*",
                  "Principal": { "Service": "logs.Region.amazonaws.com" },
                  "Condition": {
                      "StringEquals": {
                          "s3:x-amz-acl": "bucket-owner-full-control",
                          "aws:SourceAccount": [ "AccountId1", "AccountId2", ... ]
                      },
                      "ArnLike": {
                          "aws:SourceArn": [
                              "arn:aws:logs:Region:AccountId1:log-group:*",
                              "arn:aws:logs:Region:AccountId2:log-group:*",
                              ...
                          ]
                      }
                  }
              },
              {
                  "Effect": "Allow",
                  "Principal": {
                      "AWS": "arn:aws:iam::create_export_task_caller_account:role/role_name"
                  },
                  "Action": "s3:PutObject",
                  "Resource": "arn:aws:s3:::my-exported-logs/*",
                  "Condition": {
                      "StringEquals": {
                          "s3:x-amz-acl": "bucket-owner-full-control"
                      }
                  }
              }
          ]
      }

- Choose Save to set the policy that you just added as the access policy on your bucket. This policy enables CloudWatch Logs to export log data to your S3 bucket. The bucket owner has full permissions on all of the exported objects.
Warning
If the existing bucket already has one or more policies attached to it, add the statements for CloudWatch Logs access to that policy or policies. We recommend that you evaluate the resulting set of permissions to be sure that they're appropriate for the users who will access the bucket.
(Optional) Step 4: Exporting to a bucket encrypted with SSE-KMS
This step is necessary only if you are exporting to an S3 bucket that uses server-side encryption with AWS KMS keys. This encryption is known as SSE-KMS.
To export to a bucket encrypted with SSE-KMS
- Open the AWS KMS console at https://console.aws.amazon.com/kms.
- To change the AWS Region, use the Region selector in the upper-right corner of the page.
- In the left navigation bar, choose Customer managed keys.
- Choose Create Key.
- For Key type, choose Symmetric.
- For Key usage, choose Encrypt and decrypt and then choose Next.
- Under Add labels, enter an alias for the key and optionally add a description or tags. Then choose Next.
- Under Key administrators, select who can administer this key, and then choose Next.
- Under Define key usage permissions, make no changes and choose Next.
- Review the settings and choose Finish.
- Back at the Customer managed keys page, choose the name of the key that you just created.
- Choose the Key policy tab and choose Switch to policy view.
- In the Key policy section, choose Edit.
- Add the following statement to the key policy statement list. When you do, replace Region with the Region of your logs and replace account-ARN with the ARN of the account that owns the KMS key.

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "Allow CWL Service Principal usage",
                  "Effect": "Allow",
                  "Principal": { "Service": "logs.Region.amazonaws.com" },
                  "Action": [
                      "kms:GenerateDataKey",
                      "kms:Decrypt"
                  ],
                  "Resource": "*"
              },
              {
                  "Sid": "Enable IAM User Permissions",
                  "Effect": "Allow",
                  "Principal": { "AWS": "account-ARN" },
                  "Action": [
                      "kms:GetKeyPolicy*",
                      "kms:PutKeyPolicy*",
                      "kms:DescribeKey*",
                      "kms:CreateAlias*",
                      "kms:ScheduleKeyDeletion*",
                      "kms:Decrypt"
                  ],
                  "Resource": "*"
              },
              {
                  "Sid": "Enable IAM Role Permissions",
                  "Effect": "Allow",
                  "Principal": {
                      "AWS": "arn:aws:iam::create_export_task_caller_account:role/role_name"
                  },
                  "Action": [
                      "kms:GenerateDataKey",
                      "kms:Decrypt"
                  ],
                  "Resource": "ARN_OF_KMS_KEY"
              }
          ]
      }

- Choose Save changes.
- Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
- Find the bucket that you created in Step 1: Create an Amazon S3 bucket and choose the bucket name.
- Choose the Properties tab. Then, under Default Encryption, choose Edit.
- Under Server-side Encryption, choose Enable.
- Under Encryption type, choose AWS Key Management Service key (SSE-KMS).
- Choose Choose from your AWS KMS keys and find the key that you created.
- For Bucket key, choose Enable.
- Choose Save changes.
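If you prefer to update the key policy programmatically, the following Python (boto3) sketch reads the current policy, appends the CloudWatch Logs service-principal statement from this step, and writes the merged document back. The key ID and Region are placeholders; PutKeyPolicy replaces the whole policy, which is why the sketch merges rather than overwrites.

    import json
    import boto3

    kms = boto3.client("kms")

    # Placeholder key ID (or ARN) of the customer managed key created above.
    key_id = "EXAMPLE-KEY-ID"

    # Read the current key policy (key policies use the name "default").
    current = json.loads(
        kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"]
    )

    # Append the CloudWatch Logs service-principal statement from this step.
    # Replace the Region with the Region of your logs.
    current["Statement"].append(
        {
            "Sid": "Allow CWL Service Principal usage",
            "Effect": "Allow",
            "Principal": {"Service": "logs.us-west-1.amazonaws.com"},
            "Action": ["kms:GenerateDataKey", "kms:Decrypt"],
            "Resource": "*",
        }
    )

    # PutKeyPolicy replaces the whole policy, so write back the merged document.
    kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(current))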
Step 5: Create an export task
In this step, you create the export task for exporting logs from a log group.
To export data to Amazon S3 using the CloudWatch console
- Sign in with sufficient permissions as documented in Step 2: Set up access permissions.
- Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
- In the navigation pane, choose Log groups.
- On the Log Groups screen, choose the name of the log group.
- Choose Actions, Export data to Amazon S3.
- On the Export data to Amazon S3 screen, under Define data export, set the time range for the data to export using From and To.
- If your log group has multiple log streams, you can provide a log stream prefix to limit the log group data to a specific stream. Choose Advanced, and then for Stream prefix, enter the log stream prefix.
- Under Choose S3 bucket, choose the account associated with the S3 bucket.
- For S3 bucket name, choose an S3 bucket.
- For S3 Bucket prefix, enter the randomly generated string that you specified in the bucket policy.
- Choose Export to export your log data to Amazon S3.
- To view the status of the log data that you exported to Amazon S3, choose Actions and then View all exports to Amazon S3.
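Export tasks complete asynchronously, so in addition to viewing exports in the console you can poll the task status with the DescribeExportTasks API. A minimal Python (boto3) sketch, assuming you saved the task ID returned by CreateExportTask:

    import time
    import boto3

    logs = boto3.client("logs")

    task_id = "example-task-id"  # placeholder: the taskId returned by CreateExportTask

    # Poll until the export task leaves the in-progress states.
    while True:
        task = logs.describe_export_tasks(taskId=task_id)["exportTasks"][0]
        code = task["status"]["code"]
        if code not in ("PENDING", "PENDING_CANCEL", "RUNNING"):
            break
        time.sleep(10)

    print("Export task finished with status:", code)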