Export log data to Amazon S3 using the AWS CLI
In the following example, you use an export task to export all data from a CloudWatch Logs log group named my-log-group to an Amazon S3 bucket named my-exported-logs. This example assumes that you have already created a log group called my-log-group.
Exporting log data to S3 buckets that are encrypted by AWS KMS is supported. Exporting to buckets encrypted with DSSE-KMS is not supported.
The details of how you set up the export depend on whether the Amazon S3 bucket that you want to export to is in the same account as the logs that are being exported, or in a different account.
Same-account export
If the Amazon S3 bucket is in the same account as the logs that are being exported, use the instructions in this section.
Step 1: Create an S3 bucket
We recommend that you use a bucket that was created specifically for CloudWatch Logs. However, if you want to use an existing bucket, you can skip to step 2.
Note
The S3 bucket must reside in the same Region as the log data to export. CloudWatch Logs doesn't support exporting data to S3 buckets in a different Region.
To create an S3 bucket using the AWS CLI
At a command prompt, run the following create-bucket command, where LocationConstraint is the Region where you are exporting log data.

aws s3api create-bucket --bucket my-exported-logs --create-bucket-configuration LocationConstraint=us-east-2

The following is example output.

{
    "Location": "/my-exported-logs"
}
Step 2: Set up access permissions
To create the export task in step 5, you'll need to be signed on with the AmazonS3ReadOnlyAccess IAM role and with the following permissions:
logs:CreateExportTask
logs:CancelExportTask
logs:DescribeExportTasks
logs:DescribeLogStreams
logs:DescribeLogGroups
To provide access, add permissions to your users, groups, or roles:
- Users and groups in AWS IAM Identity Center: Create a permission set. Follow the instructions in Create a permission set in the AWS IAM Identity Center User Guide.
- Users managed in IAM through an identity provider: Create a role for identity federation. Follow the instructions in Create a role for a third-party identity provider (federation) in the IAM User Guide.
- IAM users:
  - Create a role that your user can assume. Follow the instructions in Create a role for an IAM user in the IAM User Guide.
  - (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in Adding permissions to a user (console) in the IAM User Guide.
Step 3: Set permissions on an S3 bucket
By default, all S3 buckets and objects are private. Only the resource owner, the account that created the bucket, can access the bucket and any objects that it contains. However, the resource owner can choose to grant access permissions to other resources and users by writing an access policy.
Important
To make exports to S3 buckets more secure, we now require you to specify the list of source accounts that are allowed to export log data to your S3 bucket.
In the following example, the list of account IDs in the aws:SourceAccount key would be the accounts from which a user can export log data to your S3 bucket. The aws:SourceArn key would be the resource for which the action is being taken. You may restrict this to a specific log group, or use a wildcard as shown in this example.

We recommend that you also include the account ID of the account where the S3 bucket is created, to allow export within the same account.
To set permissions on an S3 bucket
- Create a file named policy.json and add the following access policy, changing my-exported-logs to the name of your S3 bucket and Principal to the endpoint of the Region where you are exporting log data, such as us-west-1. Use a text editor to create this policy file. Don't use the IAM console.

  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Action": "s3:GetBucketAcl",
              "Effect": "Allow",
              "Resource": "arn:aws:s3:::my-exported-logs",
              "Principal": { "Service": "logs.Region.amazonaws.com" },
              "Condition": {
                  "StringEquals": {
                      "aws:SourceAccount": [ "AccountId1", "AccountId2", ... ]
                  },
                  "ArnLike": {
                      "aws:SourceArn": [
                          "arn:aws:logs:Region:AccountId1:log-group:*",
                          "arn:aws:logs:Region:AccountId2:log-group:*",
                          ...
                      ]
                  }
              }
          },
          {
              "Action": "s3:PutObject",
              "Effect": "Allow",
              "Resource": "arn:aws:s3:::my-exported-logs/*",
              "Principal": { "Service": "logs.Region.amazonaws.com" },
              "Condition": {
                  "StringEquals": {
                      "s3:x-amz-acl": "bucket-owner-full-control",
                      "aws:SourceAccount": [ "AccountId1", "AccountId2", ... ]
                  },
                  "ArnLike": {
                      "aws:SourceArn": [
                          "arn:aws:logs:Region:AccountId1:log-group:*",
                          "arn:aws:logs:Region:AccountId2:log-group:*",
                          ...
                      ]
                  }
              }
          }
      ]
  }

- Set the policy that you just added as the access policy on your bucket by using the put-bucket-policy command. This policy enables CloudWatch Logs to export log data to your S3 bucket. The bucket owner will have full permissions on all of the exported objects.
aws s3api put-bucket-policy --bucket my-exported-logs --policy file://policy.json
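If you want to confirm the policy took effect, one way is to read it back and check for the CloudWatch Logs service principal. This is a sketch, assuming the example bucket name and the us-east-2 Region used elsewhere in this topic:

```shell
# Fetch the bucket policy (returned as a JSON string in the "Policy"
# field) and confirm the CloudWatch Logs principal is present.
aws s3api get-bucket-policy --bucket my-exported-logs \
  --query Policy --output text \
  | grep -q 'logs\.us-east-2\.amazonaws\.com' \
  && echo "CloudWatch Logs principal found"
```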
Warning
If the existing bucket already has one or more policies attached to it, add the statements for CloudWatch Logs access to that policy or policies. We recommend that you evaluate the resulting set of permissions to be sure that they're appropriate for the users who will access the bucket.
(Optional) Step 4: Exporting to a bucket encrypted with SSE-KMS
This step is necessary only if you are exporting to an S3 bucket that uses server-side encryption with AWS KMS keys. This encryption is known as SSE-KMS.
To export to a bucket encrypted with SSE-KMS
- Use a text editor to create a file named key_policy.json and add the following access policy. When you add the policy, make the following changes:
  - Replace Region with the Region of your logs.
  - Replace account-ARN with the ARN of the account that owns the KMS key.

  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "Allow CWL Service Principal usage",
              "Effect": "Allow",
              "Principal": { "Service": "logs.Region.amazonaws.com" },
              "Action": [ "kms:GenerateDataKey", "kms:Decrypt" ],
              "Resource": "*"
          },
          {
              "Sid": "Enable IAM User Permissions",
              "Effect": "Allow",
              "Principal": { "AWS": "account-ARN" },
              "Action": [
                  "kms:GetKeyPolicy*",
                  "kms:PutKeyPolicy*",
                  "kms:DescribeKey*",
                  "kms:CreateAlias*",
                  "kms:ScheduleKeyDeletion*",
                  "kms:Decrypt"
              ],
              "Resource": "*"
          }
      ]
  }

- Enter the following command:
aws kms create-key --policy file://key_policy.json
The following is example output from this command:
{ "KeyMetadata": { "AWSAccountId": "
account_id
", "KeyId": "key_id
", "Arn": "arn:aws:kms:us-east-2:account_id
:key/key_id
", "CreationDate": "time
", "Enabled": true, "Description": "", "KeyUsage": "ENCRYPT_DECRYPT", "KeyState": "Enabled", "Origin": "AWS_KMS", "KeyManager": "CUSTOMER", "CustomerMasterKeySpec": "SYMMETRIC_DEFAULT", "KeySpec": "SYMMETRIC_DEFAULT", "EncryptionAlgorithms": [ "SYMMETRIC_DEFAULT" ], "MultiRegion": false } -
Use a text editor to create a file called
bucketencryption.json
with the following contents.{ "Rules": [ { "ApplyServerSideEncryptionByDefault": { "SSEAlgorithm": "aws:kms", "KMSMasterKeyID": "{KMS Key ARN}" }, "BucketKeyEnabled": true } ] }
- Enter the following command, replacing bucket-name with the name of the bucket that you are exporting logs to.

  aws s3api put-bucket-encryption --bucket bucket-name --server-side-encryption-configuration file://bucketencryption.json

  If the command doesn't return an error, the process is successful.
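You can also read the configuration back to confirm that default encryption is now aws:kms. A sketch, assuming the same bucket-name placeholder:

```shell
# Report the default encryption algorithm now configured on the bucket.
# A correctly configured bucket prints: aws:kms
aws s3api get-bucket-encryption --bucket bucket-name \
  --query 'ServerSideEncryptionConfiguration.Rules[0].ApplyServerSideEncryptionByDefault.SSEAlgorithm' \
  --output text
```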
Step 5: Create an export task
Use the following command to create the export task. After you create it, the export task might take anywhere from a few seconds to a few hours, depending on the size of the data to export.
To export data to Amazon S3 using the AWS CLI
- Sign in with sufficient permissions as documented in Step 2: Set up access permissions.
- At a command prompt, use the following create-export-task command to create the export task.

  aws logs create-export-task --profile CWLExportUser --task-name "my-log-group-09-10-2015" --log-group-name "my-log-group" --from 1441490400000 --to 1441494000000 --destination "my-exported-logs" --destination-prefix "export-task-output"

  The following is example output.

  {
      "taskId": "cda45419-90ea-4db5-9833-aade86253e66"
  }
Cross-account export
If the Amazon S3 bucket is in a different account than the logs that are being exported, use the instructions in this section.
Step 1: Create an S3 bucket
We recommend that you use a bucket that was created specifically for CloudWatch Logs. However, if you want to use an existing bucket, you can skip to step 2.
Note
The S3 bucket must reside in the same Region as the log data to export. CloudWatch Logs doesn't support exporting data to S3 buckets in a different Region.
To create an S3 bucket using the AWS CLI
At a command prompt, run the following create-bucket command, where LocationConstraint is the Region where you are exporting log data.

aws s3api create-bucket --bucket my-exported-logs --create-bucket-configuration LocationConstraint=us-east-2

The following is example output.

{
    "Location": "/my-exported-logs"
}
Step 2: Set up access permissions
First, you must create a new IAM policy to enable CloudWatch Logs to have the s3:PutObject permission for the destination Amazon S3 bucket.

To create the export task in step 5, you'll need to be signed on with the AmazonS3ReadOnlyAccess IAM role and with certain other permissions. You can create a policy that contains some of these other necessary permissions.

The policy that you create depends on whether the destination bucket uses AWS KMS encryption. If it does not use AWS KMS encryption, create a policy with the following contents.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::
my-exported-logs
/*" } ] }
If the destination bucket uses AWS KMS encryption, create a policy with the following contents.
{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::
my-exported-logs
/*" }, { "Effect": "Allow", "Action": [ "kms:GenerateDataKey", "kms:Decrypt" ], "Resource": "ARN_OF_KMS_KEY
" } ] }
To create the export task in step 5, you must be signed on with the AmazonS3ReadOnlyAccess IAM role, the IAM policy that you just created, and also with the following permissions:
logs:CreateExportTask
logs:CancelExportTask
logs:DescribeExportTasks
logs:DescribeLogStreams
logs:DescribeLogGroups
To provide access, add permissions to your users, groups, or roles:
- Users and groups in AWS IAM Identity Center: Create a permission set. Follow the instructions in Create a permission set in the AWS IAM Identity Center User Guide.
- Users managed in IAM through an identity provider: Create a role for identity federation. Follow the instructions in Create a role for a third-party identity provider (federation) in the IAM User Guide.
- IAM users:
  - Create a role that your user can assume. Follow the instructions in Create a role for an IAM user in the IAM User Guide.
  - (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in Adding permissions to a user (console) in the IAM User Guide.
Step 3: Set permissions on an S3 bucket
By default, all S3 buckets and objects are private. Only the resource owner, the account that created the bucket, can access the bucket and any objects that it contains. However, the resource owner can choose to grant access permissions to other resources and users by writing an access policy.
Important
To make exports to S3 buckets more secure, we now require you to specify the list of source accounts that are allowed to export log data to your S3 bucket.
In the following example, the list of account IDs in the aws:SourceAccount key would be the accounts from which a user can export log data to your S3 bucket. The aws:SourceArn key would be the resource for which the action is being taken. You may restrict this to a specific log group, or use a wildcard as shown in this example.

We recommend that you also include the account ID of the account where the S3 bucket is created, to allow export within the same account.
To set permissions on an S3 bucket
- Create a file named policy.json and add the following access policy, changing my-exported-logs to the name of your S3 bucket and Principal to the endpoint of the Region where you are exporting log data, such as us-west-1. Use a text editor to create this policy file. Don't use the IAM console.

  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Action": "s3:GetBucketAcl",
              "Effect": "Allow",
              "Resource": "arn:aws:s3:::my-exported-logs",
              "Principal": { "Service": "logs.Region.amazonaws.com" },
              "Condition": {
                  "StringEquals": {
                      "aws:SourceAccount": [ "AccountId1", "AccountId2", ... ]
                  },
                  "ArnLike": {
                      "aws:SourceArn": [
                          "arn:aws:logs:Region:AccountId1:log-group:*",
                          "arn:aws:logs:Region:AccountId2:log-group:*",
                          ...
                      ]
                  }
              }
          },
          {
              "Action": "s3:PutObject",
              "Effect": "Allow",
              "Resource": "arn:aws:s3:::my-exported-logs/*",
              "Principal": { "Service": "logs.Region.amazonaws.com" },
              "Condition": {
                  "StringEquals": {
                      "s3:x-amz-acl": "bucket-owner-full-control",
                      "aws:SourceAccount": [ "AccountId1", "AccountId2", ... ]
                  },
                  "ArnLike": {
                      "aws:SourceArn": [
                          "arn:aws:logs:Region:AccountId1:log-group:*",
                          "arn:aws:logs:Region:AccountId2:log-group:*",
                          ...
                      ]
                  }
              }
          },
          {
              "Effect": "Allow",
              "Principal": {
                  "AWS": "arn:aws:iam::create_export_task_caller_account:role/role_name"
              },
              "Action": "s3:PutObject",
              "Resource": "arn:aws:s3:::my-exported-logs/*",
              "Condition": {
                  "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }
              }
          }
      ]
  }

- Set the policy that you just added as the access policy on your bucket by using the put-bucket-policy command. This policy enables CloudWatch Logs to export log data to your S3 bucket. The bucket owner will have full permissions on all of the exported objects.
aws s3api put-bucket-policy --bucket my-exported-logs --policy file://policy.json
Warning
If the existing bucket already has one or more policies attached to it, add the statements for CloudWatch Logs access to that policy or policies. We recommend that you evaluate the resulting set of permissions to be sure that they're appropriate for the users who will access the bucket.
(Optional) Step 4: Exporting to a bucket encrypted with SSE-KMS
This step is necessary only if you are exporting to an S3 bucket that uses server-side encryption with AWS KMS keys. This encryption is known as SSE-KMS.
To export to a bucket encrypted with SSE-KMS
- Use a text editor to create a file named key_policy.json and add the following access policy. When you add the policy, make the following changes:
  - Replace Region with the Region of your logs.
  - Replace account-ARN with the ARN of the account that owns the KMS key.

  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "Allow CWL Service Principal usage",
              "Effect": "Allow",
              "Principal": { "Service": "logs.Region.amazonaws.com" },
              "Action": [ "kms:GenerateDataKey", "kms:Decrypt" ],
              "Resource": "*"
          },
          {
              "Sid": "Enable IAM User Permissions",
              "Effect": "Allow",
              "Principal": { "AWS": "account-ARN" },
              "Action": [
                  "kms:GetKeyPolicy*",
                  "kms:PutKeyPolicy*",
                  "kms:DescribeKey*",
                  "kms:CreateAlias*",
                  "kms:ScheduleKeyDeletion*",
                  "kms:Decrypt"
              ],
              "Resource": "*"
          },
          {
              "Sid": "Enable IAM Role Permissions",
              "Effect": "Allow",
              "Principal": {
                  "AWS": "arn:aws:iam::create_export_task_caller_account:role/role_name"
              },
              "Action": [ "kms:GenerateDataKey", "kms:Decrypt" ],
              "Resource": "ARN_OF_KMS_KEY"
          }
      ]
  }

- Enter the following command:
aws kms create-key --policy file://key_policy.json
The following is example output from this command:
{ "KeyMetadata": { "AWSAccountId": "
account_id
", "KeyId": "key_id
", "Arn": "arn:aws:kms:us-east-2:account_id
:key/key_id
", "CreationDate": "time
", "Enabled": true, "Description": "", "KeyUsage": "ENCRYPT_DECRYPT", "KeyState": "Enabled", "Origin": "AWS_KMS", "KeyManager": "CUSTOMER", "CustomerMasterKeySpec": "SYMMETRIC_DEFAULT", "KeySpec": "SYMMETRIC_DEFAULT", "EncryptionAlgorithms": [ "SYMMETRIC_DEFAULT" ], "MultiRegion": false } -
Use a text editor to create a file called
bucketencryption.json
with the following contents.{ "Rules": [ { "ApplyServerSideEncryptionByDefault": { "SSEAlgorithm": "aws:kms", "KMSMasterKeyID": "{KMS Key ARN}" }, "BucketKeyEnabled": true } ] }
- Enter the following command, replacing bucket-name with the name of the bucket that you are exporting logs to.

  aws s3api put-bucket-encryption --bucket bucket-name --server-side-encryption-configuration file://bucketencryption.json

  If the command doesn't return an error, the process is successful.
Step 5: Create an export task
Use the following command to create the export task. After you create it, the export task might take anywhere from a few seconds to a few hours, depending on the size of the data to export.
To export data to Amazon S3 using the AWS CLI
- Sign in with sufficient permissions as documented in Step 2: Set up access permissions.
- At a command prompt, use the following create-export-task command to create the export task.

  aws logs create-export-task --profile CWLExportUser --task-name "my-log-group-09-10-2015" --log-group-name "my-log-group" --from 1441490400000 --to 1441494000000 --destination "my-exported-logs" --destination-prefix "export-task-output"

  The following is example output.

  {
      "taskId": "cda45419-90ea-4db5-9833-aade86253e66"
  }