

# Access logs (standard logs)
<a name="AccessLogs"></a>

You can configure CloudFront to create log files that contain detailed information about every user (viewer) request that CloudFront receives. These are called *access logs*, also known as *standard logs*. 

Each log contains information such as the time the request was received, the processing time, request paths, and server responses. You can use these access logs to analyze response times and to troubleshoot issues.

The following diagram shows how CloudFront logs information about requests for your objects. In this example, the distributions are configured to send access logs to an Amazon S3 bucket.

![\[Basic flow for access logs\]](http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/images/Logging.png)


1. In this example, you have two websites, A and B, and two corresponding CloudFront distributions. Users request your objects using URLs that are associated with your distributions.

1. CloudFront routes each request to the appropriate edge location.

1. CloudFront writes data about each request to a log file specific to that distribution. In this example, information about requests related to Distribution A goes into a log file for Distribution A. Information about requests related to Distribution B goes into a log file for Distribution B.

1. CloudFront periodically saves the log file for a distribution in the Amazon S3 bucket that you specified when you enabled logging. CloudFront then starts saving information about subsequent requests in a new log file for the distribution.

   If viewers don't access your content during a given hour, you don't receive any log files for that hour.

**Note**  
We recommend that you use the logs to understand the nature of the requests for your content, not as a complete accounting of all requests. CloudFront delivers access logs on a best-effort basis. The log entry for a particular request might be delivered long after the request was actually processed and, in rare cases, a log entry might not be delivered at all. When a log entry is omitted from access logs, the number of entries in the access logs won't match the usage that appears in the AWS billing and usage reports.

CloudFront supports two versions of standard logging. Standard logging (legacy) supports sending your access logs to Amazon S3 *only*. Standard logging (v2) supports additional delivery destinations. You can configure either or both logging options for your distribution. For more information, see the following topics:

**Topics**
+ [Configure standard logging (v2)](standard-logging.md)
+ [Configure standard logging (legacy)](standard-logging-legacy-s3.md)
+ [Standard logging reference](standard-logs-reference.md)

**Tip**  
CloudFront also offers real-time access logs, which give you information about requests made to a distribution in real time (logs are delivered within seconds of receiving the requests). You can use real-time access logs to monitor, analyze, and take action based on content delivery performance. For more information, see [Use real-time access logs](real-time-logs.md).

# Configure standard logging (v2)
<a name="standard-logging"></a>

You can enable access logs (standard logs) when you create or update a distribution. Standard logging (v2) includes the following features:
+ Send access logs to Amazon CloudWatch Logs, Amazon Data Firehose, and Amazon Simple Storage Service (Amazon S3).
+ Select the log fields that you want. You can also select a [subset of real-time access log fields](#standard-logging-real-time-log-selection).
+ Select additional [output log file formats](#supported-log-file-format).

If you’re using Amazon S3, you have the following optional features:
+ Send logs to opt-in AWS Regions.
+ Organize your logs with partitioning.
+ Enable Hive-compatible file names.

For more information, see [Send logs to Amazon S3](#send-logs-s3).

To get started with standard logging, complete the following steps:

1. Set up your required permissions for the specified AWS service that will receive your logs.

1. Configure standard logging from the CloudFront console or the CloudWatch API.

1. View your access logs.

**Note**  
If you enable standard logging (v2), this doesn’t affect or change standard logging (legacy). You can continue to use standard logging (legacy) for your distribution, in addition to using standard logging (v2). For more information, see [Configure standard logging (legacy)](standard-logging-legacy-s3.md).
If you already enabled standard logging (legacy) and you want to enable standard logging (v2) to Amazon S3, we recommend that you specify a *different* Amazon S3 bucket or use a *separate path* in the same bucket (for example, use a log prefix or partitioning). This helps you keep track of which log files are associated with which distribution and prevents log files from overwriting each other.

## Permissions
<a name="permissions-standard-logging"></a>

CloudFront uses CloudWatch vended logs to deliver access logs. To enable log delivery, you must have the required permissions for the AWS service that will receive your logs.

To see the required permissions for each logging destination, choose from the following topics in the *Amazon CloudWatch Logs User Guide*.
+ [CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-V2-CloudWatchLogs)
+ [Firehose](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-V2-Firehose)
+ [Amazon S3](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-V2-S3)

After you have set up permissions to your logging destination, you can enable standard logging for your distribution.

**Note**  
CloudFront supports sending access logs to different AWS accounts (cross accounts). To enable cross-account delivery, both accounts (your account and the receiving account) must have the required permissions. For more information, see the [Enable standard logging for cross-account delivery](#enable-standard-logging-cross-accounts) section or the [Cross-account delivery example](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#vended-logs-crossaccount-example) in the *Amazon CloudWatch Logs User Guide*.

## Enable standard logging
<a name="set-up-standard-logging"></a>

To enable standard logging, you can use the CloudFront console or the CloudWatch API.

**Contents**
+ [Enable standard logging (CloudFront console)](#access-logging-console)
+ [Enable standard logging (CloudWatch API)](#enable-access-logging-api)

### Enable standard logging (CloudFront console)
<a name="access-logging-console"></a>

**To enable standard logging for a CloudFront distribution (console)**

1. Use the CloudFront console to [update an existing distribution](HowToUpdateDistribution.md#HowToUpdateDistributionProcedure).

1. Choose the **Logging** tab.

1. Choose **Add**, then select the service to receive your logs:
   + CloudWatch Logs
   + Firehose
   + Amazon S3

1. For the **Destination**, select the resource for your service. If you haven’t already created your resource, you can choose **Create** or see the following documentation.
   + For CloudWatch Logs, enter the **[Log group name](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html)**.
   + For Firehose, enter the **[Firehose delivery stream](https://docs.aws.amazon.com/firehose/latest/dev/basic-create.html)**.
   + For Amazon S3, enter the **[Bucket name](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html)**.
**Tip**  
To specify a prefix, enter the prefix after the bucket name, such as `amzn-s3-demo-bucket.s3.amazonaws.com/MyLogPrefix`. If you don't specify a prefix, CloudFront will automatically add one for you. For more information, see [Send logs to Amazon S3](#send-logs-s3).

1. For **Additional settings – *optional***, you can specify the following options:

   1. For **Field selection**, select the log field names that you want to deliver to your destination. You can select [access log fields](standard-logs-reference.md#BasicDistributionFileFormat) and a subset of [real-time access log fields](#standard-logging-real-time-log-selection).

   1. (Amazon S3 only) For **Partitioning**, specify the path to partition your log file data. 

   1. (Amazon S3 only) For **Hive-compatible file format**, you can select the checkbox to use Hive-compatible S3 paths. This helps simplify loading new data into your Hive-compatible tools.

   1. For **Output format**, specify your preferred format.
**Note**  
If you choose **Parquet**, this option incurs CloudWatch charges for converting your access logs to Apache Parquet. For more information, see the [Vended Logs section for CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/).

   1. For **Field delimiter**, specify how to separate log fields. 

1. Complete the steps to update or create your distribution.

1. To add another destination, repeat steps 3–6.

1. From the **Logs** page, verify that the standard logs status is **Enabled** next to the distribution.

1. (Optional) To enable cookie logging, choose **Manage**, **Settings** and turn on **Cookie logging**, then choose **Save changes**.
**Tip**  
Cookie logging is a global setting that applies to *all* standard logging for your distribution. You can’t override this setting for separate delivery destinations.

For more information about the standard logging delivery and log fields, see the [Standard logging reference](standard-logs-reference.md).

### Enable standard logging (CloudWatch API)
<a name="enable-access-logging-api"></a>

You can also use the CloudWatch API to enable standard logging for your distributions. 

**Notes**  
When calling the CloudWatch API to enable standard logging, you must specify the US East (N. Virginia) Region (`us-east-1`), even if you want to enable cross-Region delivery to another destination. For example, if you want to send your access logs to an S3 bucket in the Europe (Ireland) Region (`eu-west-1`), use the CloudWatch API in the `us-east-1` Region.
Standard logging also has an option to include cookies. In the CloudFront API, this is the `IncludeCookies` parameter. If you configure access logging by using the CloudWatch API and you want to include cookies, you must also use the CloudFront console or CloudFront API to update your distribution to include cookies. Otherwise, CloudFront can’t send cookies to your log destination. For more information, see [Cookie logging](DownloadDistValuesGeneral.md#DownloadDistValuesCookieLogging).

**To enable standard logging for a distribution (CloudWatch API)**

1. After you create a distribution, get its Amazon Resource Name (ARN). 

   You can find the ARN on the **Distributions** page in the CloudFront console, or you can use the [GetDistribution](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_GetDistribution.html) API operation. A distribution ARN follows the format: `arn:aws:cloudfront::123456789012:distribution/d111111abcdef8`

1. Next, use the CloudWatch [PutDeliverySource](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliverySource.html) API operation to create a delivery source for the distribution.

   1. Enter a name for the delivery source.

   1. Pass the `resourceArn` of the distribution. 

   1. For `logType`, specify `ACCESS_LOGS` as the type of logs that are collected. 

   1.   
**Example AWS CLI put-delivery-source command**  

      The following is an example of configuring a delivery source for a distribution.

      ```
      aws logs put-delivery-source --name S3-delivery --resource-arn arn:aws:cloudfront::123456789012:distribution/d111111abcdef8 --log-type ACCESS_LOGS
      ```

      **Output**

      ```
      {
          "deliverySource": {
              "name": "S3-delivery",
              "arn": "arn:aws:logs:us-east-1:123456789012:delivery-source:S3-delivery",
              "resourceArns": [
                  "arn:aws:cloudfront::123456789012:distribution/d111111abcdef8"
              ],
              "service": "cloudfront",
              "logType": "ACCESS_LOGS"
          }
      }
      ```

1. Use the [PutDeliveryDestination](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html) API operation to configure where to store your logs.

   1. For `destinationResourceArn`, specify the ARN of the destination. This can be a CloudWatch Logs log group, a Firehose delivery stream, or an Amazon S3 bucket.

   1. For `outputFormat`, specify the output format for your logs.

   1.   
**Example AWS CLI put-delivery-destination command**  

      The following is an example of configuring a delivery destination to an Amazon S3 bucket.

      ```
      aws logs put-delivery-destination --name S3-destination --delivery-destination-configuration destinationResourceArn=arn:aws:s3:::amzn-s3-demo-bucket
      ```

      **Output**

      ```
      {
          "name": "S3-destination",
          "arn": "arn:aws:logs:us-east-1:123456789012:delivery-destination:S3-destination",
          "deliveryDestinationType": "S3",
          "deliveryDestinationConfiguration": {
              "destinationResourceArn": "arn:aws:s3:::amzn-s3-demo-bucket"
          }
      }
      ```
**Note**  
If you're delivering logs cross-account, you must use the [PutDeliveryDestinationPolicy](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestinationPolicy.html) API operation to assign an AWS Identity and Access Management (IAM) policy to the destination account. The IAM policy allows delivery from one account to another account.

1. Use the [CreateDelivery](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateDelivery.html) API operation to link the delivery source to the destination that you created in the previous steps. This API operation associates the delivery source with the end destination.

   1. For `deliverySourceName`, specify the source name.

   1. For `deliveryDestinationArn`, specify the ARN for the delivery destination.

   1. For `fieldDelimiter`, specify the string to separate each log field.

   1. For `recordFields`, specify the log fields that you want.

   1. If you’re using Amazon S3, you can also specify `enableHiveCompatiblePath` and `suffixPath`.  
**Example AWS CLI create-delivery command**  

   The following is an example of creating a delivery. 

   ```
   aws logs create-delivery --delivery-source-name cf-delivery --delivery-destination-arn arn:aws:logs:us-east-1:123456789012:delivery-destination:S3-destination
   ```

   **Output**

   ```
   {
       "id": "abcNegnBoTR123",
       "arn": "arn:aws:logs:us-east-1:123456789012:delivery:abcNegnBoTR123",
       "deliverySourceName": "cf-delivery",
       "deliveryDestinationArn": "arn:aws:logs:us-east-1:123456789012:delivery-destination:S3-destination",
       "deliveryDestinationType": "S3",
       "recordFields": [
           "date",
           "time",
           "x-edge-location",
           "sc-bytes",
           "c-ip",
           "cs-method",
           "cs(Host)",
           "cs-uri-stem",
           "sc-status",
           "cs(Referer)",
           "cs(User-Agent)",
           "cs-uri-query",
           "cs(Cookie)",
           "x-edge-result-type",
           "x-edge-request-id",
           "x-host-header",
           "cs-protocol",
           "cs-bytes",
           "time-taken",
           "x-forwarded-for",
           "ssl-protocol",
           "ssl-cipher",
           "x-edge-response-result-type",
           "cs-protocol-version",
           "fle-status",
           "fle-encrypted-fields",
           "c-port",
           "time-to-first-byte",
           "x-edge-detailed-result-type",
           "sc-content-type",
           "sc-content-len",
           "sc-range-start",
           "sc-range-end",
           "c-country",
           "cache-behavior-path-pattern"
       ],
        "fieldDelimiter": ""
   }
   ```

1. From the CloudFront console, on the **Logs** page, verify that the standard logs status is **Enabled** next to the distribution.

   For more information about the standard logging delivery and log fields, see the [Standard logging reference](standard-logs-reference.md).

**Note**  
To enable standard logging (v2) for CloudFront by using AWS CloudFormation, you can use the following CloudWatch Logs resources:  
[Delivery](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-delivery.html)
[DeliveryDestination](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-deliverydestination.html)
[DeliverySource](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-deliverysource.html)
The `ResourceArn` is the ARN of your CloudFront distribution, and `LogType` must be `ACCESS_LOGS`.

## Enable standard logging for cross-account delivery
<a name="enable-standard-logging-cross-accounts"></a>

If you enable standard logging for your AWS account and you want to deliver your access logs to another account, make sure that you configure the source and destination accounts correctly. The *source account* with the CloudFront distribution sends its access logs to the *destination account*.

In this example procedure, the source account (*111111111111*) sends its access logs to an Amazon S3 bucket in the destination account (*222222222222*). To send access logs to an Amazon S3 bucket in the destination account, use the AWS CLI.

### Configure the destination account
<a name="steps-destination-account"></a>

For the destination account, complete the following procedure.

**To configure the destination account**

1. To create the log delivery destination, you can enter the following AWS CLI command. This example uses the `MyLogPrefix` string to create a prefix for your access logs.

   ```
   aws logs put-delivery-destination --name cloudfront-delivery-destination --delivery-destination-configuration "destinationResourceArn=arn:aws:s3:::amzn-s3-demo-bucket-cloudfront-logs/MyLogPrefix"
   ```

   **Output**

   ```
   {
       "deliveryDestination": {
           "name": "cloudfront-delivery-destination",
           "arn": "arn:aws:logs:us-east-1:222222222222:delivery-destination:cloudfront-delivery-destination",
           "deliveryDestinationType": "S3",
           "deliveryDestinationConfiguration": {"destinationResourceArn": "arn:aws:s3:::amzn-s3-demo-bucket-cloudfront-logs/MyLogPrefix"}
       }
   }
   ```
**Note**  
If you specify an S3 bucket *without* a prefix, CloudFront automatically appends `AWSLogs/<account-ID>/CloudFront` as a prefix, which appears in the `suffixPath` of the S3 delivery destination. For more information, see [S3DeliveryConfiguration](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_S3DeliveryConfiguration.html).

1. Add the resource policy for the log delivery destination to allow the source account to create a log delivery.

   In the following policy, replace *111111111111* with the source account ID and specify the delivery destination ARN from the output in step 1. 

   ```
   {
        "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "AllowCreateDelivery",
               "Effect": "Allow",
               "Principal": {"AWS": "111111111111"},
               "Action": ["logs:CreateDelivery"],
               "Resource": "arn:aws:logs:us-east-1:222222222222:delivery-destination:cloudfront-delivery-destination"
           }
       ]
   }
   ```


1. Save the policy to a file, such as `deliverypolicy.json`.

1. To attach the previous policy to the delivery destination, enter the following AWS CLI command.

   ```
   aws logs put-delivery-destination-policy --delivery-destination-name cloudfront-delivery-destination --delivery-destination-policy file://deliverypolicy.json
   ```

1. Add the following statement to the destination Amazon S3 bucket policy, replacing the resource ARN and the source account ID. This policy allows the `delivery.logs.amazonaws.com` service principal to perform the `s3:PutObject` action.

   ```
   {
       "Sid": "AWSLogsDeliveryWrite",
       "Effect": "Allow",
        "Principal": {"Service": "delivery.logs.amazonaws.com"},
       "Action": "s3:PutObject",
       "Resource": "arn:aws:s3:::amzn-s3-demo-bucket-cloudfront-logs/*",
       "Condition": {
           "StringEquals": {
               "s3:x-amz-acl": "bucket-owner-full-control",
               "aws:SourceAccount": "111111111111"
           },
           "ArnLike": {"aws:SourceArn": "arn:aws:logs:us-east-1:111111111111:delivery-source:*"}
       }
   }
   ```

1. If you're using AWS KMS for your bucket, add the following statement to the KMS key policy to grant permissions to the `delivery.logs.amazonaws.com` service principal.

   ```
   {
       "Sid": "Allow Logs Delivery to use the key",
       "Effect": "Allow",
        "Principal": {"Service": "delivery.logs.amazonaws.com"},
       "Action": [
           "kms:Encrypt",
           "kms:Decrypt",
           "kms:ReEncrypt*",
           "kms:GenerateDataKey*",
           "kms:DescribeKey"
       ],
       "Resource": "*",
       "Condition": {
           "StringEquals": {"aws:SourceAccount": "111111111111"},
           "ArnLike": {"aws:SourceArn": "arn:aws:logs:us-east-1:111111111111:delivery-source:*"}
       }
   }
   ```

### Configure the source account
<a name="steps-source-account"></a>

After you configure the destination account, follow this procedure to create the delivery source and enable logging for the distribution in the source account.

**To configure the source account**

1. Create a delivery source for CloudFront standard logging so that CloudWatch vended logs can deliver your log files.

   You can enter the following AWS CLI command, replacing the name and your distribution ARN.

   ```
   aws logs put-delivery-source --name s3-cf-delivery --resource-arn arn:aws:cloudfront::111111111111:distribution/E1TR1RHV123ABC --log-type ACCESS_LOGS
   ```

   **Output**

   ```
   {
       "deliverySource": {
           "name": "s3-cf-delivery",
           "arn": "arn:aws:logs:us-east-1:111111111111:delivery-source:s3-cf-delivery",
           "resourceArns": ["arn:aws:cloudfront::111111111111:distribution/E1TR1RHV123ABC"],
           "service": "cloudfront",
           "logType": "ACCESS_LOGS"
       }
   }
   ```

1. Create a delivery to map the source account's log delivery source and the destination account's log delivery destination.

   In the following AWS CLI command, specify the delivery destination ARN from the output of step 1 in [Configure the destination account](#steps-destination-account).

   ```
   aws logs create-delivery --delivery-source-name s3-cf-delivery --delivery-destination-arn arn:aws:logs:us-east-1:222222222222:delivery-destination:cloudfront-delivery-destination
   ```

   **Output**

   ```
   {
       "delivery": {
           "id": "OPmOpLahVzhx1234",
           "arn": "arn:aws:logs:us-east-1:111111111111:delivery:OPmOpLahVzhx1234",
           "deliverySourceName": "s3-cf-delivery",
           "deliveryDestinationArn": "arn:aws:logs:us-east-1:222222222222:delivery-destination:cloudfront-delivery-destination",
           "deliveryDestinationType": "S3",
           "recordFields": [
               "date",
               "time",
               "x-edge-location",
               "sc-bytes",
               "c-ip",
               "cs-method",
               "cs(Host)",
               "cs-uri-stem",
               "sc-status",
               "cs(Referer)",
               "cs(User-Agent)",
               "cs-uri-query",
               "cs(Cookie)",
               "x-edge-result-type",
               "x-edge-request-id",
               "x-host-header",
               "cs-protocol",
               "cs-bytes",
               "time-taken",
               "x-forwarded-for",
               "ssl-protocol",
               "ssl-cipher",
               "x-edge-response-result-type",
               "cs-protocol-version",
               "fle-status",
               "fle-encrypted-fields",
               "c-port",
               "time-to-first-byte",
               "x-edge-detailed-result-type",
               "sc-content-type",
               "sc-content-len",
               "sc-range-start",
               "sc-range-end",
               "c-country",
               "cache-behavior-path-pattern"
           ],
           "fieldDelimiter": "\t"
       }
   }
   ```

1. Verify your cross-account delivery is successful.

   1. From the *source* account, sign in to the CloudFront console and choose your distribution. On the **Logging** tab, under **Type**, you will see an entry created for the S3 cross-account log delivery.

   1. From the *destination* account, sign in to the Amazon S3 console and choose your Amazon S3 bucket. You will see the prefix `MyLogPrefix` in the bucket and any access logs delivered under that prefix.

## Output file format
<a name="supported-log-file-format"></a>

Depending on the delivery destination that you choose, you can specify one of the following formats for log files:
+ JSON
+ Plain
+ W3C
+ Raw
+ Parquet (Amazon S3 only)

**Note**  
You can only set the output format when you first create the delivery destination. This can't be updated later. To change the output format, delete the delivery and create another one.

For more information, see [PutDeliveryDestination](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html) in the *Amazon CloudWatch Logs API Reference*.

## Edit standard logging settings
<a name="standard-logs-v2-edit-settings"></a>

You can enable or disable logging and update other log settings by using the [CloudFront console](https://console.aws.amazon.com/cloudfront/v4/home) or the CloudWatch API. Your changes to logging settings take effect within 12 hours.

For more information, see the following topics:
+ To update a distribution by using the CloudFront console, see [Update a distribution](HowToUpdateDistribution.md).
+ To update a distribution by using the CloudFront API, see [UpdateDistribution](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_UpdateDistribution.html) in the *Amazon CloudFront API Reference*.
+ For more information about CloudWatch Logs API operations, see the [Amazon CloudWatch Logs API Reference](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/Welcome.html).

## Access log fields
<a name="standard-logging-real-time-log-selection"></a>

You can select the same log fields that standard logging (legacy) supports. For more information, see [log file fields](standard-logs-reference.md#BasicDistributionFileFormat).

In addition, you can select the following [real-time access log fields](real-time-logs.md#understand-real-time-log-config).

1. `timestamp(ms)` – Timestamp in milliseconds.

1. `origin-fbl` – The number of seconds of first-byte latency between CloudFront and your origin. 

1. `origin-lbl` – The number of seconds of last-byte latency between CloudFront and your origin. 

1. `asn` – The autonomous system number (ASN) of the viewer. 

1. `c-country` – A country code that represents the viewer's geographic location, as determined by the viewer's IP address. For a list of country codes, see [ISO 3166-1 alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2).

1. `cache-behavior-path-pattern` – The path pattern that identifies the cache behavior that matched the viewer request. 
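These real-time fields are delivered as strings. As a minimal, hypothetical sketch (not part of any CloudFront tooling), a consumer can convert the millisecond timestamp, origin latency, and ASN fields to native types; the values below are taken from the example log record shown in the CloudWatch Logs section of this topic:

```python
from datetime import datetime, timezone

# Hypothetical record holding a few real-time fields, in the string
# form they arrive in (values from the example log in this topic).
record = {"timestamp(ms)": "1731620046814", "origin-fbl": "0.251", "asn": "16509"}

# timestamp(ms) is milliseconds since the Unix epoch.
ts = datetime.fromtimestamp(int(record["timestamp(ms)"]) / 1000, tz=timezone.utc)
first_byte_latency = float(record["origin-fbl"])  # seconds, CloudFront to origin
viewer_asn = int(record["asn"])

print(ts.isoformat(timespec="seconds"), first_byte_latency, viewer_asn)
```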

## Send logs to CloudWatch Logs
<a name="send-logs-cloudwatch-logs"></a>

To send logs to CloudWatch Logs, create or use an existing CloudWatch Logs log group. For more information about configuring a CloudWatch Logs log group, see [Working with Log Groups and Log Streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html).

After you create your log group, you must have the required permissions to allow standard logging. For more information about the required permissions, see [Logs sent to CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-V2-CloudWatchLogs) in the *Amazon CloudWatch Logs User Guide*.

**Notes**  
When you specify the name of the CloudWatch Logs log group, only use the regex pattern `[\w-]`. For more information, see the [PutDeliveryDestination](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html#API_PutDeliveryDestination_RequestSyntax) API operation in the *Amazon CloudWatch Logs API Reference*.
Verify that your log group resource policy doesn't exceed the size limit. For more information, see the [log group resource policy size limit considerations](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-V2-CloudWatchLogs) in the *Amazon CloudWatch Logs User Guide*.

### Example access log sent to CloudWatch Logs
<a name="example-access-logs-cwl"></a>

```
{ 
"date": "2024-11-14", 
"time": "21:34:06", 
"x-edge-location": "SOF50-P2", 
"asn": "16509", 
"timestamp(ms)": "1731620046814", 
"origin-fbl": "0.251", 
"origin-lbl": "0.251", 
"x-host-header": "d111111abcdef8.cloudfront.net", 
"cs(Cookie)": "examplecookie=value" 
}
```

## Send logs to Firehose
<a name="send-logs-kinesis"></a>

To send logs to Firehose, create or use an existing Firehose delivery stream. Then, specify the delivery stream as the log delivery destination. You must specify a Firehose delivery stream in the US East (N. Virginia) Region (`us-east-1`).

For information about creating your delivery stream, see [Creating an Amazon Data Firehose delivery stream](https://docs.aws.amazon.com/firehose/latest/dev/basic-create.html).

After you create your delivery stream, you must have the required permissions to allow standard logging. For more information, see [Logs sent to Firehose](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-V2-Firehose) in the *Amazon CloudWatch Logs User Guide*.

**Note**  
When you specify the name of the Firehose stream, only use the regex pattern `[\w-]`. For more information, see the [PutDeliveryDestination](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html#API_PutDeliveryDestination_RequestSyntax) API operation in the *Amazon CloudWatch Logs API Reference*.

### Example access log sent to Firehose
<a name="example-access-logs-firehose"></a>

```
{"date":"2024-11-15","time":"19:45:51","x-edge-location":"SOF50-P2","asn":"16509","timestamp(ms)":"1731699951183","origin-fbl":"0.254","origin-lbl":"0.254","x-host-header":"d111111abcdef8.cloudfront.net","cs(Cookie)":"examplecookie=value"}
{"date":"2024-11-15","time":"19:45:52","x-edge-location":"SOF50-P2","asn":"16509","timestamp(ms)":"1731699952950","origin-fbl":"0.125","origin-lbl":"0.125","x-host-header":"d111111abcdef8.cloudfront.net","cs(Cookie)":"examplecookie=value"}
```
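Firehose delivers these records as newline-delimited JSON, one object per line, so downstream consumers can parse them line by line. The following Python sketch (illustrative only; the field names come from the example records above, and your selected fields may differ) shows one way to do that:

```python
import json

def parse_firehose_records(payload: str) -> list[dict]:
    """Parse newline-delimited JSON access log records, one object per line."""
    return [json.loads(line) for line in payload.splitlines() if line.strip()]

# A single record, shaped like the example log entries above.
payload = (
    '{"date":"2024-11-15","time":"19:45:51","x-edge-location":"SOF50-P2",'
    '"asn":"16509","timestamp(ms)":"1731699951183","origin-fbl":"0.254",'
    '"origin-lbl":"0.254","x-host-header":"d111111abcdef8.cloudfront.net",'
    '"cs(Cookie)":"examplecookie=value"}\n'
)

records = parse_firehose_records(payload)
print(records[0]["x-edge-location"])  # SOF50-P2
```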

## Send logs to Amazon S3
<a name="send-logs-s3"></a>

To send your access logs to Amazon S3, create or use an existing S3 bucket. When you enable logging in CloudFront, specify the bucket name. For information about creating a bucket, see [Create a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the *Amazon Simple Storage Service User Guide*.

After you create your bucket, you must have the required permissions to allow standard logging. For more information, see [Logs sent to Amazon S3](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-V2-S3) in the *Amazon CloudWatch Logs User Guide*.
+ After you enable logging, AWS automatically adds the required bucket policies for you.
+ You can also use S3 buckets in the [opt-in AWS Regions](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-regions.html).

**Note**  
If you already enabled standard logging (legacy) and you want to enable standard logging (v2) to Amazon S3, we recommend that you specify a *different* Amazon S3 bucket or use a *separate path* in the same bucket (for example, use a log prefix or partitioning). This helps you keep track of which log files are associated with which distribution and prevents log files from overwriting each other.

**Topics**
+ [Specify an S3 bucket](#prefix-s3-buckets)
+ [Partitioning](#partitioning)
+ [Hive-compatible file name format](#hive-compatible-file-name-format)
+ [Example paths to access logs](#bucket-path-examples)
+ [Example access log sent to Amazon S3](#example-access-logs-s3)

### Specify an S3 bucket
<a name="prefix-s3-buckets"></a>

When you specify an S3 bucket as the delivery destination, note the following.

The S3 bucket name can contain only characters that match the regex pattern `[\w-]`. For more information, see the [PutDeliveryDestination](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html#API_PutDeliveryDestination_RequestSyntax) API operation in the *Amazon CloudWatch Logs API Reference*.

If you specify a prefix for your S3 bucket, your logs appear under that path. If you don't specify a prefix, CloudFront automatically appends the `AWSLogs/{account-id}/CloudFront` prefix for you.

For more information, see [Example paths to access logs](#bucket-path-examples).

### Partitioning
<a name="partitioning"></a>

You can use partitioning to organize your access logs when CloudFront sends them to your S3 bucket, so that you can locate them later based on the folder path that you choose.

You can use the following variables to create a folder path.
+ `{DistributionId}` or `{distributionid}`
+ `{yyyy}`
+ `{MM}`
+ `{dd}`
+ `{HH}`
+ `{accountid}`

You can use any number of variables and specify folder names in your path. CloudFront then uses this path to create a folder structure for you in the S3 bucket.

**Examples**
+ `my_distribution_log_data/{DistributionId}/logs`
+ `/cloudfront/{DistributionId}/my_distribution_log_data/{yyyy}/{MM}/{dd}/{HH}/logs`

**Note**  
You can use either distribution ID variable in the suffix path. However, if you're sending access logs to AWS Glue, you must use the `{distributionid}` variable because AWS Glue expects partition names to be lowercase. Update your existing log configuration in CloudFront to replace `{DistributionId}` with `{distributionid}`.
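As a sketch of how these substitutions behave, the following Python function (illustrative only; CloudFront performs this expansion itself when it delivers logs) expands the date, hour, distribution ID, and `{accountid}` variables in a suffix path:

```python
from datetime import datetime, timezone

def expand_suffix_path(template: str, distribution_id: str,
                       account_id: str, when: datetime) -> str:
    """Expand CloudFront partitioning variables in a suffix path.

    The {account-id} variable is reserved for CloudFront itself,
    so it isn't handled here.
    """
    return (template
            .replace("{DistributionId}", distribution_id)
            .replace("{distributionid}", distribution_id)
            .replace("{accountid}", account_id)
            .replace("{yyyy}", f"{when.year:04d}")
            .replace("{MM}", f"{when.month:02d}")
            .replace("{dd}", f"{when.day:02d}")
            .replace("{HH}", f"{when.hour:02d}"))

path = expand_suffix_path(
    "cloudfront/{distributionid}/{yyyy}/{MM}/{dd}/{HH}/logs",
    "E2EXAMPLE1ABCD", "111122223333",
    datetime(2025, 1, 5, 7, tzinfo=timezone.utc))
print(path)  # cloudfront/E2EXAMPLE1ABCD/2025/01/05/07/logs
```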

### Hive-compatible file name format
<a name="hive-compatible-file-name-format"></a>

You can use this option so that S3 objects that contain delivered access logs use a prefix structure that allows for integration with Apache Hive. For more information, see the [CreateDelivery](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateDelivery.html) API operation.

**Example**  

```
/cloudfront/DistributionId={DistributionId}/my_distribution_log_data/year={yyyy}/month={MM}/day={dd}/hour={HH}/logs
```

For more information about partitioning and the Hive-compatible options, see the [S3DeliveryConfiguration](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_S3DeliveryConfiguration.html) element in the *Amazon CloudWatch Logs API Reference*.

### Example paths to access logs
<a name="bucket-path-examples"></a>

When you specify an S3 bucket as the destination, you can use the following options to create the path to your access logs:
+ An Amazon S3 bucket, with or without a prefix
+ Partitioning, by using a CloudFront-provided variable or entering your own folder names
+ The Hive-compatible option

The following tables show how your access logs appear in your bucket, depending on the options that you choose.

#### Amazon S3 bucket with a prefix
<a name="bucket-with-prefix"></a>


| Amazon S3 bucket name | Partition that you specify in the suffix path | Updated suffix path | Hive-compatible enabled? | Access logs are sent to | 
| --- | --- | --- | --- | --- | 
| amzn-s3-demo-bucket/MyLogPrefix | None | None | No | amzn-s3-demo-bucket/MyLogPrefix/ | 
| amzn-s3-demo-bucket/MyLogPrefix | myFolderA/ | myFolderA/ | No | amzn-s3-demo-bucket/MyLogPrefix/myFolderA/ | 
| amzn-s3-demo-bucket/MyLogPrefix | myFolderA/{yyyy} | myFolderA/{yyyy} | Yes | amzn-s3-demo-bucket/MyLogPrefix/myFolderA/year=2025 | 

#### Amazon S3 bucket without a prefix
<a name="bucket-without-prefix"></a>


| Amazon S3 bucket name | Partition that you specify in the suffix path | Updated suffix path | Hive-compatible enabled? | Access logs are sent to | 
| --- | --- | --- | --- | --- | 
| amzn-s3-demo-bucket | None | AWSLogs/{account-id}/CloudFront/ | No | amzn-s3-demo-bucket/AWSLogs/<your-account-ID>/CloudFront/ | 
| amzn-s3-demo-bucket | myFolderA/ | AWSLogs/{account-id}/CloudFront/myFolderA/ | No | amzn-s3-demo-bucket/AWSLogs/<your-account-ID>/CloudFront/myFolderA/ | 
| amzn-s3-demo-bucket | myFolderA/ | AWSLogs/{account-id}/CloudFront/myFolderA/ | Yes | amzn-s3-demo-bucket/AWSLogs/aws-account-id=<your-account-ID>/CloudFront/myFolderA/ | 
| amzn-s3-demo-bucket | myFolderA/{yyyy} | AWSLogs/{account-id}/CloudFront/myFolderA/{yyyy} | Yes | amzn-s3-demo-bucket/AWSLogs/aws-account-id=<your-account-ID>/CloudFront/myFolderA/year=2025 | 

#### AWS account ID as a partition
<a name="bucket-account-id-partition"></a>


| Amazon S3 bucket name | Partition that you specify in the suffix path | Updated suffix path | Hive-compatible enabled? | Access logs are sent to | 
| --- | --- | --- | --- | --- | 
| amzn-s3-demo-bucket | None | AWSLogs/{account-id}/CloudFront/ | Yes | amzn-s3-demo-bucket/AWSLogs/aws-account-id=<your-account-ID>/CloudFront/ | 
| amzn-s3-demo-bucket | myFolderA/{accountid} | AWSLogs/{account-id}/CloudFront/myFolderA/{accountid} | Yes | amzn-s3-demo-bucket/AWSLogs/aws-account-id=<your-account-ID>/CloudFront/myFolderA/accountid=<your-account-ID> | 

**Notes**  
The `{account-id}` variable is reserved for CloudFront. CloudFront automatically adds this variable to your suffix path if you specify an Amazon S3 bucket *without* a prefix. If your logs are Hive-compatible, this variable appears as `aws-account-id`.
You can use the `{accountid}` variable so that CloudFront adds your account ID to the suffix path. If your logs are Hive-compatible, this variable appears as `accountid`.
For more information about the suffix path, see [S3DeliveryConfiguration](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_S3DeliveryConfiguration.html).

### Example access log sent to Amazon S3
<a name="example-access-logs-s3"></a>

```
#Fields: date time x-edge-location asn timestamp(ms) x-host-header cs(Cookie)
2024-11-14    22:30:25    SOF50-P2    16509    1731623425421    d111111abcdef8.cloudfront.net    examplecookie=value2
```
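Logs delivered to Amazon S3 use a W3C-style layout: a `#Fields:` header line that names the columns, followed by tab-separated records. A minimal Python parser for that layout (illustrative only; the selected fields match the example above) could look like this:

```python
def parse_w3c_log(text: str) -> list[dict]:
    """Parse a W3C-style access log: a '#Fields:' header line
    followed by tab-separated records, one per line."""
    fields, records = [], []
    for line in text.splitlines():
        if line.startswith("#Fields:"):
            fields = line[len("#Fields:"):].split()
        elif line and not line.startswith("#"):
            records.append(dict(zip(fields, line.split("\t"))))
    return records

log = ("#Fields: date time x-edge-location asn timestamp(ms) x-host-header cs(Cookie)\n"
       "2024-11-14\t22:30:25\tSOF50-P2\t16509\t1731623425421\t"
       "d111111abcdef8.cloudfront.net\texamplecookie=value2\n")

entry = parse_w3c_log(log)[0]
print(entry["x-edge-location"], entry["asn"])  # SOF50-P2 16509
```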

## Disable standard logging
<a name="delete-standard-log-destination"></a>

You can disable standard logging for your distribution if you no longer need it.

**To disable standard logging**

1. Sign in to the CloudFront console.

1. Choose **Distributions**, and then choose your distribution ID.

1. Choose **Logging** and then under **Access log destinations**, select the destination.

1. Choose **Manage** and then choose **Delete**.

1. Repeat the previous step for each additional standard logging destination.

**Note**  
When you delete standard logging from the CloudFront console, this action only deletes the delivery and the delivery destination. It doesn't delete the delivery source from your AWS account. To delete a delivery source, specify the delivery source name in the `aws logs delete-delivery-source --name DeliverySourceName` command. For more information, see [DeleteDeliverySource](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DeleteDeliverySource.html) in the *Amazon CloudWatch Logs API Reference*.

## Troubleshoot
<a name="troubleshooting-access-logs-v2"></a>

Use the following information to fix common issues when you work with CloudFront standard logging (v2).

### Delivery source already exists
<a name="access-logging-resource-already-used"></a>

When you enable standard logging for a distribution, you create a delivery source. You then use that delivery source to create deliveries to the destination types that you want: CloudWatch Logs, Firehose, or Amazon S3. Currently, you can have only one delivery source per distribution. If you try to create another delivery source for the same distribution, the following error message appears.

```
This ResourceId has already been used in another Delivery Source in this account
```

To create another delivery source, delete the existing one first. For more information, see [DeleteDeliverySource](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DeleteDeliverySource.html) in the *Amazon CloudWatch Logs API Reference*.

### I changed the suffix path and the Amazon S3 bucket can't receive my logs
<a name="access-logging-s3-permission"></a>

If you enable standard logging (v2) and specify a bucket ARN without a prefix, CloudFront appends the following default suffix path: `AWSLogs/{account-id}/CloudFront`. If you use the CloudFront console or the [UpdateDeliveryConfiguration](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_UpdateDeliveryConfiguration.html) API operation to specify a different suffix path, you must update the Amazon S3 bucket policy to allow delivery to the same path.

**Example: Updating the suffix path**  

1. Your default suffix path is `AWSLogs/{account-id}/CloudFront` and you replace it with `myFolderA`. 

1. Because your new suffix path is different from the path specified in the Amazon S3 bucket policy, your access logs won't be delivered.

1. To resolve this issue, do one of the following:
   + Update the Amazon S3 bucket permission from `amzn-s3-demo-bucket/AWSLogs/<your-account-ID>/CloudFront/*` to `amzn-s3-demo-bucket/myFolderA/*`.
   + Update your logging configuration to use the default suffix path again: `AWSLogs/{account-id}/CloudFront`.
For more information, see [Permissions](#permissions-standard-logging).

## Delete log files
<a name="standard-logs-v2-delete"></a>

CloudFront doesn't automatically delete log files from your destination. For information about deleting log files, see the following topics:

**Amazon S3**
+ [Deleting objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeletingObjects.html) in the *Amazon Simple Storage Service Console User Guide*

**CloudWatch Logs**
+ [Working with log groups and log streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) in the *Amazon CloudWatch Logs User Guide*
+ [DeleteLogGroup](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_DeleteLogGroup.html) in the *Amazon CloudWatch Logs API Reference*

**Firehose**
+ [DeleteDeliveryStream](https://docs.aws.amazon.com/firehose/latest/APIReference/API_DeleteDeliveryStream.html) in the *Amazon Data Firehose API Reference*

## Pricing
<a name="pricing-standard-logs"></a>

CloudFront doesn’t charge for enabling standard logs. However, you can incur charges for log delivery, ingestion, storage, or access, depending on the log delivery destination that you select. For more information, see [Amazon CloudWatch Logs Pricing](https://aws.amazon.com/cloudwatch/pricing/). Under **Paid Tier**, choose the **Logs** tab, and then under **Vended Logs**, see the information for each delivery destination.

For more information about pricing for each AWS service, see the following topics:
+ [Amazon CloudWatch Logs Pricing](https://aws.amazon.com/cloudwatch/pricing/)
+ [Amazon Data Firehose Pricing](https://aws.amazon.com/kinesis/data-firehose/pricing/)
+ [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/)

**Note**  
There are no additional charges for log delivery to Amazon S3, though you incur Amazon S3 charges for storing and accessing the log files. If you enable the **Parquet** option to convert your access logs to Apache Parquet, this option incurs CloudWatch charges. For more information, see the [Vended Logs section for CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/).

# Configure standard logging (legacy)
<a name="standard-logging-legacy-s3"></a>

**Notes**  
This topic is for the previous version of standard logging. For the latest version, see [Configure standard logging (v2)](standard-logging.md).
If you already enabled standard logging (legacy) and you want to enable standard logging (v2) to Amazon S3, we recommend that you specify a *different* Amazon S3 bucket or use a *separate path* in the same bucket (for example, use a log prefix or partitioning). This helps you keep track of which log files are associated with which distribution and prevents log files from overwriting each other.

To get started with standard logging (legacy), complete the following steps:

1. Choose an Amazon S3 bucket that will receive your logs and add the required permissions.

1. Configure standard logging (legacy) from the CloudFront console or the CloudFront API. You can only choose an Amazon S3 bucket to receive your logs.

1. View your access logs.

## Choose an Amazon S3 bucket for standard logs
<a name="access-logs-choosing-s3-bucket"></a>

When you enable logging for a distribution, you specify the Amazon S3 bucket that you want CloudFront to store log files in. If you're using Amazon S3 as your origin, we recommend that you use a *separate* bucket for your log files.

Specify the Amazon S3 bucket that you want CloudFront to store access logs in, for example, `amzn-s3-demo-bucket.s3.amazonaws.com`.

You can store the log files for multiple distributions in the same bucket. When you enable logging, you can specify an optional prefix for the file names, so you can keep track of which log files are associated with which distributions.

**About choosing an S3 bucket**  
Your bucket must have access control lists (ACLs) enabled. If you choose a bucket that doesn't have ACLs enabled in the CloudFront console, an error message appears. See [Permissions](#AccessLogsBucketAndFileOwnership).
Don't choose an Amazon S3 bucket with [S3 Object Ownership](https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html) set to **bucket owner enforced**. That setting disables ACLs for the bucket and the objects in it, which prevents CloudFront from delivering log files to the bucket.
Standard logging (legacy) doesn't support Amazon S3 buckets in opt-in AWS Regions. Choose a Region that is enabled by default, or use [standard logging (v2)](standard-logging.md), which supports opt-in Regions and additional features. For a list of default and opt-in Regions, see [AWS Regions](https://docs.aws.amazon.com/global-infrastructure/latest/regions/aws-regions.html).

## Permissions
<a name="AccessLogsBucketAndFileOwnership"></a>

**Important**  
Starting in April 2023, you must enable S3 ACLs for new S3 buckets used for CloudFront standard logs. You can enable ACLs when you [create a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-ownership-new-bucket.html), or enable ACLs for an [existing bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-ownership-existing-bucket.html).  
For more information about the changes, see [Default settings for new S3 buckets FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-faq.html) in the *Amazon Simple Storage Service User Guide* and [Heads-Up: Amazon S3 Security Changes Are Coming in April of 2023](https://aws.amazon.com/blogs/aws/heads-up-amazon-s3-security-changes-are-coming-in-april-of-2023/) in the *AWS News Blog*.

Your AWS account must have the following permissions for the bucket that you specify for log files:
+ The ACL for the bucket must grant you `FULL_CONTROL`. If you're the bucket owner, your account has this permission by default. If you're not, the bucket owner must update the ACL for the bucket.
+ `s3:GetBucketAcl`
+ `s3:PutBucketAcl`

**ACL for the bucket**  
When you create or update a distribution and enable logging, CloudFront uses these permissions to update the ACL for the bucket to give the `awslogsdelivery` account `FULL_CONTROL` permission. The `awslogsdelivery` account writes log files to the bucket. If your account doesn't have the required permissions to update the ACL, creating or updating the distribution will fail.  
In some circumstances, if you programmatically submit a request to create a bucket but a bucket with the specified name already exists, S3 resets permissions on the bucket to the default value. If you configured CloudFront to save access logs in an S3 bucket and you stop getting logs in that bucket, check permissions on the bucket to ensure that CloudFront has the necessary permissions.

**Restoring the ACL for the bucket**  
If you remove permissions for the `awslogsdelivery` account, CloudFront won't be able to save logs to the S3 bucket. To enable CloudFront to start saving logs for your distribution again, restore the ACL permission by doing one of the following:  
+ Disable logging for your distribution in CloudFront, and then enable it again. For more information, see [Standard logging](DownloadDistValuesGeneral.md#DownloadDistValuesLoggingOnOff).
+ Add the ACL permission for `awslogsdelivery` manually by navigating to the S3 bucket in the Amazon S3 console and adding permission. To add the ACL for `awslogsdelivery`, you must provide the canonical ID for the account, which is the following:

  `c4c1ede66af53448b93c283ce9448c4ba468c9432aa01d700d3878632f77d2d0`

  

  For more information about adding ACLs to S3 buckets, see [Configuring ACLs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/managing-acls.html) in the *Amazon Simple Storage Service User Guide*.

**ACL for each log file**  
In addition to the ACL on the bucket, there's an ACL on each log file. The bucket owner has `FULL_CONTROL` permission on each log file, the distribution owner (if different from the bucket owner) has no permission, and the `awslogsdelivery` account has read and write permissions. 

**Disabling logging**  
If you disable logging, CloudFront doesn't delete the ACLs for either the bucket or the log files. You can delete the ACLs if needed.

### Required key policy for SSE-KMS buckets
<a name="AccessLogsKMSPermissions"></a>

If the S3 bucket for your standard logs uses server-side encryption with AWS KMS keys (SSE-KMS) by using a customer managed key, you must add the following statement to the key policy for your customer managed key. This allows CloudFront to write log files to the bucket. You can't use SSE-KMS with the AWS managed key because CloudFront won't be able to write log files to the bucket.

```
{
    "Sid": "Allow CloudFront to use the key to deliver logs",
    "Effect": "Allow",
    "Principal": {
        "Service": "delivery.logs.amazonaws.com"
    },
    "Action": "kms:GenerateDataKey*",
    "Resource": "*"
}
```

If the S3 bucket for your standard logs uses SSE-KMS with an [S3 Bucket Key](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-key.html), you also need to add the `kms:Decrypt` permission to the policy statement. In that case, the full policy statement looks like the following.

```
{
    "Sid": "Allow CloudFront to use the key to deliver logs",
    "Effect": "Allow",
    "Principal": {
        "Service": "delivery.logs.amazonaws.com"
    },
    "Action": [
        "kms:GenerateDataKey*",
        "kms:Decrypt"
    ],
    "Resource": "*"
}
```

**Note**  
When you enable SSE-KMS for your S3 bucket, specify the complete ARN for the customer managed key. For more information, see [Specifying server-side encryption with AWS KMS keys (SSE-KMS)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/specifying-kms-encryption.html) in the *Amazon Simple Storage Service User Guide*.

## Enable standard logging (legacy)
<a name="standard-logs-legacy-enable"></a>

To enable standard logs, use the CloudFront console or the CloudFront API.

**Contents**
+ [Enable standard logging (legacy) (CloudFront console)](#standard-logs-legacy-enable-console)
+ [Enable standard logging (legacy) (CloudFront API)](#standard-logs-legacy-enable-api)

### Enable standard logging (legacy) (CloudFront console)
<a name="standard-logs-legacy-enable-console"></a>

**To enable standard logs for a CloudFront distribution (console)**

1. Use the CloudFront console to create a [new distribution](distribution-web-creating-console.md) or [update an existing one](HowToUpdateDistribution.md#HowToUpdateDistributionProcedure).

1. For the **Standard logging** section, for **Log delivery**, choose **On**.

1. (Optional) For **Cookie logging**, choose **On** if you want to include cookies in your logs. For more information, see [Cookie logging](DownloadDistValuesGeneral.md#DownloadDistValuesCookieLogging).
**Tip**  
Cookie logging is a global setting that applies to *all* standard logs for your distribution. You can’t override this setting for separate delivery destinations.

1. For the **Deliver to** section, specify **Amazon S3 (Legacy)**.

1. Specify your Amazon S3 bucket. If you don't have one already, you can choose **Create** or see the documentation to [create a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html).

1. (Optional) For **Log prefix**, specify the string, if any, that you want CloudFront to prefix to the access log file names for this distribution, for example, `exampleprefix/`. The trailing slash ( / ) is optional but recommended to simplify browsing your log files. For more information, see [Log prefix](DownloadDistValuesGeneral.md#DownloadDistValuesLogPrefix).

1. Complete the steps to update or create your distribution.

1. From the **Logs** page, verify that the standard logs status is **Enabled** next to the distribution.

   For more information about the standard logging delivery and log fields, see the [Standard logging reference](standard-logs-reference.md).

### Enable standard logging (legacy) (CloudFront API)
<a name="standard-logs-legacy-enable-api"></a>

You can also use the CloudFront API to enable standard logs for your distributions. 

**To enable standard logs for a distribution (CloudFront API)**
+ Use the [CreateDistribution](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_CreateDistribution.html) or [UpdateDistribution](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_UpdateDistribution.html) API operation and configure the [LoggingConfig](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_LoggingConfig.html) object.
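For example, the `LoggingConfig` portion of the distribution configuration might look like the following. The bucket and prefix shown here are placeholder values; the field names are defined by the `LoggingConfig` API type.

```
{
  "LoggingConfig": {
    "Enabled": true,
    "IncludeCookies": false,
    "Bucket": "amzn-s3-demo-bucket.s3.amazonaws.com",
    "Prefix": "example-prefix/"
  }
}
```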

## Edit standard logging settings
<a name="ChangeSettings"></a>

You can enable or disable logging, change the Amazon S3 bucket where your logs are stored, and change the prefix for log files by using the [CloudFront console](https://console.aws.amazon.com/cloudfront/v4/home) or the CloudFront API. Your changes to logging settings take effect within 12 hours.

For more information, see the following topics:
+ To update a distribution using the CloudFront console, see [Update a distribution](HowToUpdateDistribution.md).
+ To update a distribution using the CloudFront API, see [UpdateDistribution](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_UpdateDistribution.html) in the *Amazon CloudFront API Reference*.

## Send logs to Amazon S3
<a name="standard-logs-in-s3"></a>

When you send your logs to Amazon S3, your logs appear in the following format.

### File name format
<a name="AccessLogsFileNaming"></a>

The name of each log file that CloudFront saves in your Amazon S3 bucket uses the following file name format:

`<optional prefix>/<distribution ID>.YYYY-MM-DD-HH.unique-ID.gz`

The date and time are in Coordinated Universal Time (UTC).

For example, if you use `example-prefix` as the prefix, and your distribution ID is `EMLARXS9EXAMPLE`, your file names look similar to this:

`example-prefix/EMLARXS9EXAMPLE.2019-11-14-20.RT4KCN4SGK9.gz`

When you enable logging for a distribution, you can specify an optional prefix for the file names, so you can keep track of which log files are associated with which distributions. If you include a value for the log file prefix and your prefix doesn't end with a forward slash (`/`), CloudFront appends one automatically. If your prefix does end with a forward slash, CloudFront doesn't add another one.

The `.gz` at the end of the file name indicates that CloudFront has compressed the log file using gzip.
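Because the file name format is fixed, you can recover the distribution ID and the covered hour from an object key. The following Python sketch (illustrative only) parses the example file name shown above:

```python
import re

# Pattern for legacy access log object keys:
#   <optional prefix>/<distribution ID>.YYYY-MM-DD-HH.unique-ID.gz
LOG_KEY = re.compile(
    r"^(?:(?P<prefix>.+)/)?"
    r"(?P<distribution_id>[A-Z0-9]+)\."
    r"(?P<date>\d{4}-\d{2}-\d{2})-(?P<hour>\d{2})\."
    r"(?P<unique_id>[A-Za-z0-9]+)\.gz$")

m = LOG_KEY.match("example-prefix/EMLARXS9EXAMPLE.2019-11-14-20.RT4KCN4SGK9.gz")
print(m["distribution_id"], m["date"], m["hour"])  # EMLARXS9EXAMPLE 2019-11-14 20
```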

## Standard log file format
<a name="LogFileFormat"></a>

Each entry in a log file gives details about a single viewer request. The log files have the following characteristics:
+ Use the [W3C extended log file format](https://www.w3.org/TR/WD-logfile.html).
+ Contain tab-separated values.
+ Contain records that are not necessarily in chronological order.
+ Contain two header lines: one with the file format version, and another that lists the W3C fields included in each record.
+ Contain URL-encoded equivalents for spaces and certain other characters in field values.

  URL-encoded equivalents are used for the following characters:
  + ASCII character codes 0 through 32, inclusive
  + ASCII character codes 127 and higher
  + All characters in the following table

  The URL encoding standard is defined in [RFC 1738](https://tools.ietf.org/html/rfc1738.html).


|  URL-Encoded value  |  Character  | 
| --- | --- | 
|  %3C  |  <  | 
|  %3E  |  >  | 
|  %22  |  "  | 
|  %23  |  #  | 
|  %25  |  %  | 
|  %7B  |  {  | 
|  %7D  |  }  | 
|  %7C  |  \|  | 
|  %5C  |  \\  | 
|  %5E  |  ^  | 
|  %7E  |  ~  | 
|  %5B  |  [  | 
|  %5D  |  ]  | 
|  %60  |  `  | 
|  %27  |  '  | 
|  %20  |  space  | 
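These encoded values follow standard URL encoding, so any URL decoder can restore them. For example, in Python (illustrative only; the sample value is hypothetical):

```python
from urllib.parse import unquote

# A log field value with URL-encoded characters, as CloudFront writes it.
encoded = "%7B%22query%22%3A%20%22cats%22%7D"
decoded = unquote(encoded)
print(decoded)  # {"query": "cats"}
```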

## Delete log files
<a name="DeletingLogFiles"></a>

CloudFront doesn't automatically delete log files from your Amazon S3 bucket. For information about deleting log files from an Amazon S3 bucket, see [Deleting objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeletingObjects.html) in the *Amazon Simple Storage Service Console User Guide*.

## Pricing
<a name="AccessLogsCharges"></a>

Standard logging is an optional feature of CloudFront. CloudFront doesn’t charge for enabling standard logs. However, you accrue the usual Amazon S3 charges for storing and accessing the files on Amazon S3. You can delete them at any time.

For more information about Amazon S3 pricing, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

For more information about CloudFront pricing, see [CloudFront Pricing](https://aws.amazon.com/cloudfront/pricing/).

# Standard logging reference
<a name="standard-logs-reference"></a>

The following sections apply to both standard logging (v2) and standard logging (legacy).

**Topics**
+ [Timing of log file delivery](#access-logs-timing)
+ [How requests are logged when the request URL or headers exceed the maximum size](#access-logs-request-URL-size)
+ [Log file fields](#BasicDistributionFileFormat)
+ [Analyze logs](#access-logs-analyzing)

## Timing of log file delivery
<a name="access-logs-timing"></a>

CloudFront delivers logs for a distribution up to several times an hour. In general, a log file contains information about the requests that CloudFront received during a given time period. CloudFront usually delivers the log file for that time period to your destination within an hour of the events that appear in the log. Note, however, that some or all log file entries for a time period can sometimes be delayed by up to 24 hours. When log entries are delayed, CloudFront saves them in a log file for which the file name includes the date and time of the period in which the requests *occurred*, not the date and time when the file was delivered.

When creating a log file, CloudFront consolidates information for your distribution from all of the edge locations that received requests for your objects during the time period that the log file covers.

CloudFront can save more than one file for a time period depending on how many requests CloudFront receives for the objects associated with a distribution.

CloudFront begins to reliably deliver access logs about four hours after you enable logging. You might get a few access logs before that time.

**Note**  
If no users request your objects during the time period, you don't receive any log files for that period.

## How requests are logged when the request URL or headers exceed the maximum size
<a name="access-logs-request-URL-size"></a>

If the total size of all request headers, including cookies, exceeds 20 KB, or if the URL exceeds 8,192 bytes, CloudFront can't parse the request completely and can't log it. Because the request isn't logged, the HTTP error status code that was returned doesn't appear in the log files.

If the request body exceeds the maximum size, the request is logged, including the HTTP error status code.

## Log file fields
<a name="BasicDistributionFileFormat"></a>

The log file for a distribution contains 33 fields. The following list contains each field name, in order, along with a description of the information in that field.

1. **`date`**

   The date on which the event occurred in the format `YYYY-MM-DD`. For example, `2019-06-30`. The date and time are in Coordinated Universal Time (UTC). For WebSocket connections, this is the date when the connection closed.

1. **`time`**

   The time when the CloudFront server finished responding to the request (in UTC), for example, `01:42:39`. For WebSocket connections, this is the time when the connection is closed.

1. **`x-edge-location`**

   The edge location that served the request. Each edge location is identified by a three-letter code and an arbitrarily assigned number (for example, DFW3). The three-letter code typically corresponds with the International Air Transport Association (IATA) airport code for an airport near the edge location's geographic location. (These abbreviations might change in the future.)

1. **`sc-bytes`**

   The total number of bytes that the server sent to the viewer in response to the request, including headers. For WebSocket and gRPC connections, this is the total number of bytes sent from the server to the client through the connection.

1. **`c-ip`**

   The IP address of the viewer that made the request, for example, `192.0.2.183` or `2001:0db8:85a3::8a2e:0370:7334`. If the viewer used an HTTP proxy or a load balancer to send the request, the value of this field is the IP address of the proxy or load balancer. See also the `x-forwarded-for` field.

1. **`cs-method`**

   The HTTP request method received from the viewer.

1. **`cs(Host)`**

   The domain name of the CloudFront distribution (for example, d111111abcdef8.cloudfront.net).

1. **`cs-uri-stem`**

   The portion of the request URL that identifies the path and object (for example, `/images/cat.jpg`). Question marks (?) in URLs and query strings are not included in the log.

1. **`sc-status`**

   Contains one of the following values:
   + The HTTP status code of the server's response (for example, `200`).
   + `000`, which indicates that the viewer closed the connection before the server could respond to the request. If the viewer closes the connection after the server starts to send the response, this field contains the HTTP status code of the response that the server started to send.

1. **`cs(Referer)`**

   The value of the `Referer` header in the request. This is the name of the domain that originated the request. Common referrers include search engines, other websites that link directly to your objects, and your own website.

1. **`cs(User-Agent)`**

   The value of the `User-Agent` header in the request. The `User-Agent` header identifies the source of the request, such as the type of device and browser that submitted the request or, if the request came from a search engine, which search engine.

1. **`cs-uri-query`**

   The query string portion of the request URL, if any.

   When a URL doesn't contain a query string, this field's value is a hyphen (-). For more information, see [Cache content based on query string parameters](QueryStringParameters.md).

1. **`cs(Cookie)`**

   The `Cookie` header in the request, including name-value pairs and the associated attributes.

   If you enable cookie logging, CloudFront logs the cookies in all requests regardless of which cookies you choose to forward to the origin. When a request doesn't include a cookie header, this field's value is a hyphen (-). For more information about cookies, see [Cache content based on cookies](Cookies.md).

1. **`x-edge-result-type`**

   How the server classified the response after the last byte left the server. In some cases, the result type can change between the time that the server is ready to send the response and the time that it finishes sending the response. See also the `x-edge-response-result-type` field.

   For example, in HTTP streaming, suppose the server finds a segment of the stream in the cache. In that scenario, the value of this field would ordinarily be `Hit`. However, if the viewer closes the connection before the server has delivered the entire segment, the final result type (and the value of this field) is `Error`.

   WebSocket and gRPC connections will have a value of `Miss` for this field because the content is not cacheable and is proxied directly to the origin.

   Possible values include:
   + `Hit` – The server served the object to the viewer from the cache.
   + `RefreshHit` – The server found the object in the cache but the object had expired, so the server contacted the origin to verify that the cache had the latest version of the object.
   + `Miss` – The request could not be satisfied by an object in the cache, so the server forwarded the request to the origin and returned the result to the viewer.
   + `LimitExceeded` – The request was denied because a CloudFront quota (formerly referred to as a limit) was exceeded.
   + `CapacityExceeded` – The server returned an HTTP 503 status code because it didn't have enough capacity at the time of the request to serve the object.
   + `Error` – Typically, this means the request resulted in a client error (the value of the `sc-status` field is in the `4xx` range) or a server error (the value of the `sc-status` field is in the `5xx` range). If the value of the `sc-status` field is `200`, or if the value of this field is `Error` and the value of the `x-edge-response-result-type` field is not `Error`, it means the HTTP request was successful but the client disconnected before receiving all of the bytes.
   + `Redirect` – The server redirected the viewer from HTTP to HTTPS according to the distribution settings.
   + `LambdaExecutionError` – The Lambda@Edge function associated with the distribution didn't complete due to a malformed association, a function timeout, an AWS dependency issue, or another general availability problem.

1. **`x-edge-request-id`**

   An opaque string that uniquely identifies a request. CloudFront also sends this string in the `x-amz-cf-id` response header.

1. **`x-host-header`**

   The value that the viewer included in the `Host` header of the request. If you're using the CloudFront domain name in your object URLs (such as d111111abcdef8.cloudfront.net), this field contains that domain name. If you're using alternate domain names (CNAMEs) in your object URLs (such as www.example.com), this field contains the alternate domain name.

   If you're using alternate domain names, see `cs(Host)` in field 7 for the domain name that is associated with your distribution.

1. **`cs-protocol`**

   The protocol of the viewer request (`http`, `https`, `grpcs`, `ws`, or `wss`).

1. **`cs-bytes`**

   The total number of bytes of data that the viewer included in the request, including headers. For WebSocket and gRPC connections, this is the total number of bytes sent from the client to the server on the connection.

1. **`time-taken`**

   The number of seconds (to the thousandth of a second, for example, 0.082) from when the server receives the viewer's request to when the server writes the last byte of the response to the output queue, as measured on the server. From the perspective of the viewer, the total time to get the full response will be longer than this value because of network latency and TCP buffering.

1. **`x-forwarded-for`**

   If the viewer used an HTTP proxy or a load balancer to send the request, the value of the `c-ip` field is the IP address of the proxy or load balancer. In that case, this field is the IP address of the viewer that originated the request. This field can contain multiple comma-separated IP addresses. Each IP address can be an IPv4 address (for example, `192.0.2.183`) or an IPv6 address (for example, `2001:0db8:85a3::8a2e:0370:7334`).

   If the viewer did not use an HTTP proxy or a load balancer, the value of this field is a hyphen (-).

1. **`ssl-protocol`**

   When the request used HTTPS, this field contains the SSL/TLS protocol that the viewer and server negotiated for transmitting the request and response. For a list of possible values, see the supported SSL/TLS protocols in [Supported protocols and ciphers between viewers and CloudFront](secure-connections-supported-viewer-protocols-ciphers.md).

   When `cs-protocol` in field 17 is `http`, the value for this field is a hyphen (-).

1. **`ssl-cipher`**

   When the request used HTTPS, this field contains the SSL/TLS cipher that the viewer and server negotiated for encrypting the request and response. For a list of possible values, see the supported SSL/TLS ciphers in [Supported protocols and ciphers between viewers and CloudFront](secure-connections-supported-viewer-protocols-ciphers.md).

   When `cs-protocol` in field 17 is `http`, the value for this field is a hyphen (-).

1. **`x-edge-response-result-type`**

   How the server classified the response just before returning the response to the viewer. See also the `x-edge-result-type` field. Possible values include:
   + `Hit` – The server served the object to the viewer from the cache.
   + `RefreshHit` – The server found the object in the cache but the object had expired, so the server contacted the origin to verify that the cache had the latest version of the object.
   + `Miss` – The request could not be satisfied by an object in the cache, so the server forwarded the request to the origin server and returned the result to the viewer.
   + `LimitExceeded` – The request was denied because a CloudFront quota (formerly referred to as a limit) was exceeded.
   + `CapacityExceeded` – The server returned a 503 error because it didn't have enough capacity at the time of the request to serve the object.
   + `Error` – Typically, this means the request resulted in a client error (the value of the `sc-status` field is in the `4xx` range) or a server error (the value of the `sc-status` field is in the `5xx` range).

     If the value of the `x-edge-result-type` field is `Error` and the value of this field is not `Error`, the client disconnected before finishing the download.
   + `Redirect` – The server redirected the viewer from HTTP to HTTPS according to the distribution settings.
   + `LambdaExecutionError` – The Lambda@Edge function associated with the distribution didn't complete due to a malformed association, a function timeout, an AWS dependency issue, or another general availability problem.

1. **`cs-protocol-version`**

   The HTTP version that the viewer specified in the request. Possible values include `HTTP/0.9`, `HTTP/1.0`, `HTTP/1.1`, `HTTP/2.0`, and `HTTP/3.0`.

1. **`fle-status`**

   When [field-level encryption](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html) is configured for a distribution, this field contains a code that indicates whether the request body was successfully processed. When the server successfully processes the request body, encrypts values in the specified fields, and forwards the request to the origin, the value of this field is `Processed`. The value of `x-edge-result-type` can still indicate a client-side or server-side error in this case.

   Possible values for this field include:
   + `ForwardedByContentType` – The server forwarded the request to the origin without parsing or encryption because no content type was configured.
   + `ForwardedByQueryArgs` – The server forwarded the request to the origin without parsing or encryption because the request contains a query argument that wasn't in the configuration for field-level encryption.
   + `ForwardedDueToNoProfile` – The server forwarded the request to the origin without parsing or encryption because no profile was specified in the configuration for field-level encryption.
   + `MalformedContentTypeClientError` – The server rejected the request and returned an HTTP 400 status code to the viewer because the value of the `Content-Type` header was in an invalid format.
   + `MalformedInputClientError` – The server rejected the request and returned an HTTP 400 status code to the viewer because the request body was in an invalid format.
   + `MalformedQueryArgsClientError` – The server rejected the request and returned an HTTP 400 status code to the viewer because a query argument was empty or in an invalid format.
   + `RejectedByContentType` – The server rejected the request and returned an HTTP 400 status code to the viewer because no content type was specified in the configuration for field-level encryption.
   + `RejectedByQueryArgs` – The server rejected the request and returned an HTTP 400 status code to the viewer because no query argument was specified in the configuration for field-level encryption.
   + `ServerError` – The origin server returned an error.

   If the request exceeds a field-level encryption quota (formerly referred to as a limit), this field contains one of the following error codes, and the server returns HTTP status code 400 to the viewer. For a list of the current quotas on field-level encryption, see [Quotas on field-level encryption](cloudfront-limits.md#limits-field-level-encryption).
   + `FieldLengthLimitClientError` – A field that is configured to be encrypted exceeded the maximum length allowed.
   + `FieldNumberLimitClientError` – A request that the distribution is configured to encrypt contains more than the number of fields allowed.
   + `RequestLengthLimitClientError` – The length of the request body exceeded the maximum length allowed when field-level encryption is configured.

   If field-level encryption is not configured for the distribution, the value of this field is a hyphen (-).

1. **`fle-encrypted-fields`**

   The number of [field-level encryption](field-level-encryption.md) fields that the server encrypted and forwarded to the origin. CloudFront servers stream the processed request to the origin as they encrypt data, so this field can have a value even if the value of `fle-status` is an error.

   If field-level encryption is not configured for the distribution, the value of this field is a hyphen (-).

1. **`c-port`**

   The port number of the request from the viewer.

1. **`time-to-first-byte`**

   The number of seconds between receiving the request and writing the first byte of the response, as measured on the server.

1. **`x-edge-detailed-result-type`**

   This field contains the same value as the `x-edge-result-type` field, except in the following cases:
   + When the object was served to the viewer from the [Origin Shield](origin-shield.md) layer, this field contains `OriginShieldHit`.
   + When the object was not in the CloudFront cache and the response was generated by an [origin request Lambda@Edge function](lambda-at-the-edge.md), this field contains `MissGeneratedResponse`.
   + When the value of the `x-edge-result-type` field is `Error`, this field contains one of the following values with more information about the error:
     + `AbortedOrigin` – The server encountered an issue with the origin.
     + `ClientCommError` – The response to the viewer was interrupted due to a communication problem between the server and the viewer.
     + `ClientGeoBlocked` – The distribution is configured to refuse requests from the viewer's geographic location.
     + `ClientHungUpRequest` – The viewer stopped prematurely while sending the request.
     + `Error` – An error occurred for which the error type doesn't fit any of the other categories. This error type can occur when the server serves an error response from the cache.
     + `InvalidRequest` – The server received an invalid request from the viewer.
     + `InvalidRequestBlocked` – Access to the requested resource is blocked.
     + `InvalidRequestCertificate` – The distribution doesn't match the SSL/TLS certificate for which the HTTPS connection was established.
     + `InvalidRequestHeader` – The request contained an invalid header.
     + `InvalidRequestMethod` – The distribution is not configured to handle the HTTP request method that was used. This can happen when the distribution supports only cacheable requests.
     + `OriginCommError` – The request timed out while connecting to the origin, or reading data from the origin.
     + `OriginConnectError` – The server couldn't connect to the origin.
     + `OriginContentRangeLengthError` – The `Content-Length` header in the origin's response doesn't match the length in the `Content-Range` header.
     + `OriginDnsError` – The server couldn't resolve the origin's domain name.
     + `OriginError` – The origin returned an incorrect response.
     + `OriginHeaderTooBigError` – A header returned by the origin is too big for the edge server to process.
     + `OriginInvalidResponseError` – The origin returned an invalid response.
     + `OriginReadError` – The server couldn't read from the origin.
     + `OriginWriteError` – The server couldn't write to the origin.
     + `OriginZeroSizeObjectError` – A zero size object sent from the origin resulted in an error.
     + `SlowReaderOriginError` – The viewer was slow to read the message that caused the origin error.

1. **`sc-content-type`**

   The value of the HTTP `Content-Type` header of the response.

1. **`sc-content-len`**

   The value of the HTTP `Content-Length` header of the response.

1. **`sc-range-start`**

   When the response contains the HTTP `Content-Range` header, this field contains the range start value.

1. **`sc-range-end`**

   When the response contains the HTTP `Content-Range` header, this field contains the range end value.

1. **`distribution-tenant-id`**

   The ID of the distribution tenant.

1. **`connection-id`**

   A unique identifier for the TLS connection. 

   You must enable mTLS for your distributions before you can get information for this field. For more information, see [Mutual TLS authentication with CloudFront](mtls-authentication.md).
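
The result-type classifications above lend themselves to simple aggregate metrics. As a sketch, the following computes a cache hit ratio from a list of `x-edge-result-type` values; which values count as a "hit" is a judgment call, and here only `Hit` and `RefreshHit` are counted.

```python
from collections import Counter

# Which x-edge-result-type values to treat as cache hits is a choice;
# this sketch counts Hit and RefreshHit.
HIT_TYPES = {"Hit", "RefreshHit"}

def cache_hit_ratio(result_types):
    """Fraction of requests served from the cache."""
    counts = Counter(result_types)
    total = sum(counts.values())
    hits = sum(counts[t] for t in HIT_TYPES)
    return hits / total if total else 0.0

sample = ["Hit", "Miss", "Hit", "RefreshHit", "Error"]
print(f"{cache_hit_ratio(sample):.0%}")  # 60%
```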

   

The following is an example log file for a distribution.

```
#Version: 1.0
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type cs-protocol-version fle-status fle-encrypted-fields c-port time-to-first-byte x-edge-detailed-result-type sc-content-type sc-content-len sc-range-start sc-range-end
2019-12-04	21:02:31	LAX1	392	192.0.2.100	GET	d111111abcdef8.cloudfront.net	/index.html	200	-	Mozilla/5.0%20(Windows%20NT%2010.0;%20Win64;%20x64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/78.0.3904.108%20Safari/537.36	-	-	Hit	SOX4xwn4XV6Q4rgb7XiVGOHms_BGlTAC4KyHmureZmBNrjGdRLiNIQ==	d111111abcdef8.cloudfront.net	https	23	0.001	-	TLSv1.2	ECDHE-RSA-AES128-GCM-SHA256	Hit	HTTP/2.0	-	-	11040	0.001	Hit	text/html	78	-	-
2019-12-04	21:02:31	LAX1	392	192.0.2.100	GET	d111111abcdef8.cloudfront.net	/index.html	200	-	Mozilla/5.0%20(Windows%20NT%2010.0;%20Win64;%20x64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/78.0.3904.108%20Safari/537.36	-	-	Hit	k6WGMNkEzR5BEM_SaF47gjtX9zBDO2m349OY2an0QPEaUum1ZOLrow==	d111111abcdef8.cloudfront.net	https	23	0.000	-	TLSv1.2	ECDHE-RSA-AES128-GCM-SHA256	Hit	HTTP/2.0	-	-	11040	0.000	Hit	text/html	78	-	-
2019-12-04	21:02:31	LAX1	392	192.0.2.100	GET	d111111abcdef8.cloudfront.net	/index.html	200	-	Mozilla/5.0%20(Windows%20NT%2010.0;%20Win64;%20x64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/78.0.3904.108%20Safari/537.36	-	-	Hit	f37nTMVvnKvV2ZSvEsivup_c2kZ7VXzYdjC-GUQZ5qNs-89BlWazbw==	d111111abcdef8.cloudfront.net	https	23	0.001	-	TLSv1.2	ECDHE-RSA-AES128-GCM-SHA256	Hit	HTTP/2.0	-	-	11040	0.001	Hit	text/html	78	-	-	
2019-12-13	22:36:27	SEA19-C1	900	192.0.2.200	GET	d111111abcdef8.cloudfront.net	/favicon.ico	502	http://www.example.com/	Mozilla/5.0%20(Windows%20NT%2010.0;%20Win64;%20x64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/78.0.3904.108%20Safari/537.36	-	-	Error	1pkpNfBQ39sYMnjjUQjmH2w1wdJnbHYTbag21o_3OfcQgPzdL2RSSQ==	www.example.com	http	675	0.102	-	-	-	Error	HTTP/1.1	-	-	25260	0.102	OriginDnsError	text/html	507	-	-
2019-12-13	22:36:26	SEA19-C1	900	192.0.2.200	GET	d111111abcdef8.cloudfront.net	/	502	-	Mozilla/5.0%20(Windows%20NT%2010.0;%20Win64;%20x64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/78.0.3904.108%20Safari/537.36	-	-	Error	3AqrZGCnF_g0-5KOvfA7c9XLcf4YGvMFSeFdIetR1N_2y8jSis8Zxg==	www.example.com	http	735	0.107	-	-	-	Error	HTTP/1.1	-	-	3802	0.107	OriginDnsError	text/html	507	-	-
2019-12-13	22:37:02	SEA19-C2	900	192.0.2.200	GET	d111111abcdef8.cloudfront.net	/	502	-	curl/7.55.1	-	-	Error	kBkDzGnceVtWHqSCqBUqtA_cEs2T3tFUBbnBNkB9El_uVRhHgcZfcw==	www.example.com	http	387	0.103	-	-	-	Error	HTTP/1.1	-	-	12644	0.103	OriginDnsError	text/html	507	-	-
```
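
A log file like the example above can be parsed mechanically: records are tab-separated, lines beginning with `#` are directives, and the `#Fields:` directive names the columns in order. A minimal sketch:

```python
import urllib.parse

def parse_access_log(lines):
    """Parse access log lines into dicts keyed by field name.

    Records are tab-separated; lines starting with '#' are directives,
    and the '#Fields:' directive names the columns in order.
    """
    fields, records = [], []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("#Fields:"):
            fields = line[len("#Fields:"):].split()
        elif line and not line.startswith("#"):
            record = dict(zip(fields, line.split("\t")))
            # Some values, such as cs(User-Agent), are URL-encoded in
            # the log; decode them for readability.
            if "cs(User-Agent)" in record:
                record["cs(User-Agent)"] = urllib.parse.unquote(
                    record["cs(User-Agent)"]
                )
            records.append(record)
    return records

sample = [
    "#Version: 1.0\n",
    "#Fields: date time x-edge-location sc-status\n",
    "2019-12-04\t21:02:31\tLAX1\t200\n",
]
print(parse_access_log(sample)[0]["sc-status"])  # 200
```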

## Analyze logs
<a name="access-logs-analyzing"></a>

Because you can receive multiple access logs per hour, we recommend that you combine all the log files that you receive for a given time period into one file. You can then analyze the data for that period more accurately and completely.
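
CloudFront saves access log files in gzip-compressed form. A minimal sketch of combining them, assuming the files have already been downloaded locally (for example, with `aws s3 sync`):

```python
import glob
import gzip

def combine_logs(pattern):
    """Merge records from multiple gzipped access log files into one
    chronologically sorted list of tab-separated records."""
    records = []
    for path in sorted(glob.glob(pattern)):
        with gzip.open(path, "rt") as f:
            for line in f:
                # Skip directive lines such as #Version and #Fields.
                if not line.startswith("#"):
                    records.append(line.rstrip("\n"))
    # The date and time fields are the first two tab-separated columns,
    # in UTC, so a lexicographic sort orders records chronologically.
    records.sort(key=lambda r: r.split("\t")[:2])
    return records
```

For example, `combine_logs("logs/*.gz")` returns one list of records for every log file in the `logs` directory (a hypothetical local path), ready to be written out or analyzed as a single file.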

One way to analyze your access logs is to use [Amazon Athena](https://aws.amazon.com/athena/). Athena is an interactive query service that can help you analyze data for AWS services, including CloudFront. To learn more, see [Querying Amazon CloudFront Logs](https://docs.aws.amazon.com/athena/latest/ug/cloudfront-logs.html) in the *Amazon Athena User Guide*.

In addition, the following AWS blog posts discuss some ways to analyze access logs.
+ [Amazon CloudFront Request Logging](https://aws.amazon.com/blogs/aws/amazon-cloudfront-request-logging/) (for content delivered via HTTP)
+ [Enhanced CloudFront Logs, Now With Query Strings](https://aws.amazon.com/blogs/aws/enhanced-cloudfront-logs-now-with-query-strings/)