

# Creating data exports
<a name="dataexports-create"></a>

You can use the **Data Exports** page in the Billing and Cost Management console to create data exports of three different types: standard exports, cost and usage dashboard exports, and legacy exports.

The following limits apply to the number of exports you can create per table:
+ **Cost and Usage Report 2.0 (CUR 2.0)**: 5 exports
+ **Cost optimization recommendations**: 2 exports
+ **FOCUS 1.0 with AWS columns**: 2 exports
+ **FOCUS 1.2 with AWS columns**: 2 exports
+ **Cost and usage dashboard**: 2 exports
+ **Carbon emissions**: 2 exports

For more information, see [Quotas and restrictions](https://docs.aws.amazon.com/cur/latest/userguide/dataexports-quotas.html).

Set up an export in minutes by either creating an export in the console and selecting the table you want to export, or creating an export in the AWS SDK/CLI and defining an SQL query of column selections and row filters from the data table you want.

When creating an export in the console, you can create an Amazon S3 bucket for your data export storage. When creating an export in the AWS SDK/CLI, you need to create an Amazon S3 bucket with the correct bucket policy in advance. For more information, see [Setting up an Amazon S3 bucket for data exports](https://docs.aws.amazon.com/cur/latest/userguide/dataexports-s3-bucket.html).

Once you create a new data export, Data Exports starts to export the data to the Amazon S3 bucket.

**Note**  
It can take up to 24 hours for AWS to start delivering exports to your Amazon S3 bucket. Once delivery starts, AWS refreshes the billing and cost management export output at least once a day and the carbon emissions export output at least once a month in your S3 bucket. The actual refresh rate may be different due to various factors.

**Topics**
+ [Setting up an Amazon S3 bucket for data exports](dataexports-s3-bucket.md)
+ [Creating a standard export](dataexports-create-standard.md)
+ [Creating a cost and usage dashboard](dataexports-create-dashboard.md)
+ [Creating a Legacy CUR export](dataexports-create-legacy.md)
+ [Creating exports with billing views](dataexports-create-billing-view.md)
+ [Data query–SQL query and table configurations](dataexports-data-query.md)
+ [Configuring Cost and Usage Reports 2.0 using AWS Billing Conductor](dataexports-create-abc.md)

# Setting up an Amazon S3 bucket for data exports
<a name="dataexports-s3-bucket"></a>

To receive and store your data exports, you must have an Amazon S3 bucket in your AWS account or in a designated destination AWS account. When creating an export in the console, if you want the export in your own bucket, you can select an existing S3 bucket that you own, or you can create a new bucket. In either case, you need to review and confirm the application of the following default S3 bucket policy. If you want your export to be delivered to a bucket owned by another AWS account, you can specify the bucket owner and the bucket name during the Data Exports creation process. Editing the bucket policy or changing the S3 bucket owner after you’ve created an export may prevent Data Exports from delivering your exports. Storing the export data in any S3 bucket is billed at standard Amazon S3 rates. For more information, see [Quotas and restrictions](https://docs.aws.amazon.com/cur/latest/userguide/dataexports-quotas.html).

The following policy must be applied to every S3 bucket, whether owned by you or a different AWS account, when creating a data export:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EnableAWSDataExportsToWriteToS3",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "bcm-data-exports.amazonaws.com"
        ]
      },
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::{bucket-name}/*",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:bcm-data-exports:us-east-1:{source-account-id}:export/*"
        },
        "StringEquals": {
          "aws:SourceAccount": "{source-account-id}"
        }
      }
    }
  ]
}
```

This S3 bucket policy ensures that Data Exports can only deliver exports to the S3 bucket on behalf of the account that created the export. It also allows Data Exports to verify that the S3 bucket is still owned by the account specified during export creation.
+ To deliver exports to your S3 bucket, AWS needs write permissions for that S3 bucket. To do this, the S3 bucket policy grants the Data Exports service (`bcm-data-exports.amazonaws.com`) permission to deliver (`s3:PutObject`) reports to the S3 bucket you own (`arn:aws:s3:::<EXAMPLE-BUCKET>/*`).
+ Every time Data Exports makes the request to write to the S3 bucket, it must provide the account ID of the account that created the export. The condition keys `aws:SourceArn` and `aws:SourceAccount` enforce this.
+ This S3 bucket policy does not give AWS permissions to read or delete any objects in your S3 bucket, including the Cost and Usage Reports after they’ve been delivered.
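As a sketch, the policy template can be filled in programmatically before you apply it. The bucket name and account ID below are placeholders; the resulting JSON matches the default policy shown earlier.

```python
import json

# Placeholder values; replace with your bucket name and the account that creates the export.
bucket_name = "example-export-bucket"
source_account_id = "111122223333"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EnableAWSDataExportsToWriteToS3",
            "Effect": "Allow",
            "Principal": {"Service": ["bcm-data-exports.amazonaws.com"]},
            "Action": ["s3:PutObject"],
            # Grant write access only to objects in this bucket.
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
            "Condition": {
                # Only honor requests made on behalf of exports in this account.
                "ArnLike": {
                    "aws:SourceArn": (
                        f"arn:aws:bcm-data-exports:us-east-1:{source_account_id}:export/*"
                    )
                },
                "StringEquals": {"aws:SourceAccount": source_account_id},
            },
        }
    ],
}

# Serialized form, ready for s3.put_bucket_policy(Bucket=..., Policy=policy_json)
# in boto3 or `aws s3api put-bucket-policy` in the CLI.
policy_json = json.dumps(policy, indent=2)
print(policy_json)
```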

For an Amazon S3 bucket that has access control list (ACL) enabled, Data Exports applies a `BucketOwnerFullControl` ACL to the reports when delivering them. By default, Amazon S3 objects, such as these reports, can only be read by the user or service principal who wrote them. To provide you or the S3 bucket owner with permission to read the reports, AWS needs to apply the `BucketOwnerFullControl` ACL. The ACL grants the S3 bucket owner `Permission.FullControl` for these reports. However, it’s recommended to disable ACL and use an S3 bucket policy to control access.

**Note**  
For newly-created S3 buckets, ACLs are disabled by default. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html).

If you see an **Invalid bucket** error in the **Data Exports** console page, verify that the policy and S3 bucket ownership haven’t changed since report setup.

# Creating a standard export
<a name="dataexports-create-standard"></a>

You can create a standard data export that you can analyze using other processing tools (Amazon Athena, for example).

**To create a standard data export**

1. Open the Billing and Cost Management console at [https://console.aws.amazon.com/costmanagement/](https://console.aws.amazon.com/costmanagement/).

1. In the navigation pane, choose **Data Exports**.

1. Choose **Create export**.

1. On the **Create export** page, under **Export type**, choose **Standard data export**.

1. For **Export name**, enter a name for your export.

   Export names can have up to 128 characters and must be unique. Valid characters are a-z, A-Z, 0-9, - (hyphen), and _ (underscore).

1. Under **Data table configurations**, you can specify the table and columns to be contained within your export. First, select the table you want to export.
**Note**  
Exporting the Cost optimization recommendations table requires a service-linked role. For more information, see [Service-linked roles for Data Exports](https://docs.aws.amazon.com/cost-management/latest/userguide/data-exports-SLR.html).  
Exporting the Carbon emissions table requires the IAM permission `sustainability:GetCarbonFootprintSummary` to access the carbon footprint data.

   Except for FOCUS with AWS columns and Carbon emissions, each table has its own table configurations for adding data to your export.

   1. For **CUR 2.0**:

      1. Select **Include resource IDs** to include the IDs of each individual resource in the export.
**Note**  
Including resource IDs creates individual line items for each of your resources. This might increase the size of your export significantly, based on your AWS usage.  
Selecting resource ID will add a Tag column containing data about users, accounts, cost categories, and resources when you create a new report. You can deselect the columns to avoid redundant information.

      1. Select **Split cost allocation data** to include detailed cost and usage for shared resources (Amazon ECS and Amazon EKS).
**Note**  
Including split cost allocation data creates individual line items for each of your resources (that is, ECS tasks and Kubernetes pods). This might increase the size of your Cost and Usage Report significantly, based on your AWS usage.

      1. Select **Include Capacity reservation data** to include the Capacity reservation columns and row-level granularity in the export.
**Note**  
Including Capacity reservation data creates 3 new columns and can split the instance line items, based on your AWS usage.

      1. Select **Enable manual discount format** to convert your discounts so that they appear in the Cost and Usage Report in the manual discount format instead of the standard automated format.
**Note**  
This option only appears if you are on the discount automation program.

      1. For **Time granularity**, choose hourly, daily, or monthly to have the line items in the export aggregated by that time granularity.

   1. For **FOCUS with AWS columns**, there are no table configurations.

   1. For **Carbon emissions**, there are no table configurations.

   1. For **Cost optimization recommendations**:

      1. Select **Include all recommendations** to keep recommendations that are incompatible with one another. Otherwise, among incompatible recommendations, only the one with the highest savings value is included.

      1. Add **Recommendation filters** if you want certain types of recommendations to be filtered out before incompatible recommendations are removed.
**Note**  
If you specified these settings in the Cost Optimization Hub console, they will be carried over to Data Exports when you choose **Create an export** in Cost Optimization Hub.

1. For **Column selection**, select the columns you want to include in your export. If unsure, select all columns by selecting the first check box at the top of the table. Selecting more columns may increase the file size of your export.

1. Under **Data table delivery options**, for **Data export refresh cadence**:
   + For billing and cost management data exports, the only option available is **Daily - export is refreshed up to one time per day**.
   + For carbon emissions data exports, the only option available is **Monthly - export is refreshed once per month**. Each update provides the carbon emissions data from the previous month (for example, a February update contains January data).

1. For **Compression type and file format**, choose between the following for your export:
   + Parquet – Parquet
   + gzip – text/csv

1. For **File versioning**, choose between the following options, which determine whether your export is overwritten with each update:
   + **Overwrite existing data export file**: Each export refresh overwrites the previous delivery within the data partition (for example, billing periods). Overwriting exports can save on Amazon S3 storage costs.
**Note**  
Overwrite is not supported for exports of cost optimization recommendations.
   + **Create new data export file**: Each export refresh is written to a separate directory, even for deliveries of the same partition (for example, billing period). Creating new export versions allows you to track the changes in cost and usage data over time.

1. Under **Data export storage settings**, choose whether you want your export delivered to the S3 bucket of:
   + This account
   + Another account

1. If you choose **This account** for **S3 bucket name**, choose **Configure** and do one of the following:
   + Select existing bucket.
   + Choose **Create a bucket**, enter an **S3 bucket name**, and then choose the **Region** where you want to create a new bucket.
   + Review the **Bucket policy**. If you are selecting an existing bucket, you need to acknowledge that Data Exports will overwrite your existing S3 bucket policy. The new policy will allow both CUR and Data Exports to deliver exports.

1. If you choose **Another account**, enter the **S3 bucket** name, the **S3 bucket owner** (the AWS account that owns the bucket), and the **Region**.

1. For **S3 path prefix**, enter a name for the directory that will be created in your S3 bucket to store all the export data.
**Note**  
If your export is delivered to the S3 bucket of another account, we recommend using an S3 path prefix that is unique to your account, to prevent multiple accounts with an identical path prefix and report name from accidentally overwriting one another's data.

1. Under **Tags**, you can choose to add up to 50 tags in order to search and filter your resources or track your AWS costs.
**Note**  
Adding tags is optional.

1. Choose **Create** to complete the creation of your export.
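For automation, the console steps above correspond to a single `CreateExport` call in the AWS SDK/CLI. The following is a sketch of the request body, assuming the boto3 `bcm-data-exports` client; the export name, bucket, columns, and configuration values are placeholders, and the exact field names should be confirmed against the BCM Data Exports API reference.

```python
import json

# Placeholder names; the structure follows the BCM Data Exports CreateExport API.
export_request = {
    "Export": {
        "Name": "my-cur2-export",
        "DataQuery": {
            # Column selection and row filtering are expressed as SQL.
            "QueryStatement": (
                "SELECT line_item_usage_account_id, line_item_unblended_cost "
                "FROM COST_AND_USAGE_REPORT"
            ),
            # Table configurations mirror the console choices (granularity, resource IDs).
            "TableConfigurations": {
                "COST_AND_USAGE_REPORT": {
                    "TIME_GRANULARITY": "DAILY",
                    "INCLUDE_RESOURCES": "TRUE",
                }
            },
        },
        "DestinationConfigurations": {
            "S3Destination": {
                "S3Bucket": "example-export-bucket",
                "S3Prefix": "exports",
                "S3Region": "us-east-1",
                "S3OutputConfigurations": {
                    "OutputType": "CUSTOM",
                    "Format": "PARQUET",
                    "Compression": "PARQUET",
                    "Overwrite": "OVERWRITE_REPORT",
                },
            }
        },
        "RefreshCadence": {"Frequency": "SYNCHRONOUS"},
    }
}

# To submit the request (requires credentials and a bucket with the required policy):
# import boto3
# boto3.client("bcm-data-exports").create_export(**export_request)
print(json.dumps(export_request, indent=2))
```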

# Creating a cost and usage dashboard
<a name="dataexports-create-dashboard"></a>

You can visualize your billing and cost management data by deploying a pre-built Cost and Usage Dashboard powered by Amazon QuickSight.

**To create a cost and usage dashboard**

1. Open the Billing and Cost Management console at [https://console.aws.amazon.com/costmanagement/](https://console.aws.amazon.com/costmanagement/).

1. In the navigation pane, choose **Data Exports**.

1. On the **Data Exports** page, choose either **Create** or the **Cost and usage dashboard** tile.

1. On the **Create** page, under **Export type**, choose **Cost and usage dashboard powered by QuickSight**.

1. For **Export name**, enter a name for your dashboard.

   Export names can have up to 128 characters and must be unique. Valid characters are a-z, A-Z, 0-9, - (hyphen), and _ (underscore).

1. For **QuickSight dashboard settings**, your QuickSight account details, such as **account name**, **account ID**, **account edition**, and **authentication method**, are automatically populated.

   1. If the QuickSight account details don't populate automatically, choose **Create account** to sign up if you're new to QuickSight, or log in to your QuickSight account if you're an existing QuickSight customer.

   1. Once you successfully create or log in to your QuickSight account, you'll see a success message. Close the window and return to **Data Exports**.

   1. Under **QuickSight dashboard settings**, choose **Refresh**.
**Note**  
This feature requires [Enterprise Edition](https://aws.amazon.com/quicksight/pricing/).

1. For **QuickSight namespace**, enter your [namespace](https://docs.aws.amazon.com/quicksight/latest/user/namespaces.html).

1. For **QuickSight username**, enter the details for the user who has permissions to access the QuickSight dashboard.

1. For **QuickSight region**, choose the AWS Region where you want to create the QuickSight dashboard.

1. The **Data table content settings** and **Data table delivery options** are preset and can't be edited.

1. Under **Data export storage settings**, for **S3 bucket** name, choose **Configure**.

1. In the **Configure S3 bucket** dialog box, do one of the following:
   + Select existing bucket.
   + Choose **Create a bucket**, enter an **S3 bucket name**, and then choose the **Region** where you want to create a new bucket.

1. Review the **Bucket policy**, and then choose **Create bucket**.

1. For **S3 path prefix**, enter the S3 path prefix that you want prepended to the name of your export.

1. Under **Service access**, choose a method to authorize QuickSight:
   + Create a new service role (default)
   + Use an existing service role

1. Under **Tags**, you can choose to add up to 50 tags in order to search and filter your resources or track your AWS costs.
**Note**  
Adding tags is optional.

1. Choose **Create**.

You can always return to the **Data Exports** page of the AWS Billing and Cost Management console to see when your Cost and Usage Dashboard was last updated.

# Creating a Legacy CUR export
<a name="dataexports-create-legacy"></a>

You can create a data export of your legacy Cost and Usage Report (CUR). This workflow uses the legacy `cur` APIs and doesn't allow you to use SQL to define your export contents. CUR 2.0, with its additional columns and SQL access, is only available as a standard data export.

**To create a legacy data export**

1. Open the Billing and Cost Management console at [https://console.aws.amazon.com/costmanagement/](https://console.aws.amazon.com/costmanagement/).

1. In the navigation pane, choose **Data Exports**.

1. Choose **Create**.

1. On the **Create** page, under **Export type**, choose **Legacy CUR export**.

1. For **Export name**, enter a name for your export.

1. Under **Export content**, select the data to include in your CUR export.
   + For **Additional export content**, select **Include resource IDs** to include the IDs of each individual resource in the export.
**Note**  
Including resource IDs creates individual line items for each of your resources. This might increase the size of your export significantly, based on your AWS usage.
   + Select **Split cost allocation data** to include detailed cost and usage for shared resources (Amazon ECS and Amazon EKS).
**Note**  
Including split cost allocation data creates individual line items for each of your resources (that is, ECS tasks and Kubernetes pods). This might increase the size of your Cost and Usage Report significantly, based on your AWS usage.
   + Select **Enable manual discount format** to convert your discounts so that they appear in the Cost and Usage Report in the manual discount format instead of the standard automated format.
**Note**  
This is only available for customers on Discount Automation.

1. Under **Data table delivery options**, for **Time granularity**, choose one of the following:
   + **Hourly** if you want the line items in the export to be aggregated by the hour.
   + **Daily** if you want the line items in the export to be aggregated by the day.
   + **Monthly** if you want the line items in the export to be aggregated by month.

1. For **Report versioning**, choose between the following:
   + **Create new report version**: Each report refresh will be written to a separate directory, even for deliveries of the same billing period. Choose this to improve the ability to audit your exports over time.
   + **Overwrite existing report**: Each report refresh will overwrite the previous delivery within the same billing period. Deliveries for new billing periods will be delivered as new files and directories. Choose this to save on Amazon S3 storage costs.

1. For **Report data integration**, choose whether you want to enable your Cost and Usage Reports to integrate with Amazon Athena, Amazon Redshift, or Amazon QuickSight. Choosing an integration sets the delivery options that are optimal for that service:
   + **Amazon Athena**: Selects the delivery options optimal for Amazon Athena which are Parquet file format and overwrite existing report. Also delivers a script that can be used to set up the integration.
   + **Amazon Redshift**: Selects the delivery option optimal for Amazon Redshift which is gzip/csv file format. Also delivers a script that can be used to set up the integration.
   + **Amazon QuickSight**: Selects the delivery option optimal for Amazon QuickSight which is gzip/csv file format.

1. For **Compression type and file format**, choose between the following:
   + Parquet – Parquet
   + gzip – text/csv
   + zip – text/csv

1. Under **Data export storage settings**, for **S3 bucket** name, choose **Configure**.

1. In the **Configure S3 bucket** dialog box, do one of the following:
   + Select existing bucket.
   + Choose **Create a bucket**, enter an **S3 bucket name**, and then choose the **Region** where you want to create a new bucket.

1. Review the **Bucket policy**, and then choose **Create bucket**.

1. For **S3 path prefix**, enter the S3 path prefix that you want prepended to the name of your export.

1. Under **Tags**, you can choose to add up to 50 tags in order to search and filter your resources or track your AWS costs.
**Note**  
Adding tags is optional.

1. Choose **Create report**.
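The same legacy export can also be defined programmatically through the legacy `cur` API (`PutReportDefinition`). The following is a sketch assuming the boto3 `cur` client; the report name, bucket, and option values are placeholders, and the allowed values should be checked against the `cur` API reference.

```python
# Placeholder values; field names follow the legacy cur PutReportDefinition API.
report_definition = {
    "ReportName": "my-legacy-cur",
    "TimeUnit": "DAILY",                        # HOURLY | DAILY | MONTHLY
    "Format": "Parquet",                        # or textORcsv
    "Compression": "Parquet",                   # or GZIP / ZIP for textORcsv
    "AdditionalSchemaElements": ["RESOURCES"],  # include resource IDs
    "S3Bucket": "example-export-bucket",
    "S3Prefix": "legacy-cur",
    "S3Region": "us-east-1",
    "AdditionalArtifacts": ["ATHENA"],          # optional report data integration
    "RefreshClosedReports": True,
    "ReportVersioning": "OVERWRITE_REPORT",     # Athena integration requires overwrite
}

# To create the report (requires credentials; the legacy cur API is served
# only from the us-east-1 endpoint):
# import boto3
# boto3.client("cur", region_name="us-east-1").put_report_definition(
#     ReportDefinition=report_definition)
print(report_definition["ReportName"])
```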

# Creating exports with billing views
<a name="dataexports-create-billing-view"></a>

When you sign in as a bill transfer account using billing transfer, or as a management account using AWS Billing Conductor, you can create an export based on your AWS managed billing views (billing groups and billing transfer views).

**Important**  
Custom billing views aren't supported.
You can create billing view-based reports only from the Data Exports page. The legacy Cost and Usage Reports page doesn't support creating reports based on billing views.

You can create reports based on billing views whether billing view mode is enabled or disabled, because reports are resources of your account.

**To create a report based on billing views**

1. Open the Billing and Cost Management console at [https://console.aws.amazon.com/costmanagement/](https://console.aws.amazon.com/costmanagement/).

1. In the navigation pane, choose **Data Exports**.

1. Choose **Create report**.

1. Choose the billing view type (managed views only).

1. Choose the specific view for your report.

1. Complete the remaining steps to create your report.

**Note**  
When creating a report based on a billing transfer showback/chargeback view or billing group view, you must disable the Split Cost Allocation Data functionality.

For more information about Data Exports for billing transfer use cases, see [billing transfer best practices](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/orgs_transfer_billing-best-practices.html).

# Data query–SQL query and table configurations
<a name="dataexports-data-query"></a>

Data Exports enables you to write SQL queries (column selections, row filtering, column aliasing) that are executed against the tables provided (CUR 2.0, for example). Each table might also have table configurations that alter the data contained within the table. For example, with CUR 2.0, you can specify a configuration to choose a time granularity of hourly, daily, or monthly, or a configuration to add cost and usage data at resource-level granularity.

For an export data query to be fully defined, you must specify the following two attributes:
+ **SQL query:** The SQL statement is executed against a table and determines what data is returned by the export.
+ **Table configurations:** The table configuration settings change what data is contained within the table before the SQL query is executed against it.

In the **Data Exports** console page, you can use the workflow that builds the SQL statement and table configurations based on your selections. In the Data Exports SDK/CLI, you can write your own SQL statement and table configurations.

Data Exports SQL statements (`QueryStatement`) use the following syntax:

```
SELECT <column_name_a>, <column_name_b>.<attribute_name> AS <new_name>, ... 
FROM <TABLE_NAME>
[ WHERE <column_name> OPERATOR <value> AND|OR ... ]
[ LIMIT number ]
```

Data Exports table configurations (`TableConfigurations`) use the following syntax:

```
{"<TABLE_NAME>":
    {"<CONFIGURATION_NAME_A>": "<value>",
     "<CONFIGURATION_NAME_B>": "<value>", 
     ...}
            }
```

## SQL query
<a name="dataexports-sql-query"></a>

The SQL query is executed against a table and determines what data is returned in an export. The SQL statement can be altered after an export has been created, but the table selected can't be changed.

SQL statements (in the QueryStatement field) can have a maximum of 36,000 characters.

The possible keywords in a Data Exports SQL query are as follows.

**Note**  
The keywords are not case-sensitive. The column names and table names are case-sensitive.

**SELECT**  
Required.  
Specifies which columns are to be selected from the table. There can only be one SELECT statement per query.  
Use the dot operator `.` to specify selecting an attribute of a MAP or STRUCT column as a separate column. The name of the resulting column in the SQL output is the attribute name by default.  
For example, you can select attributes from the product MAP column.  
`SELECT product.from_location FROM COST_AND_USAGE_REPORT`  
This selects the `from_location` attribute from the `product` column and creates a new column with the attribute’s data. By default, in the output, this column’s name will be `from_location`. However, it can be renamed with `AS`.  
For more information on the MAP and STRUCT columns available in each table, and the attributes these columns have, see the [Data Exports table dictionary](https://docs.aws.amazon.com/cur/latest/userguide/dataexports-table-dictionary.html).

**AS**  
Optional.  
Enables renaming of the column being selected. The new column name can't have spaces or characters other than alphanumeric characters (a-z, A-Z, and 0-9) and underscores ( _ ). You can't use quotes to include other characters in the column alias.  
Aliasing can be useful when selecting an attribute of a MAP or STRUCT column to rename the resulting column to match the schema of the CUR. For example, to match how the CUR shows the `product_from_location` column, write the following query in Data Exports with the CUR 2.0 table.  
`SELECT product.from_location AS product_from_location FROM COST_AND_USAGE_REPORT`  
This creates an export with a column named `product_from_location`.

**FROM**  
Required.  
Specifies the table to be queried. There can only be one FROM statement per query.

**WHERE**  
Optional.  
Filters the rows to only those that match your specified clause.  
The WHERE clause supports the following operators:  
+ **=** Value must match the string or number.
+ **!= and <>** Value must not match the specified string or number.
+ **<, <=, >,** and **>=** Value must be less than, less than or equal to, greater than, or greater than or equal to the number.
+ **AND** Both conditions that are specified must be true to match. You can use multiple **AND** keywords to specify two or more conditions.
+ **OR** Either of the specified conditions must be true to match. You can use multiple **OR** keywords to specify two or more conditions.
+ **NOT** The condition specified must not be true to match.
+ **IN** Any of the values specified within the parentheses after the keyword must be true to match.
+ Parentheses can be used to construct multi-conditional WHERE clauses.
When expressing strings as the value following an operator, use single quotes `'` instead of double quotes. You don't need to escape the single quotes. For example, you can write the following WHERE statement:  
`WHERE line_item_type = 'Discount' OR line_item_type = 'Usage'`

**LIMIT**  
Optional.  
Limits the number of rows returned by the query to the value that you specify.
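Putting the keywords together, the following sketch assembles a query statement client-side and checks it against the 36,000-character limit noted earlier. The helper function and column names are illustrative, not part of the Data Exports API.

```python
MAX_QUERY_CHARS = 36_000  # limit on the QueryStatement field noted above

def build_query(columns, table, where=None, limit=None):
    """Assemble a Data Exports SQL statement from its parts."""
    query = f"SELECT {', '.join(columns)} FROM {table}"
    if where:
        query += f" WHERE {where}"
    if limit:
        query += f" LIMIT {limit}"
    if len(query) > MAX_QUERY_CHARS:
        raise ValueError("QueryStatement exceeds the 36,000 character limit")
    return query

# String values use single quotes, as required by the WHERE clause rules.
query = build_query(
    ["line_item_usage_account_id", "line_item_unblended_cost"],
    "COST_AND_USAGE_REPORT",
    where="line_item_type = 'Discount' OR line_item_type = 'Usage'",
    limit=100,
)
print(query)
```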

## Table configurations
<a name="dataexports-table-configurations"></a>

Table configurations are user-controlled properties that a user can set to change the data or schema of a table before it's queried in Data Exports. The table configurations are saved as a JSON statement and are either specified through user input in the AWS SDK/CLI or user selections in the console.

For example, CUR 2.0 has table configurations to change data granularity (hourly, daily, monthly), whether resource-level granular data is included, and whether split cost allocation data is included. Not all tables have configurations. For more information on the configurations available for each table, see the [Data Exports table dictionary](https://docs.aws.amazon.com/cur/latest/userguide/dataexports-table-dictionary.html).

Each table configuration parameter has a default value that is assumed if a table configuration is not specified by the user. Table configurations can't be changed after an export is created.

# Configuring Cost and Usage Reports 2.0 using AWS Billing Conductor
<a name="dataexports-create-abc"></a>

With AWS Billing Conductor, you can create a pro forma AWS Cost and Usage Report (AWS CUR) 2.0 for each billing group. These pro forma reports use the same file format, granularity, and columns as the standard AWS CUR 2.0, providing the most comprehensive cost and usage data available for a given time period.

For more information about AWS Billing Conductor, see the [AWS Billing Conductor User Guide](https://docs.aws.amazon.com/billingconductor/latest/userguide/what-is-billingconductor.html).

**Topics**
+ [Comparing standard and AWS Billing Conductor Cost and Usage Reports](#dataexports-standard-ABC)
+ [Creating pro forma Cost and Usage Reports for a billing group](#dataexports-abc-cur)

## Comparing standard and AWS Billing Conductor Cost and Usage Reports
<a name="dataexports-standard-ABC"></a>

There are a few differences between the standard Cost and Usage Reports and pro forma AWS CUR created using the AWS Billing Conductor configuration.

**Account coverage**
+ Standard AWS CUR – Includes cost and usage data for all accounts in your consolidated billing family
+ Pro forma AWS CUR – Includes only accounts that belong to the specific billing group at the time of report generation

**Invoice handling**
+ Standard AWS CUR – Populates the invoice column after AWS generates an invoice
+ Pro forma AWS CUR – Does not populate the invoice column because AWS does not generate or issue invoices based on pro forma billing data

## Creating pro forma Cost and Usage Reports for a billing group
<a name="dataexports-abc-cur"></a>

Use the following steps to generate a pro forma AWS CUR for a billing group.

**To create pro forma Cost and Usage Reports for a billing group**

1. Open the Billing and Cost Management console at [https://console.aws.amazon.com/costmanagement/](https://console.aws.amazon.com/costmanagement/).

1. In the navigation pane, choose **Data Exports**.

1. Choose **Create**.

1. In the **Export details** section, choose **Standard data export**.

1. For **Export name**, enter a name for your export.

1. Under **Data table content settings**, choose **CUR 2.0**.

1. Under **Data table configurations**, choose **Include resource IDs** to include the IDs of each individual resource in the report.

   **Split cost allocation data** is disabled when pro forma data export is enabled.

1. Choose **Next**.

1. For **S3 bucket**, choose **Configure**.

1. In the **Configure S3 Bucket** dialog box, do one of the following:
   + Choose an existing bucket from the drop down list and choose **Next**.
   + Enter a bucket name and the AWS Region where you want to create a new bucket and choose **Next**.

1. Review the **Bucket policy**, select **I have confirmed that this policy is correct**, and choose **Save**.

1. For **S3 path prefix**, enter the S3 path prefix that you want prepended to the name of your export.

1. For **Time granularity**, choose one of the following:
   + **Hourly** if you want the line items in the report to be aggregated by the hour.
   + **Daily** if you want the line items in the report to be aggregated by the day.
   + **Monthly** if you want the line items in the report to be aggregated by the month.

1. For **Report versioning**, choose whether you want each version of the report to overwrite the previous version of the report, or to be delivered in addition to the previous versions.

   Overwriting reports can save on Amazon S3 storage costs. Delivering new report versions can improve auditability of billing data over time.

1. Choose **Next**.

1. After you have reviewed the settings for your report, choose **Review and Complete**.