

# Logging and monitoring in Amazon S3
<a name="monitoring-overview"></a>

Monitoring is an important part of maintaining the reliability, availability, and performance of Amazon S3 and your AWS solutions. We recommend collecting monitoring data from all of the parts of your AWS solution so that you can more easily debug a multipoint failure if one occurs. Before you start monitoring Amazon S3, create a monitoring plan that includes answers to the following questions:
+ What are your monitoring goals?
+ What resources will you monitor?
+ How often will you monitor these resources?
+ What monitoring tools will you use?
+ Who will perform the monitoring tasks?
+ Who should be notified when something goes wrong?

For more information about logging and monitoring in Amazon S3, see the following topics.

**Note**  
For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see [S3 Express One Zone](directory-bucket-high-performance.md#s3-express-one-zone) and [Working with directory buckets](directory-buckets-overview.md).

Monitoring is an important part of maintaining the reliability, availability, and performance of Amazon S3 and your AWS solutions. You should collect monitoring data from all of the parts of your AWS solution so that you can more easily debug a multi-point failure if one occurs. AWS provides several tools for monitoring your Amazon S3 resources and responding to potential incidents.

**Amazon CloudWatch Alarms**  
Using Amazon CloudWatch alarms, you watch a single metric over a time period that you specify. If the metric exceeds a given threshold, a notification is sent to an Amazon SNS topic or AWS Auto Scaling policy. CloudWatch alarms do not invoke actions because they are in a particular state. Rather the state must have changed and been maintained for a specified number of periods. For more information, see [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md).

**AWS CloudTrail Logs**  
CloudTrail provides a record of actions taken by a user, role, or an AWS service in Amazon S3. Using the information collected by CloudTrail, you can determine the request that was made to Amazon S3, the IP address from which the request was made, who made the request, when it was made, and additional details. For more information, see [Logging Amazon S3 API calls using AWS CloudTrail](cloudtrail-logging.md).

**Amazon GuardDuty**  
[Amazon GuardDuty](https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html) is a threat detection service that continuously monitors your accounts, containers, workloads, and the data within your AWS environment to identify potential threats or security risks to your S3 buckets. GuardDuty also provides rich context about the threats that it detects. GuardDuty monitors AWS CloudTrail management logs for threats and surfaces security relevant information. For example, GuardDuty will include factors of an API request, such as the user that made the request, the location the request was made from, and the specific API requested, that could be unusual in your environment. [GuardDuty S3 Protection](https://docs.aws.amazon.com/guardduty/latest/ug/s3-protection.html) monitors the S3 data events collected by CloudTrail and identifies potentially anomalous and malicious behavior in all the S3 buckets in your environment.

**Amazon S3 Access Logs**  
Server access logs provide detailed records about requests that are made to a bucket. Server access logs are useful for many applications. For example, access log information can be useful in security and access audits. For more information, see [Logging requests with server access logging](ServerLogs.md).

**AWS Trusted Advisor**  
Trusted Advisor draws upon best practices learned from serving hundreds of thousands of AWS customers. Trusted Advisor inspects your AWS environment and then makes recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. All AWS customers have access to five Trusted Advisor checks. Customers with a Business or Enterprise support plan can view all Trusted Advisor checks.   
Trusted Advisor has the following Amazon S3-related checks:  
+ Logging configuration of Amazon S3 buckets.
+ Security checks for Amazon S3 buckets that have open access permissions.
+ Fault tolerance checks for Amazon S3 buckets that don't have versioning enabled, or have versioning suspended.
For more information, see [AWS Trusted Advisor](https://docs.aws.amazon.com/awssupport/latest/user/getting-started.html#trusted-advisor) in the *Support User Guide*.

**Amazon S3 Storage Lens**  
Amazon S3 Storage Lens is a cloud-storage analytics feature that you can use to gain organization-wide visibility into object-storage usage and activity. You can use S3 Storage Lens metrics to generate summary insights, such as finding out how much storage you have across your entire organization or which are the fastest-growing buckets and prefixes. You can also use S3 Storage Lens metrics to identify cost-optimization opportunities, implement data-protection and security best practices, and improve the performance of application workloads.  
S3 Storage Lens aggregates your metrics and displays the information in the Account snapshot section on the Amazon S3 console **Buckets** page. S3 Storage Lens also provides an interactive dashboard that you can use to visualize insights and trends, flag outliers, and receive recommendations for optimizing storage costs and applying data-protection best practices. Your dashboard has drill-down options to generate and visualize insights at the organization, account, AWS Region, storage class, bucket, prefix, or Storage Lens group level. For more information, see [Understanding Amazon S3 Storage Lens](storage_lens_basics_metrics_recommendations.md).

**Amazon S3 Inventory**  
Amazon S3 Inventory generates a list of objects and metadata that you can use to query and manage your objects. You can use this inventory report to generate granular data such as object size, last modified date, encryption status and other fields. Those reports are available daily or weekly to automatically give the latest list.  
For example, you can use Amazon S3 Inventory to audit and report on the replication and encryption status of your objects for business, compliance, and regulatory needs. You can also use Amazon S3 Inventory to simplify and speed up business workflows and big data jobs, which provides a scheduled alternative to the Amazon S3 synchronous `List` API operations. Amazon S3 Inventory doesn't use the `List` API operations to audit your objects and does not affect the request rate of your bucket. For more information, see [Cataloging and analyzing your data with S3 Inventory](storage-inventory.md).

**Amazon S3 Event Notifications**  
With the Amazon S3 Event Notifications feature, you receive notifications when certain events happen in your S3 bucket. To enable notifications, add a notification configuration that identifies the events that you want Amazon S3 to publish. For more information, see [Amazon S3 Event Notifications](EventNotifications.md).

**Amazon S3 and AWS X-Ray**  
AWS X-Ray integrates with Amazon S3 to trace upstream requests to update your application's S3 buckets. If a service traces requests by using the X-Ray SDK, Amazon S3 can send the tracing headers to downstream event subscribers such as Λ, Amazon SQS, and Amazon SNS. X-Ray enables trace messages for Amazon S3 event notifications. You can use the X-Ray trace map to view the connections between Amazon S3 and other services that your application uses. For more information, see [Amazon S3 and X-Ray](https://docs.aws.amazon.com/xray/latest/devguide/xray-services-s3.html).

The following security best practices also address logging and monitoring:
+ [Identify and audit all your Amazon S3 buckets](security-best-practices.md#audit)
+ [Implement monitoring using Amazon Web Services monitoring tools](security-best-practices.md#tools)
+ [Enable AWS Config](security-best-practices.md#config)
+ [Enable Amazon S3 server access logging](security-best-practices.md#serverlog)
+ [Use CloudTrail](security-best-practices.md#objectlog)
+ [Monitor Amazon Web Services security advisories](security-best-practices.md#advisories)

**Topics**
+ [Monitoring tools](monitoring-automated-manual.md)
+ [Logging options for Amazon S3](logging-with-S3.md)
+ [Logging Amazon S3 API calls using AWS CloudTrail](cloudtrail-logging.md)
+ [Logging requests with server access logging](ServerLogs.md)
+ [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md)
+ [Amazon S3 Event Notifications](EventNotifications.md)
+ [Monitoring your storage activity and usage with Amazon S3 Storage Lens](storage_lens.md)
+ [Cataloging and analyzing your data with S3 Inventory](storage-inventory.md)

# Monitoring tools
<a name="monitoring-automated-manual"></a>

AWS provides various tools that you can use to monitor Amazon S3. You can configure some of these tools to do the monitoring for you, while some of the tools require manual intervention. We recommend that you automate monitoring tasks as much as possible.

## Automated monitoring tools
<a name="monitoring-automated_tools"></a>

You can use the following automated monitoring tools to watch Amazon S3 and report when something is wrong:
+ **Amazon CloudWatch Alarms** – Watch a single metric over a time period that you specify, and perform one or more actions based on the value of the metric relative to a given threshold over a number of time periods. The action is a notification sent to an Amazon Simple Notification Service (Amazon SNS) topic or Amazon EC2 Auto Scaling policy. CloudWatch alarms do not invoke actions simply because they are in a particular state. The state must have changed and been maintained for a specified number of periods. For more information, see [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md).
+ **AWS CloudTrail Log Monitoring** – Share log files between accounts, monitor CloudTrail log files in real time by sending them to CloudWatch Logs, write log processing applications in Java, and validate that your log files have not changed after delivery by CloudTrail. For more information, see [Logging Amazon S3 API calls using AWS CloudTrail](cloudtrail-logging.md).

## Manual monitoring tools
<a name="monitoring-manual-tools"></a>

Another important part of monitoring Amazon S3 involves manually monitoring those items that the CloudWatch alarms don't cover. The Amazon S3, CloudWatch, Trusted Advisor, and other AWS Management Console dashboards provide an at-a-glance view of the state of your AWS environment. You might want to enable *server access logging*, which tracks requests for access to your bucket. Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and error code, if any. For more information, see [Logging requests with server access logging](ServerLogs.md).
+ The Amazon S3 dashboard shows the following:
  + Your buckets and the objects and properties they contain
+ The CloudWatch home page shows the following:
  + Current alarms and status
  + Graphs of alarms and resources
  + Service health status

  In addition, you can use CloudWatch to do the following: 
  + Create [customized dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html) to monitor the services you care about.
  + Graph metric data to troubleshoot issues and discover trends.
  + Search and browse all your AWS resource metrics.
  + Create and edit alarms to be notified of problems.
+ AWS Trusted Advisor can help you monitor your AWS resources to improve performance, reliability, security, and cost effectiveness. Four Trusted Advisor checks are available to all users; more than 50 checks are available to users with a Business or Enterprise support plan. For more information, see [AWS Trusted Advisor](https://aws.amazon.com/premiumsupport/trustedadvisor/).

  Trusted Advisor has these checks that relate to Amazon S3: 
  + Checks of the logging configuration of Amazon S3 buckets.
  + Security checks for Amazon S3 buckets that have open access permissions.
  + Fault tolerance checks for Amazon S3 buckets that do not have versioning enabled, or have versioning suspended.

# Logging options for Amazon S3
<a name="logging-with-S3"></a>

You can record the actions that are taken by users, roles, or AWS services on Amazon S3 resources and maintain log records for auditing and compliance purposes. To do this, you can use server-access logging, AWS CloudTrail logging, or a combination of both. We recommend that you use CloudTrail for logging bucket-level and object-level actions for your Amazon S3 resources. For more information about each option, see the following sections:
+ [Logging requests with server access logging](ServerLogs.md)
+ [Logging Amazon S3 API calls using AWS CloudTrail](cloudtrail-logging.md)

The following table lists the key properties of CloudTrail logs and Amazon S3 server-access logs. To make sure that CloudTrail meets your security requirements, review the table and notes.


| Log properties | AWS CloudTrail | Amazon S3 server logs | 
| --- |--- |--- |
|  Can be forwarded to other systems (Amazon CloudWatch Logs, Amazon CloudWatch Events)  |  Yes  | No | 
|  Deliver logs to more than one destination (for example, send the same logs to two different buckets)  |  Yes  | No | 
|  Turn on logs for a subset of objects (prefix)  |  Yes  | No | 
|  Cross-account log delivery (target and source bucket owned by different accounts)  |  Yes  | No | 
|  Integrity validation of log file by using digital signature or hashing  |  Yes  | No | 
|  Default or choice of encryption for log files  |  Yes  | No | 
|  Object operations (by using Amazon S3 APIs)  |  Yes  |  Yes  | 
|  Bucket operations (by using Amazon S3 APIs)  |  Yes  |  Yes  | 
|  Searchable UI for logs  |  Yes  | No | 
|  Fields for Object Lock parameters, Amazon S3 Select properties for log records  |  Yes  | No | 
|  Fields for `Object Size`, `Total Time`, `Turn-Around Time`, and `HTTP Referer` for log records  |  No  |  Yes  | 
|  Lifecycle transitions, expirations, restores  |  No  |  Yes  | 
|  Logging of keys in a batch delete operation  |  Yes  |  Yes  | 
|  Authentication failures1  |  No  |  Yes  | 
|  Accounts where logs get delivered  |  Bucket owner2, and requester  |  Bucket owner only  | 
| **Performance and Cost** | **AWS CloudTrail** | **Amazon S3 Server Logs** | 
| --- |--- |--- |
|  Price  |  Management events (first delivery) are free; data events incur a fee, in addition to storage of logs  |  No other cost in addition to storage of logs  | 
|  Speed of log delivery  |  Data events every 5 minutes; management events every 15 minutes  |  Within a few hours  | 
|  Log format  |  JSON  |  Log file with space-separated, newline-delimited records  | 

**Notes**

1. CloudTrail does not deliver logs for requests that fail authentication (in which the provided credentials are not valid) or that fail due to redirection (error code `301 Moved Permanently`). However, it does include logs for requests in which authorization fails (`AccessDenied`) and requests that are made by anonymous users.

1. The S3 bucket owner receives CloudTrail logs when the account does not have full access to the object in the request. For more information, see [Amazon S3 object-level actions in cross-account scenarios](cloudtrail-logging-s3-info.md#cloudtrail-object-level-crossaccount). 

1. S3 does not support delivery of CloudTrail logs or server access logs to the requester or the bucket owner for VPC endpoint requests when the VPC endpoint policy denies them or for requests that fail before the VPC policy is evaluated.

# Logging Amazon S3 API calls using AWS CloudTrail
<a name="cloudtrail-logging"></a>

[AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) is a service that provides a record of actions taken by a user, role, or an AWS service. CloudTrail captures all API calls for Amazon S3 as events. The calls captured include calls from the Amazon S3 console and code calls to the Amazon S3 API operations. Using the information collected by CloudTrail, you can determine the request that was made to Amazon S3, the IP address from which the request was made, when it was made, and additional details.

Every event or log entry contains information about who generated the request. The identity information helps you determine the following:
+ Whether the request was made with root user or user credentials.
+ Whether the request was made on behalf of an IAM Identity Center user.
+ Whether the request was made with temporary security credentials for a role or federated user.
+ Whether the request was made by another AWS service.

CloudTrail is active in your AWS account when you create the account and you automatically have access to the CloudTrail **Event history**. The CloudTrail **Event history** provides a viewable, searchable, downloadable, and immutable record of the past 90 days of recorded management events in an AWS Region. For more information, see [Working with CloudTrail Event history](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events.html) in the *AWS CloudTrail User Guide*. There are no CloudTrail charges for viewing the **Event history**.

For an ongoing record of events in your AWS account past 90 days, create a trail or a [CloudTrail Lake](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-lake.html) event data store.

**CloudTrail trails**  
A *trail* enables CloudTrail to deliver log files to an Amazon S3 bucket. All trails created using the AWS Management Console are multi-Region. You can create a single-Region or a multi-Region trail by using the AWS CLI. Creating a multi-Region trail is recommended because you capture activity in all AWS Regions in your account. If you create a single-Region trail, you can view only the events logged in the trail's AWS Region. For more information about trails, see [Creating a trail for your AWS account](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html) and [Creating a trail for an organization](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html) in the *AWS CloudTrail User Guide*.  
You can deliver one copy of your ongoing management events to your Amazon S3 bucket at no charge from CloudTrail by creating a trail, however, there are Amazon S3 storage charges. For more information about CloudTrail pricing, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/). For information about Amazon S3 pricing, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

**CloudTrail Lake event data stores**  
*CloudTrail Lake* lets you run SQL-based queries on your events. CloudTrail Lake converts existing events in row-based JSON format to [ Apache ORC](https://orc.apache.org/) format. ORC is a columnar storage format that is optimized for fast retrieval of data. Events are aggregated into *event data stores*, which are immutable collections of events based on criteria that you select by applying [advanced event selectors](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-lake-concepts.html#adv-event-selectors). The selectors that you apply to an event data store control which events persist and are available for you to query. For more information about CloudTrail Lake, see [Working with AWS CloudTrail Lake](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-lake.html) in the *AWS CloudTrail User Guide*.  
CloudTrail Lake event data stores and queries incur costs. When you create an event data store, you choose the [pricing option](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-lake-manage-costs.html#cloudtrail-lake-manage-costs-pricing-option) you want to use for the event data store. The pricing option determines the cost for ingesting and storing events, and the default and maximum retention period for the event data store. For more information about CloudTrail pricing, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/).

You can store your log files in your bucket for as long as you want, but you can also define Amazon S3 Lifecycle rules to archive or delete log files automatically. By default, your log files are encrypted by using Amazon S3 server-side encryption (SSE).

## Using CloudTrail logs with Amazon S3 server access logs and CloudWatch Logs
<a name="cloudtrail-logging-vs-server-logs"></a>

AWS CloudTrail logs provide a record of actions taken by a user, role, or an AWS service in Amazon S3, while Amazon S3 server access logs provide detailed records for the requests that are made to an S3 bucket. For more information about how the different logs work, and their properties, performance, and costs, see [Logging options for Amazon S3](logging-with-S3.md). 

You can use AWS CloudTrail logs together with server access logs for Amazon S3. CloudTrail logs provide you with detailed API tracking for Amazon S3 bucket-level and object-level operations. Server access logs for Amazon S3 provide you with visibility into object-level operations on your data in Amazon S3. For more information about server access logs, see [Logging requests with server access logging](ServerLogs.md).

You can also use CloudTrail logs together with Amazon CloudWatch for Amazon S3. CloudTrail integration with CloudWatch Logs delivers S3 bucket-level API activity captured by CloudTrail to a CloudWatch log stream in the CloudWatch log group that you specify. You can create CloudWatch alarms for monitoring specific API activity and receive email notifications when the specific API activity occurs. For more information about CloudWatch alarms for monitoring specific API activity, see the [AWS CloudTrail User Guide](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/). For more information about using CloudWatch with Amazon S3, see [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md).

**Note**  
S3 does not support delivery of CloudTrail logs to the requester or the bucket owner for VPC endpoint requests when the VPC endpoint policy denies them.

## CloudTrail tracking with Amazon S3 SOAP API calls
<a name="cloudtrail-s3-soap"></a>

CloudTrail tracks Amazon S3 SOAP API calls. Amazon S3 SOAP support over HTTP is deprecated, but it is still available over HTTPS. For more information about Amazon S3 SOAP support, see [Appendix: SOAP API](https://docs.aws.amazon.com/AmazonS3/latest/API/APISoap.html) in the *Amazon S3 API Reference*. 

**Important**  
Newer Amazon S3 features are not supported for SOAP. We recommend that you use either the REST API or the AWS SDKs.

 The following table shows Amazon S3 SOAP actions tracked by CloudTrail logging. 


| SOAP API name | API event name used in CloudTrail log | 
| --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/SOAPListAllMyBuckets.html](https://docs.aws.amazon.com/AmazonS3/latest/API/SOAPListAllMyBuckets.html)  | ListBuckets | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/SOAPCreateBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/SOAPCreateBucket.html)  | CreateBucket | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/SOAPDeleteBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/SOAPDeleteBucket.html)  | DeleteBucket | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/SOAPGetBucketAccessControlPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/SOAPGetBucketAccessControlPolicy.html)  | GetBucketAcl | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/SOAPSetBucketAccessControlPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/SOAPSetBucketAccessControlPolicy.html)  | PutBucketAcl | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/SOAPGetBucketLoggingStatus.html](https://docs.aws.amazon.com/AmazonS3/latest/API/SOAPGetBucketLoggingStatus.html)  | GetBucketLogging | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/SOAPSetBucketLoggingStatus.html](https://docs.aws.amazon.com/AmazonS3/latest/API/SOAPSetBucketLoggingStatus.html)  | PutBucketLogging | 

 For more information about CloudTrail and Amazon S3, see the following topics: 

**Topics**
+ [Using CloudTrail logs with Amazon S3 server access logs and CloudWatch Logs](#cloudtrail-logging-vs-server-logs)
+ [CloudTrail tracking with Amazon S3 SOAP API calls](#cloudtrail-s3-soap)
+ [Amazon S3 CloudTrail events](cloudtrail-logging-s3-info.md)
+ [CloudTrail log file entries for Amazon S3 and S3 on Outposts](cloudtrail-logging-understanding-s3-entries.md)
+ [Enabling CloudTrail event logging for S3 buckets and objects](enable-cloudtrail-logging-for-s3.md)
+ [Identifying Amazon S3 requests using CloudTrail](cloudtrail-request-identification.md)

# Amazon S3 CloudTrail events
<a name="cloudtrail-logging-s3-info"></a>

**Important**  
Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3. Starting January 5, 2023, all new object uploads to Amazon S3 are automatically encrypted at no additional cost and with no impact on performance. The automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in CloudTrail logs, S3 Inventory, S3 Storage Lens, the Amazon S3 console, and as an additional Amazon S3 API response header in the AWS CLI and AWS SDKs. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html).

This section provides information about the events that S3 logs to CloudTrail.

## Amazon S3 data events in CloudTrail
<a name="cloudtrail-data-events"></a>

[Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) provide information about the resource operations performed on or in a resource (for example, reading or writing to an Amazon S3 object). These are also known as data plane operations. Data events are often high-volume activities. By default, CloudTrail doesn’t log data events. The CloudTrail **Event history** doesn't record data events.

Additional charges apply for data events. For more information about CloudTrail pricing, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/).

You can log data events for the Amazon S3 resource types by using the CloudTrail console, AWS CLI, or CloudTrail API operations. For more information about how to log data events, see [Logging data events with the AWS Management Console](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events-console) and [Logging data events with the AWS Command Line Interface](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#creating-data-event-selectors-with-the-AWS-CLI) in the *AWS CloudTrail User Guide*.

The following table lists the Amazon S3 resource types for which you can log data events. The **Data event type (console)** column shows the value to choose from the **Data event type** list on the CloudTrail console. The **resources.type value** column shows the `resources.type` value, which you would specify when configuring advanced event selectors using the AWS CLI or CloudTrail APIs. The **Data APIs logged to CloudTrail** column shows the API calls logged to CloudTrail for the resource type. 




| Data event type (console) | resources.type value | Data APIs logged to CloudTrail | 
| --- | --- | --- | 
| S3 |  AWS::S3::Object  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/cloudtrail-logging-s3-info.html)  | 
| S3 Express One Zone |  AWS::S3Express::Object  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/cloudtrail-logging-s3-info.html)  | 
| S3 Access Point |  AWS::S3::Access Point  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/cloudtrail-logging-s3-info.html)  | 
| S3 Object Lambda |  AWS::S3ObjectLambda::AccessPoint  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/cloudtrail-logging-s3-info.html)  | 
| S3 Outposts |  AWS::S3Outposts::Object  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/cloudtrail-logging-s3-info.html)  | 

You can configure advanced event selectors to filter on the `eventName`, `readOnly`, and `resources.ARN` fields to log only those events that are important to you. For more information about these fields, see [https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_AdvancedFieldSelector.html](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_AdvancedFieldSelector.html) in the *AWS CloudTrail API Reference*.

## Amazon S3 management events in CloudTrail
<a name="cloudtrail-management-events"></a>

Amazon S3 logs all control plane operations as management events. For more information about S3 API operations, see the [Amazon S3 API Reference](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operations.html).

## How CloudTrail captures requests made to Amazon S3
<a name="cloudtrail-logging-s3-requests"></a>

By default, CloudTrail logs S3 bucket-level API calls that were made in the last 90 days, but not log requests made to objects. Bucket-level calls include events such as `CreateBucket`, `DeleteBucket`, `PutBucketLifecycle`, `PutBucketPolicy`, and so on. You can see bucket-level events on the CloudTrail console. However, you can't view data events (Amazon S3 object-level calls) there—you must parse or query CloudTrail logs for them. 

If you are logging data activity with AWS CloudTrail, the event record for an Amazon S3 `DeleteObjects` data event includes both the `DeleteObjects` event and a `DeleteObject` event for each object deleted as part of that operation. You can exclude the additional visibility about deleted objects from the event record. For more information, see [AWS CLI examples for filtering data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/filtering-data-events.html#filtering-data-events-deleteobjects) in the *AWS CloudTrail User Guide.*

## Amazon S3 account-level actions tracked by CloudTrail logging
<a name="cloudtrail-account-level-tracking"></a>

CloudTrail logs account-level actions. Amazon S3 records are written together with other AWS service records in a log file. CloudTrail determines when to create and write to a new file based on a time period and file size. 

The tables in this section list the Amazon S3 account-level actions that are supported for logging by CloudTrail.

Amazon S3 account-level API actions tracked by CloudTrail logging appear as the following event names. The CloudTrail event names differ from the API action name. For example, DeletePublicAccessBlock is DeleteAccountPublicAccessBlock.
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeletePublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeletePublicAccessBlock.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetPublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetPublicAccessBlock.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutPublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutPublicAccessBlock.html)

## Amazon S3 bucket-level actions that are tracked by CloudTrail logging
<a name="cloudtrail-bucket-level-tracking"></a>

By default, CloudTrail logs bucket-level actions for general purpose buckets. Amazon S3 records are written together with other AWS service records in a log file. CloudTrail determines when to create and write to a new file based on a time period and file size. 

This section lists the Amazon S3 bucket-level actions that are supported for logging by CloudTrail.

Amazon S3 bucket-level API actions tracked by CloudTrail logging appear as the following event names. In some cases, the CloudTrail event name differs from the API action name. For example, `PutBucketLifecycleConfiguration` is `PutBucketLifecycle`.
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucketMetadataConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucketMetadataConfiguration.html) (V2 API operation)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucketMetadataTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucketMetadataTableConfiguration.html) (V1 API operation)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketAnalyticsConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketAnalyticsConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketCors.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketCors.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketEncryption.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketIntelligentTieringConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketIntelligentTieringConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketInventoryConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketInventoryConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketLifecycle.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketLifecycle.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetadataConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetadataConfiguration.html) (V2 API operation)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetadataTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetadataTableConfiguration.html) (V1 API operation)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetricsConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetricsConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketOwnershipControls.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketOwnershipControls.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeletePublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeletePublicAccessBlock.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketReplication.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketReplication.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketTagging.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAccelerateConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAccelerateConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAcl.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAcl.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAnalyticsConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAnalyticsConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketCors.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketCors.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketEncryption.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketIntelligentTieringConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketIntelligentTieringConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketInventoryConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketInventoryConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLifecycle.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLifecycle.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLocation.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLogging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLogging.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetadataConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetadataConfiguration.html) (V2 API operation)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetadataTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetadataTableConfiguration.html) (V1 API operation)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetricsConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetricsConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketNotification.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketNotification.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLockConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLockConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketOwnershipControls.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketOwnershipControls.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicyStatus.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicyStatus.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetPublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetPublicAccessBlock.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketReplication.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketReplication.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketRequestPayment.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketRequestPayment.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketTagging.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketVersioning.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketWebsite.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketWebsite.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAccelerateConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAccelerateConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAcl.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAcl.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAnalyticsConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAnalyticsConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketCors.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketCors.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketEncryption.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketIntelligentTieringConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketIntelligentTieringConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketInventoryConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketInventoryConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycle.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycle.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketMetricsConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketMetricsConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketNotification.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketNotification.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLockConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLockConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketOwnershipControls.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketOwnershipControls.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutPublicAccessBlock.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutPublicAccessBlock.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketReplication.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketReplication.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketRequestPayment.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketRequestPayment.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketTagging.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketWebsite.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketWebsite.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateBucketMetadataJournalTableConfiguratione.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateBucketMetadataJournalTableConfiguratione.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateBucketMetadataInventoryTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateBucketMetadataInventoryTableConfiguration.html)

In addition to these API operations, you can also use the [OPTIONS object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTOPTIONSobject.html) object-level action. This action is treated like a bucket-level action in CloudTrail logging because the action checks the CORS configuration of a bucket.

**Note**  
The HeadBucket API is supported as an Amazon S3 data event in CloudTrail. 

## Amazon S3 Express One Zone bucket-level (Regional API endpoint) actions tracked by CloudTrail logging
<a name="cloudtrail-bucket-level-tracking-s3express"></a>

By default, CloudTrail logs bucket-level actions for directory buckets as management events. The `eventsource` for CloudTrail management events for S3 Express One Zone is `s3express.amazonaws.com`.

These following Regional endpoint API operations are logged to CloudTrail.
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListDirectoryBuckets.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListDirectoryBuckets.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketEncryption.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketEncryption.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketEncryption.html)

For more information, see [Logging with AWS CloudTrail for S3 Express One Zone](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-one-zone-logging.html)

## Amazon S3 object-level actions in cross-account scenarios
<a name="cloudtrail-object-level-crossaccount"></a>

The following are special use cases involving the object-level API calls in cross-account scenarios and how CloudTrail logs are reported. CloudTrail delivers logs to the requester (the account that made the API call), except in some access denied cases where log entries are redacted or omitted. When setting up cross-account access, consider the examples in this section.

**Note**  
The examples assume that CloudTrail logs are appropriately configured. 

### Example 1: CloudTrail delivers logs to the bucket owner
<a name="cloudtrail-crossaccount-example1"></a>

CloudTrail delivers logs to the bucket owner even if the bucket owner does not have permissions for the same object API operation. Consider the following cross-account scenario:
+ Account A owns the bucket.
+ Account B (the requester) tries to access an object in that bucket.
+ Account C owns the object. Account C might or might not be the same account as Account A.

**Note**  
CloudTrail always delivers object-level API logs to the requester (Account B). In addition, CloudTrail also delivers the same logs to the bucket owner (Account A) even when the bucket owner does not own the object (Account C) or have permissions for those same API operations on that object.

### Example 2: CloudTrail does not proliferate email addresses that are used in setting object ACLs
<a name="cloudtrail-crossaccount-example2"></a>

Consider the following cross-account scenario:
+ Account A owns the bucket.
+  Account B (the requester) sends a request to set an object ACL grant by using an email address. For more information about ACLs, see [Access control list (ACL) overview](acl-overview.md).

The requester gets the logs along with the email information. However, the bucket owner—if they are eligible to receive logs, as in example 1—gets the CloudTrail log reporting the event. However, the bucket owner doesn't get the ACL configuration information, specifically the grantee email address and the grant. The only information that the log tells the bucket owner is that an ACL API call was made by Account B.

# CloudTrail log file entries for Amazon S3 and S3 on Outposts
<a name="cloudtrail-logging-understanding-s3-entries"></a>

**Important**  
Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3. Starting January 5, 2023, all new object uploads to Amazon S3 are automatically encrypted at no additional cost and with no impact on performance. The automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in CloudTrail logs, S3 Inventory, S3 Storage Lens, the Amazon S3 console, and as an additional Amazon S3 API response header in the AWS CLI and AWS SDKs. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html).

An event represents a single request from any source and includes information about the requested API operation, the date and time of the operation, request parameters, and so on. CloudTrail log files aren't an ordered stack trace of the public API calls, so events don't appear in any specific order.

**Note**  
To view CloudTrail log file examples for Amazon S3 Express One Zone, see [CloudTrail log file examples for S3 Express One Zone](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-log-files.html).

For more information, see the following examples.

**Topics**
+ [Example: CloudTrail log file entry for Amazon S3](#example-ct-log-s3)

## Example: CloudTrail log file entry for Amazon S3
<a name="example-ct-log-s3"></a>

The following example shows a CloudTrail log entry that demonstrates the [GET Service](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTServiceGET.html), [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTacl.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTacl.html), and [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETversioningStatus.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETversioningStatus.html) actions.

```
{
    "Records": [
    {
        "eventVersion": "1.03",
        "userIdentity": {
            "type": "IAMUser",
            "principalId": "111122223333",
            "arn": "arn:aws:iam::111122223333:user/myUserName",
            "accountId": "111122223333",
            "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
            "userName": "myUserName"
        },
        "eventTime": "2019-02-01T03:18:19Z",
        "eventSource": "s3.amazonaws.com",
        "eventName": "ListBuckets",
        "awsRegion": "us-west-2",
        "sourceIPAddress": "127.0.0.1",
        "userAgent": "[]",
        "requestParameters": {
            "host": [
                "s3.us-west-2.amazonaws.com"
            ]
        },
        "responseElements": null,
        "additionalEventData": {
            "SignatureVersion": "SigV2",
            "AuthenticationMethod": "QueryString",
            "aclRequired": "Yes"
    },
        "requestID": "47B8E8D397DCE7A6",
        "eventID": "cdc4b7ed-e171-4cef-975a-ad829d4123e8",
        "eventType": "AwsApiCall",
        "recipientAccountId": "444455556666",
        "tlsDetails": {
            "tlsVersion": "TLSv1.2",
            "cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
            "clientProvidedHostHeader": "s3.amazonaws.com"
    }      
    },
    {
       "eventVersion": "1.03",
       "userIdentity": {
            "type": "IAMUser",
            "principalId": "111122223333",
            "arn": "arn:aws:iam::111122223333:user/myUserName",
            "accountId": "111122223333",
            "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
            "userName": "myUserName"
        },
      "eventTime": "2019-02-01T03:22:33Z",
      "eventSource": "s3.amazonaws.com",
      "eventName": "PutBucketAcl",
      "awsRegion": "us-west-2",
      "sourceIPAddress": "",
      "userAgent": "[]",
      "requestParameters": {
          "bucketName": "",
          "AccessControlPolicy": {
              "AccessControlList": {
                  "Grant": {
                      "Grantee": {
                          "xsi:type": "CanonicalUser",
                          "xmlns:xsi": "http://www.w3.org/2001/XMLSchema-instance",
                          "ID": "d25639fbe9c19cd30a4c0f43fbf00e2d3f96400a9aa8dabfbbebe1906Example"
                       },
                      "Permission": "FULL_CONTROL"
                   }
              },
              "xmlns": "http://s3.amazonaws.com/doc/2006-03-01/",
              "Owner": {
                  "ID": "d25639fbe9c19cd30a4c0f43fbf00e2d3f96400a9aa8dabfbbebe1906Example"
              }
          },
          "host": [
              "s3.us-west-2.amazonaws.com"
          ],
          "acl": [
              ""
          ]
      },
      "responseElements": null,
      "additionalEventData": {
          "SignatureVersion": "SigV4",
          "CipherSuite": "ECDHE-RSA-AES128-SHA",
          "AuthenticationMethod": "AuthHeader"
      },
      "requestID": "BD8798EACDD16751",
      "eventID": "607b9532-1423-41c7-b048-ec2641693c47",
      "eventType": "AwsApiCall",
      "recipientAccountId": "111122223333",
      "tlsDetails": {
            "tlsVersion": "TLSv1.2",
            "cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
            "clientProvidedHostHeader": "s3.amazonaws.com"
    }              
    },
    {
      "eventVersion": "1.03",
      "userIdentity": {
          "type": "IAMUser",
          "principalId": "111122223333",
          "arn": "arn:aws:iam::111122223333:user/myUserName",
          "accountId": "111122223333",
          "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
          "userName": "myUserName"
        },
      "eventTime": "2019-02-01T03:26:37Z",
      "eventSource": "s3.amazonaws.com",
      "eventName": "GetBucketVersioning",
      "awsRegion": "us-west-2",
      "sourceIPAddress": "",
      "userAgent": "[]",
      "requestParameters": {
          "host": [
              "s3.us-west-2.amazonaws.com"
          ],
          "bucketName": "amzn-s3-demo-bucket1",
          "versioning": [
              ""
          ]
      },
      "responseElements": null,
      "additionalEventData": {
          "SignatureVersion": "SigV4",
          "CipherSuite": "ECDHE-RSA-AES128-SHA",
          "AuthenticationMethod": "AuthHeader"
    },
      "requestID": "07D681279BD94AED",
      "eventID": "f2b287f3-0df1-4961-a2f4-c4bdfed47657",
      "eventType": "AwsApiCall",
      "recipientAccountId": "111122223333",
      "tlsDetails": {
            "tlsVersion": "TLSv1.2",
            "cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
            "clientProvidedHostHeader": "s3.amazonaws.com"
    }                 
    }
  ]
}
```

# Enabling CloudTrail event logging for S3 buckets and objects
<a name="enable-cloudtrail-logging-for-s3"></a>

You can use CloudTrail data events to get information about bucket and object-level requests in Amazon S3. To enable CloudTrail data events for all of your buckets or for a list of specific buckets, you must [create a trail manually in CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-a-trail-using-the-console-first-time.html). 

**Note**  
The default setting for CloudTrail is to find only management events. Check to ensure that you have the data events enabled for your account.
 With an S3 bucket that is generating a high workload, you could quickly generate thousands of logs in a short amount of time. Be mindful of how long you choose to enable CloudTrail data events for a busy bucket. 

 CloudTrail stores Amazon S3 data event logs in an S3 bucket of your choosing. Consider using a bucket in a separate AWS account to better organize events from multiple buckets that you might own into a central place for easier querying and analysis. AWS Organizations helps you create an AWS account that is linked to the account that owns the bucket that you're monitoring. For more information, see [What is AWS Organizations?](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) in the *AWS Organizations User Guide*.

When you log data events for a trail in CloudTrail, you can choose to use advanced event selectors or basic event selectors to log data events for objects stored in general purpose buckets. To log data events for objects stored in directory buckets, you must use advanced event selectors. For more information, see [ Logging with AWS CloudTrail for S3 Express One Zone](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-one-zone-logging.html).

When you create a trail in the CloudTrail console using advanced event selectors, in the data events section, you can choose **Log all events** for the **Log selector template** to log all object-level events. When you create a trail in the CloudTrail console using basic event selectors, in the data events section, you can select the **Select all S3 buckets in your account** check box to log all object-level events. 

**Note**  
It's a best practice to create a lifecycle configuration for your AWS CloudTrail data event bucket. Configure the lifecycle configuration to periodically remove log files after the period of time you believe you need to audit them. Doing so reduces the amount of data that Athena analyzes for each query. For more information, see [Setting an S3 Lifecycle configuration on a bucket](how-to-set-lifecycle-configuration-intro.md).
For more information about logging format, see [Logging Amazon S3 API calls using AWS CloudTrail](cloudtrail-logging.md).
For examples of how to query CloudTrail logs, see the *AWS Big Data Blog* post [Analyze Security, Compliance, and Operational Activity Using AWS CloudTrail and Amazon Athena](https://aws.amazon.com/blogs/big-data/aws-cloudtrail-and-amazon-athena-dive-deep-to-analyze-security-compliance-and-operational-activity/). 

## Enable logging for objects in a bucket using the console
<a name="enable-cloudtrail-events"></a>

You can use the AWS CloudTrail console to configure a CloudTrail trail to log data events for objects in an S3 bucket. CloudTrail supports logging Amazon S3 object-level API operations such as `GetObject`, `DeleteObject`, and `PutObject`. These events are called *data events*. 

By default, CloudTrail trails don't log data events, but you can configure trails to log data events for S3 buckets that you specify, or to log data events for all the Amazon S3 buckets in your AWS account. For more information, see [Logging Amazon S3 API calls using AWS CloudTrail](cloudtrail-logging.md). 

CloudTrail does not populate data events in the CloudTrail event history. Additionally, not all bucket-level actions are populated in the CloudTrail event history. For more information about the Amazon S3 bucket–level API actions tracked by CloudTrail logging, see [Amazon S3 bucket-level actions that are tracked by CloudTrail logging](cloudtrail-logging-s3-info.md#cloudtrail-bucket-level-tracking). For more information about how to query CloudTrail logs, see the AWS Knowledge Center article about [using Amazon CloudWatch Logs filter patterns and Amazon Athena to query CloudTrail logs](https://aws.amazon.com/premiumsupport/knowledge-center/find-cloudtrail-object-level-events/).

**Note**  
If you are logging data activity with AWS CloudTrail, the event record for an Amazon S3 `DeleteObjects` data event includes both the `DeleteObjects` event and a `DeleteObject` event for each object deleted as part of that operation. You can exclude the additional visibility about deleted objects from the event record. For more information, see [AWS CLI examples for filtering data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/filtering-data-events.html#filtering-data-events-deleteobjects) in the *AWS CloudTrail User Guide*.

To enable CloudTrail data events logging for objects in an S3 general purpose bucket or an S3 directory bucket see [Creating a trail with the CloudTrail console](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-a-trail-using-the-console-first-time) in the *AWS CloudTrail User Guide*.

For more information about logging objects in an S3 directory bucket, see [Logging with AWS CloudTrail for directory buckets](s3-express-one-zone-logging.md).

For information about using the CloudTrail console to configure a trail to log S3 data events, see [Logging data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide*. 

To disable CloudTrail data events logging for objects in an S3 bucket, see [Deleting a trail with the CloudTrail console](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-delete-trails-console.html) in the *AWS CloudTrail User Guide*. 

**Important**  
Additional charges apply for data events. For more information, see [AWS CloudTrail pricing](https://aws.amazon.com/cloudtrail/pricing/). 

For more information about CloudTrail logging with S3 buckets, see the following topics:
+ [Creating a general purpose bucket](create-bucket-overview.md)
+ [Viewing the properties for an S3 general purpose bucket](view-bucket-properties.md)
+ [Logging Amazon S3 API calls using AWS CloudTrail](cloudtrail-logging.md)
+ [ Working with CloudTrail log files](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-working-with-log-files.html) in the *AWS CloudTrail User Guide*

# Identifying Amazon S3 requests using CloudTrail
<a name="cloudtrail-request-identification"></a>

In Amazon S3, you can identify requests using an AWS CloudTrail event log. AWS CloudTrail is the preferred way of identifying Amazon S3 requests, but if you are using Amazon S3 server access logs, see [Using Amazon S3 server access logs to identify requests](using-s3-access-logs-to-identify-requests.md).

**Topics**
+ [Identifying requests made to Amazon S3 in a CloudTrail log](#identify-S3-requests-using-in-CTlog)
+ [Identifying Amazon S3 Signature Version 2 requests by using CloudTrail](#cloudtrail-identification-sigv2-requests)
+ [Identifying access to S3 objects by using CloudTrail](#cloudtrail-identification-object-access)

## Identifying requests made to Amazon S3 in a CloudTrail log
<a name="identify-S3-requests-using-in-CTlog"></a>

After you set up CloudTrail to deliver events to a bucket, you should start to see objects go to your destination bucket on the Amazon S3 console. These are formatted as follows: 

`s3://amzn-s3-demo-bucket1/AWSLogs/111122223333/CloudTrail/Region/yyyy/mm/dd` 

Events logged by CloudTrail are stored as compressed, gzipped JSON objects in your S3 bucket. To efficiently find requests, you should use a service like Amazon Athena to index and query the CloudTrail logs. 

For more information about CloudTrail and Athena, see [Creating the table for AWS CloudTrail logs in Athena using partition projection](https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html#create-cloudtrail-table-partition-projection) in the *Amazon Athena User Guide*.

## Identifying Amazon S3 Signature Version 2 requests by using CloudTrail
<a name="cloudtrail-identification-sigv2-requests"></a>

You can use a CloudTrail event log to identify which API signature version was used to sign a request in Amazon S3. This capability is important because support for Signature Version 2 will be turned off (deprecated). After that, Amazon S3 will no longer accept requests that use Signature Version 2, and all requests must use *Signature Version 4* signing. 

We *strongly* recommend that you use CloudTrail to help determine whether any of your workflows are using Signature Version 2 signing. Remediate them by upgrading your libraries and code to use Signature Version 4 instead to prevent any impact to your business. 

For more information, see [Announcement: AWS CloudTrail for Amazon S3 adds new fields for enhanced security auditing](https://forums.aws.amazon.com/ann.jspa?annID=6551) in AWS re:Post.

**Note**  
CloudTrail events for Amazon S3 include the signature version in the request details under the key name of '`additionalEventData`. To find the signature version on requests made for objects in Amazon S3 such as `GET`, `PUT`, and `DELETE` requests, you must enable CloudTrail data events. (This feature is turned off by default.)

AWS CloudTrail is the preferred method for identifying Signature Version 2 requests. If you're using Amazon S3 server-access logs, see [Identifying Signature Version 2 requests by using Amazon S3 access logs](using-s3-access-logs-to-identify-requests.md#using-s3-access-logs-to-identify-sigv2-requests).

**Topics**
+ [Athena query examples for identifying Amazon S3 Signature Version 2 requests](#ct-examples-identify-sigv2-requests)
+ [Partitioning Signature Version 2 data](#partitioning-sigv2-data)

### Athena query examples for identifying Amazon S3 Signature Version 2 requests
<a name="ct-examples-identify-sigv2-requests"></a>

**Example — Select all Signature Version 2 events, and print only `EventTime`, `S3_Action`, `Request_Parameters`, `Region`, `SourceIP`, and `UserAgent`**  
In the following Athena query, replace *`s3_cloudtrail_events_db.cloudtrail_table`* with your Athena details, and increase or remove the limit as needed.   

```
SELECT EventTime, EventName as S3_Action, requestParameters as Request_Parameters, awsregion as AWS_Region, sourceipaddress as Source_IP, useragent as User_Agent
FROM s3_cloudtrail_events_db.cloudtrail_table
WHERE eventsource='s3.amazonaws.com'
AND json_extract_scalar(additionalEventData, '$.SignatureVersion')='SigV2'
LIMIT 10;
```

**Example — Select all requesters that are sending Signature Version 2 traffic**  
   

```
SELECT useridentity.arn, Count(requestid) as RequestCount
FROM s3_cloudtrail_events_db.cloudtrail_table
WHERE eventsource='s3.amazonaws.com'
    and json_extract_scalar(additionalEventData, '$.SignatureVersion')='SigV2'
Group by useridentity.arn
```

### Partitioning Signature Version 2 data
<a name="partitioning-sigv2-data"></a>

If you have a large amount of data to query, you can reduce the costs and running time of Athena by creating a partitioned table. 

To do this, create a new table with partitions as follows.

```
   CREATE EXTERNAL TABLE s3_cloudtrail_events_db.cloudtrail_table_partitioned(
        eventversion STRING,
        userIdentity STRUCT<
            type:STRING,
            principalid:STRING,
            arn:STRING,
            accountid:STRING,
            invokedby:STRING,
            accesskeyid:STRING,
            userName:STRING,
         sessioncontext:STRUCT<
                    attributes:STRUCT< 
                    mfaauthenticated:STRING,
                    creationdate:STRING>,
                    sessionIssuer:STRUCT<
                    type:STRING,
                    principalId:STRING,
                    arn:STRING,
                    accountId:STRING,
                    userName:STRING>
                >
             >,
        eventTime STRING,
        eventSource STRING,
        eventName STRING,
        awsRegion STRING,
        sourceIpAddress STRING,
        userAgent STRING,
        errorCode STRING,
        errorMessage STRING,
        requestParameters STRING,
        responseElements STRING,
        additionalEventData STRING,
        requestId STRING,
        eventId STRING,
        resources ARRAY<STRUCT<ARN:STRING,accountId: STRING,type:STRING>>, 
        eventType STRING,
        apiVersion STRING,
        readOnly STRING,
        recipientAccountId STRING,
        serviceEventDetails STRING,
        sharedEventID STRING,
        vpcEndpointId STRING
    )   
    PARTITIONED BY (region string, year string, month string, day string)
    ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' 
    STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.SymlinkTextInputFormat'
    OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
    LOCATION 's3://amzn-s3-demo-bucket1/AWSLogs/111122223333/';
```

Then, create the partitions individually. You can't get results from dates that you haven't created. 

```
ALTER TABLE s3_cloudtrail_events_db.cloudtrail_table_partitioned ADD
    PARTITION (region= 'us-east-1', year= '2019', month= '02', day= '19') LOCATION 's3://amzn-s3-demo-bucket1/AWSLogs/111122223333/CloudTrail/us-east-1/2019/02/19/'
    PARTITION (region= 'us-west-1', year= '2019', month= '02', day= '19') LOCATION 's3://amzn-s3-demo-bucket1/AWSLogs/111122223333/CloudTrail/us-west-1/2019/02/19/'
    PARTITION (region= 'us-west-2', year= '2019', month= '02', day= '19') LOCATION 's3://amzn-s3-demo-bucket1/AWSLogs/111122223333/CloudTrail/us-west-2/2019/02/19/'
    PARTITION (region= 'ap-southeast-1', year= '2019', month= '02', day= '19') LOCATION 's3://amzn-s3-demo-bucket1/AWSLogs/111122223333/CloudTrail/ap-southeast-1/2019/02/19/'
    PARTITION (region= 'ap-southeast-2', year= '2019', month= '02', day= '19') LOCATION 's3://amzn-s3-demo-bucket1/AWSLogs/111122223333/CloudTrail/ap-southeast-2/2019/02/19/'
    PARTITION (region= 'ap-northeast-1', year= '2019', month= '02', day= '19') LOCATION 's3://amzn-s3-demo-bucket1/AWSLogs/111122223333/CloudTrail/ap-northeast-1/2019/02/19/'
    PARTITION (region= 'eu-west-1', year= '2019', month= '02', day= '19') LOCATION 's3://amzn-s3-demo-bucket1/AWSLogs/111122223333/CloudTrail/eu-west-1/2019/02/19/'
    PARTITION (region= 'sa-east-1', year= '2019', month= '02', day= '19') LOCATION 's3://amzn-s3-demo-bucket1/AWSLogs/111122223333/CloudTrail/sa-east-1/2019/02/19/';
```

You can then make the request based on these partitions, and you don't need to load the full bucket. 

```
SELECT useridentity.arn,
Count(requestid) AS RequestCount
FROM s3_cloudtrail_events_db.cloudtrail_table_partitioned
WHERE eventsource='s3.amazonaws.com'
AND json_extract_scalar(additionalEventData, '$.SignatureVersion')='SigV2'
AND region='us-east-1'
AND year='2019'
AND month='02'
AND day='19'
Group by useridentity.arn
```

## Identifying access to S3 objects by using CloudTrail
<a name="cloudtrail-identification-object-access"></a>

You can use your AWS CloudTrail event logs to identify Amazon S3 object access requests for data events such as `GetObject`, `DeleteObject`, and `PutObject`, and discover additional information about those requests.

**Note**  
If you are logging data activity with AWS CloudTrail, the event record for an Amazon S3 `DeleteObjects` data event includes both the `DeleteObjects` event and a `DeleteObject` event for each object deleted as part of that operation. You can exclude the additional visibility about deleted objects from the event record. For more information, see [AWS CLI examples for filtering data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/filtering-data-events.html#filtering-data-events-deleteobjects) in the *AWS CloudTrail User Guide.*

The following example shows how to get all `PUT` object requests for Amazon S3 from an AWS CloudTrail event log. 

**Topics**
+ [Athena query examples for identifying Amazon S3 object access requests](#ct-examples-identify-object-access-requests)

### Athena query examples for identifying Amazon S3 object access requests
<a name="ct-examples-identify-object-access-requests"></a>

In the following Athena query examples, replace *`s3_cloudtrail_events_db.cloudtrail_table`* with your Athena details, and modify the date range as needed. 

**Example — Select all events that have `PUT` object access requests, and print only `EventTime`, `EventSource`, `SourceIP`, `UserAgent`, `BucketName`, `object`, and `UserARN`**  

```
SELECT
  eventTime, 
  eventName, 
  eventSource, 
  sourceIpAddress, 
  userAgent, 
  json_extract_scalar(requestParameters, '$.bucketName') as bucketName, 
  json_extract_scalar(requestParameters, '$.key') as object,
  userIdentity.arn as userArn
FROM
  s3_cloudtrail_events_db.cloudtrail_table
WHERE
  eventName = 'PutObject'
  AND eventTime BETWEEN '2019-07-05T00:00:00Z' and '2019-07-06T00:00:00Z'
```

**Example — Select all events that have `GET` object access requests, and print only `EventTime`, `EventSource`, `SourceIP`, `UserAgent`, `BucketName`, `object`, and `UserARN`**  

```
SELECT
  eventTime, 
  eventName, 
  eventSource, 
  sourceIpAddress, 
  userAgent, 
  json_extract_scalar(requestParameters, '$.bucketName') as bucketName, 
  json_extract_scalar(requestParameters, '$.key') as object,
  userIdentity.arn as userArn
FROM
  s3_cloudtrail_events_db.cloudtrail_table
WHERE
  eventName = 'GetObject'
  AND eventTime BETWEEN '2019-07-05T00:00:00Z' and '2019-07-06T00:00:00Z'
```

**Example — Select all anonymous requester events to a bucket in a certain period and print only `EventTime`, `EventName`, `EventSource`, `SourceIP`, `UserAgent`, `BucketName`, `UserARN`, and `AccountID`**  

```
SELECT
  eventTime, 
  eventName, 
  eventSource, 
  sourceIpAddress, 
  userAgent, 
  json_extract_scalar(requestParameters, '$.bucketName') as bucketName, 
  userIdentity.arn as userArn,
  userIdentity.accountId
FROM
  s3_cloudtrail_events_db.cloudtrail_table
WHERE
  userIdentity.accountId = 'anonymous'
  AND eventTime BETWEEN '2019-07-05T00:00:00Z' and '2019-07-06T00:00:00Z'
```

**Example — Identify all requests that required an ACL for authorization**  
 The following Amazon Athena query example shows how to identify all requests to your S3 buckets that required an access control list (ACL) for authorization. If the request required an ACL for authorization, the `aclRequired` value in `additionalEventData` is `Yes`. If no ACLs were required, `aclRequired` is not present. You can use this information to migrate those ACL permissions to the appropriate bucket policies. After you've created these bucket policies, you can disable ACLs for these buckets. For more information about disabling ACLs, see [Prerequisites for disabling ACLs](object-ownership-migrating-acls-prerequisites.md).  

```
SELECT
  eventTime, 
  eventName, 
  eventSource, 
  sourceIpAddress, 
  userAgent, 
  userIdentity.arn as userArn,
  json_extract_scalar(requestParameters, '$.bucketName') as bucketName,
  json_extract_scalar(requestParameters, '$.key') as object,
  json_extract_scalar(additionalEventData, '$.aclRequired') as aclRequired
FROM 
  s3_cloudtrail_events_db.cloudtrail_table
WHERE
  json_extract_scalar(additionalEventData, '$.aclRequired') = 'Yes'
  AND eventTime BETWEEN '2022-05-10T00:00:00Z' and '2022-08-10T00:00:00Z'
```

**Note**  
These query examples can also be useful for security monitoring. You can review the results for `PutObject` or `GetObject` calls from unexpected or unauthorized IP addresses or requesters and for identifying any anonymous requests to your buckets.
 This query only retrieves information from the time at which logging was enabled. 

If you are using Amazon S3 server access logs, see [Identifying object access requests by using Amazon S3 access logs](using-s3-access-logs-to-identify-requests.md#using-s3-access-logs-to-identify-objects-access).

# Logging requests with server access logging
<a name="ServerLogs"></a>

Server access logging provides detailed records for the requests that are made to a bucket. Server access logs are useful for many applications. For example, access log information can be useful in security and access audits. This information can also help you learn about your customer base and understand your Amazon S3 bill.

**Note**  
Server access logs don't record information about wrong-Region redirect errors for Regions that launched after March 20, 2019. Wrong-Region redirect errors occur when a request for an object or bucket is made outside the Region in which the bucket exists. 

## How do I enable log delivery?
<a name="server-access-logging-overview"></a>

To enable log delivery, perform the following basic steps. For details, see [Enabling Amazon S3 server access logging](enable-server-access-logging.md).

1. **Provide the name of the destination bucket** (also known as a *target bucket*). This bucket is where you want Amazon S3 to save the access logs as objects. Both the source and destination buckets must be in the same AWS Region and owned by the same account. The destination bucket must not have an S3 Object Lock default retention period configuration. The destination bucket must also not have Requester Pays enabled.

   You can have logs delivered to any bucket that you own that is in the same Region as the source bucket, including the source bucket itself. But for simpler log management, we recommend that you save access logs in a different bucket. 

   When your source bucket and destination bucket are the same bucket, additional logs are created for the logs that are written to the bucket, which creates an infinite loop of logs. We do not recommend doing this because it could result in a small increase in your storage billing. In addition, the extra logs about logs might make it harder to find the log that you are looking for. 

   If you choose to save access logs in the source bucket, we recommend that you specify a destination prefix (also known as a *target prefix*) for all log object keys. When you specify a prefix, all the log object names begin with a common string, which makes the log objects easier to identify. 

1. **(Optional) Assign a destination prefix to all Amazon S3 log object keys.** The destination prefix (also known as a *target prefix*) makes it simpler for you to locate the log objects. For example, if you specify the prefix value `logs/`, each log object that Amazon S3 creates begins with the `logs/` prefix in its key, for example:

   ```
   logs/2013-11-01-21-32-16-E568B2907131C0C0
   ```

   If you specify the prefix value `logs`, the log object appears as follows:

   ```
   logs2013-11-01-21-32-16-E568B2907131C0C0
   ```

   [Prefixes](https://docs.aws.amazon.com/general/latest/gr/glos-chap.html#keyprefix) are also useful to distinguish between source buckets when multiple buckets log to the same destination bucket.

   This prefix can also help when you delete the logs. For example, you can set a lifecycle configuration rule for Amazon S3 to delete objects with a specific prefix. For more information, see [Deleting Amazon S3 log files](deleting-log-files-lifecycle.md).

1. **(Optional) Set permissions **so that others can access the generated logs.** By default, only the bucket owner always has full access to the log objects. If your destination bucket uses the Bucket owner enforced setting for S3 Object Ownership to disable access control lists (ACLs), you can't grant permissions in destination grants (also known as *target grants*) that use ACLs. However, you can update your bucket policy for the destination bucket to grant access to others. For more information, see [Identity and Access Management for Amazon S3](security-iam.md) and [Permissions for log delivery](enable-server-access-logging.md#grant-log-delivery-permissions-general). 

1. **(Optional) Set a log object key format for the log files.** You have two options for the log object key format (also known as the *target object key format*): 
   + **Non-date-based partitioning** – This is the original log object key format. If you choose this format, the log file key format appears as follows: 

     ```
     [DestinationPrefix][YYYY]-[MM]-[DD]-[hh]-[mm]-[ss]-[UniqueString]
     ```

     For example, if you specify `logs/` as the prefix, your log objects are named like this: 

     ```
     logs/2013-11-01-21-32-16-E568B2907131C0C0
     ```
   + **Date-based partitioning** – If you choose date-based partitioning, you can choose the event time or delivery time for the log file as the date source used in the log format. This format makes it easier to query the logs.

     If you choose date-based partitioning, the log file key format appears as follows: 

     ```
     [DestinationPrefix][SourceAccountId]/[SourceRegion]/[SourceBucket]/[YYYY]/[MM]/[DD]/[YYYY]-[MM]-[DD]-[hh]-[mm]-[ss]-[UniqueString]
     ```

     For example, if you specify `logs/` as the target prefix, your log objects are named like this:

     ```
     logs/123456789012/us-west-2/amzn-s3-demo-source-bucket/2023/03/01/2023-03-01-21-32-16-E568B2907131C0C0
     ```

     For delivery time delivery, the time in the log file names corresponds to the delivery time for the log files. 

     For event time delivery, the year, month, and day correspond to the day on which the event occurred, and the hour, minutes and seconds are set to `00` in the key. The logs delivered in these log files are for a specific day only. 

   

   If you're configuring your logs through the AWS Command Line Interface (AWS CLI), AWS SDKs, or Amazon S3 REST API, use `TargetObjectKeyFormat` to specify the log object key format. To specify non-date-based partitioning, use `SimplePrefix`. To specify data-based partitioning, use `PartitionedPrefix`. If you use `PartitionedPrefix`, use `PartitionDateSource` to specify either `EventTime` or `DeliveryTime`.

   For `SimplePrefix`, the log file key format appears as follows:

   ```
   [TargetPrefix][YYYY]-[MM]-[DD]-[hh]-[mm]-[ss]-[UniqueString]
   ```

   For `PartitionedPrefix` with event time or delivery time, the log file key format appears as follows:

   ```
   [TargetPrefix][SourceAccountId]/[SourceRegion]/[SourceBucket]/[YYYY]/[MM]/[DD]/[YYYY]-[MM]-[DD]-[hh]-[mm]-[ss]-[UniqueString]
   ```

## Log object key format
<a name="server-log-keyname-format"></a>

Amazon S3 uses the following object key formats for the log objects that it uploads in the destination bucket:
+ **Non-date-based partitioning** – This is the original log object key format. If you choose this format, the log file key format appears as follows: 

  ```
  [DestinationPrefix][YYYY]-[MM]-[DD]-[hh]-[mm]-[ss]-[UniqueString]
  ```
+ **Date-based partitioning** – If you choose date-based partitioning, you can choose the event time or delivery time for the log file as the date source used in the log format. This format makes it easier to query the logs.

  If you choose date-based partitioning, the log file key format appears as follows: 

  ```
  [DestinationPrefix][SourceAccountId]/[SourceRegion]/[SourceBucket]/[YYYY]/[MM]/[DD]/[YYYY]-[MM]-[DD]-[hh]-[mm]-[ss]-[UniqueString]
  ```

In the log object key, `YYYY`, `MM`, `DD`, `hh`, `mm`, and `ss` are the digits of the year, month, day, hour, minute, and seconds (respectively). These dates and times are in Coordinated Universal Time (UTC). 

A log file delivered at a specific time can contain records written at any point before that time. There is no way to know whether all log records for a certain time interval have been delivered or not. 

The `UniqueString` component of the key is there to prevent overwriting of files. It has no meaning, and log processing software should ignore it. 

## How are logs delivered?
<a name="how-logs-delivered"></a>

Amazon S3 periodically collects access log records, consolidates the records in log files, and then uploads log files to your destination bucket as log objects. If you enable logging on multiple source buckets that identify the same destination bucket, the destination bucket will have access logs for all those source buckets. However, each log object reports access log records for a specific source bucket. 

Amazon S3 uses a special log delivery account to write server access logs. These writes are subject to the usual access control restrictions. We recommend that you update the bucket policy on the destination bucket to grant access to the logging service principal (`logging.s3.amazonaws.com`) for access log delivery. You can also grant access for access log delivery to the S3 log delivery group through your bucket access control list (ACL). However, granting access to the S3 log delivery group by using your bucket ACL is not recommended. 

When you enable server access logging and grant access for access log delivery through your destination bucket policy, you must update the policy to allow `s3:PutObject` access for the logging service principal. If you use the Amazon S3 console to enable server access logging, the console automatically updates the destination bucket policy to grant these permissions to the logging service principal. For more information about granting permissions for server access log delivery, see [Permissions for log delivery](enable-server-access-logging.md#grant-log-delivery-permissions-general). 

**Note**  
S3 does not support delivery of CloudTrail logs or server access logs to the requester or the bucket owner for VPC endpoint requests when the VPC endpoint policy denies them or for requests that fail before the VPC policy is evaluated.

**Bucket owner enforced setting for S3 Object Ownership**  
If the destination bucket uses the Bucket owner enforced setting for Object Ownership, ACLs are disabled and no longer affect permissions. You must update the bucket policy on the destination bucket to grant access to the logging service principal. For more information about Object Ownership, see [Grant access to the S3 log delivery group for server access logging](object-ownership-migrating-acls-prerequisites.md#object-ownership-server-access-logs).

## Best-effort server log delivery
<a name="LogDeliveryBestEffort"></a>

Server access log records are delivered on a best-effort basis. Most requests for a bucket that is properly configured for logging result in a delivered log record. Most log records are delivered within a few hours of the time that they are recorded, but they can be delivered more frequently. 

The completeness and timeliness of server logging is not guaranteed. The log record for a particular request might be delivered long after the request was actually processed, or *it might not be delivered at all*. It is possible that you might even see a duplication of a log record. The purpose of server logs is to give you an idea of the nature of traffic against your bucket. Although log records are rarely lost or duplicated, be aware that server logging is not meant to be a complete accounting of all requests.

Because of the best-effort nature of server logging, your usage reports might include one or more access requests that do not appear in a delivered server log. You can find these usage reports under **Cost & usage reports** in the AWS Billing and Cost Management console.

## Bucket logging status changes take effect over time
<a name="BucketLoggingStatusChanges"></a>

Changes to the logging status of a bucket take time to actually affect the delivery of log files. For example, if you enable logging for a bucket, some requests made in the following hour might be logged, and others might not. Suppose that you change the destination bucket for logging from bucket A to bucket B. For the next hour, some logs might continue to be delivered to bucket A, whereas others might be delivered to the new destination bucket B. In all cases, the new settings eventually take effect without any further action on your part. 

For more information about logging and log files, see the following sections:

**Topics**
+ [How do I enable log delivery?](#server-access-logging-overview)
+ [Log object key format](#server-log-keyname-format)
+ [How are logs delivered?](#how-logs-delivered)
+ [Best-effort server log delivery](#LogDeliveryBestEffort)
+ [Bucket logging status changes take effect over time](#BucketLoggingStatusChanges)
+ [Enabling Amazon S3 server access logging](enable-server-access-logging.md)
+ [Amazon S3 server access log format](LogFormat.md)
+ [Deleting Amazon S3 log files](deleting-log-files-lifecycle.md)
+ [Using Amazon S3 server access logs to identify requests](using-s3-access-logs-to-identify-requests.md)
+ [Troubleshoot server access logging](troubleshooting-server-access-logging.md)

# Enabling Amazon S3 server access logging
<a name="enable-server-access-logging"></a>

Server access logging provides detailed records for the requests that are made to an Amazon S3 bucket. Server access logs are useful for many applications. For example, access log information can be useful in security and access audits. This information can also help you learn about your customer base and understand your Amazon S3 bill.

By default, Amazon S3 doesn't collect server access logs. When you enable logging, Amazon S3 delivers access logs for a source bucket to a destination bucket (also known as a *target bucket*) that you choose. The destination bucket must be in the same AWS Region and AWS account as the source bucket. 

An access log record contains details about the requests that are made to a bucket. This information can include the request type, the resources that are specified in the request, and the time and date that the request was processed. For more information about logging basics, see [Logging requests with server access logging](ServerLogs.md). 

**Important**  
There is no extra charge for enabling server access logging on an Amazon S3 bucket. However, any log files that the system delivers to you will accrue the usual charges for storage. (You can delete the log files at any time.) We do not assess data-transfer charges for log file delivery, but we do charge the normal data-transfer rate for accessing the log files.
Your destination bucket should not have server access logging enabled. You can have logs delivered to any bucket that you own that is in the same Region as the source bucket, including the source bucket itself. However, delivering logs to the source bucket will cause an infinite loop of logs and is not recommended. For simpler log management, we recommend that you save access logs in a different bucket. For more information, see [How do I enable log delivery?](ServerLogs.md#server-access-logging-overview)
S3 buckets that have S3 Object Lock enabled can't be used as destination buckets for server access logs. Your destination bucket must not have a default retention period configuration.
The destination bucket must not have Requester Pays enabled.

You can enable or disable server access logging by using the Amazon S3 console, Amazon S3 API, the AWS Command Line Interface (AWS CLI), or AWS SDKs. 

## Permissions for log delivery
<a name="grant-log-delivery-permissions-general"></a>

Amazon S3 uses a special log delivery account to write server access logs. These writes are subject to the usual access control restrictions. For access log delivery, you must grant the logging service principal (`logging.s3.amazonaws.com`) access to your destination bucket.

To grant permissions to Amazon S3 for log delivery, you can use either a bucket policy or bucket access control lists (ACLs), depending on your destination bucket's S3 Object Ownership settings. However, we recommend that you use a bucket policy instead of ACLs. 

**Bucket owner enforced setting for S3 Object Ownership**  
If the destination bucket uses the Bucket owner enforced setting for Object Ownership, ACLs are disabled and no longer affect permissions. In this case, you must update the bucket policy for the destination bucket to grant access to the logging service principal. You can't update your bucket ACL to grant access to the S3 log delivery group. You also can't include destination grants (also known as *target grants*) in your [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html) configuration. 

For information about migrating existing bucket ACLs for access log delivery to a bucket policy, see [Grant access to the S3 log delivery group for server access logging](object-ownership-migrating-acls-prerequisites.md#object-ownership-server-access-logs). For more information about Object Ownership, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md). When you create new buckets, ACLs are disabled by default.

**Granting access by using a bucket policy**  
To grant access by using the bucket policy on the destination bucket, update the bucket policy to grant the `s3:PutObject` permission to the logging service principal. If you use the Amazon S3 console to enable server access logging, the console automatically updates the bucket policy on the destination bucket to grant this permission to the logging service principal. If you enable server access logging programmatically, you must manually update the bucket policy for the destination bucket to grant access to the logging service principal. 

For an example bucket policy that grants access to the logging service principal, see [Grant permissions to the logging service principal by using a bucket policy](#grant-log-delivery-permissions-bucket-policy).

**Granting access by using bucket ACLs**  
You can alternately use bucket ACLs to grant access for access log delivery. You add a grant entry to the bucket ACL that grants `WRITE` and `READ_ACP` permissions to the S3 log delivery group. However, granting access to the S3 log delivery group by using bucket ACLs is not recommended. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md). For information about migrating existing bucket ACLs for access log delivery to a bucket policy, see [Grant access to the S3 log delivery group for server access logging](object-ownership-migrating-acls-prerequisites.md#object-ownership-server-access-logs). For an example ACL that grants access to the logging service principal, see [Grant permissions to the log delivery group by using a bucket ACL](#grant-log-delivery-permissions-acl).

### Grant permissions to the logging service principal by using a bucket policy
<a name="grant-log-delivery-permissions-bucket-policy"></a>

This example bucket policy grants the `s3:PutObject` permission to the logging service principal (`logging.s3.amazonaws.com`). To use this bucket policy, replace the `user input placeholders` with your own information. In the following policy, `amzn-s3-demo-destination-bucket` is the destination bucket where server access logs will be delivered, and `amzn-s3-demo-source-bucket` is the source bucket. `EXAMPLE-LOGGING-PREFIX` is the optional destination prefix (also known as a *target prefix*) that you want to use for your log objects. `SOURCE-ACCOUNT-ID` is the AWS account that owns the source bucket. 

**Note**  
If there are `Deny` statements in your bucket policy, make sure that they don't prevent Amazon S3 from delivering access logs.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "S3ServerAccessLogsPolicy",
            "Effect": "Allow",
            "Principal": {
                "Service": "logging.s3.amazonaws.com"
            },
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/EXAMPLE-LOGGING-PREFIX*",
            "Condition": {
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:s3:::amzn-s3-demo-source-bucket"
                },
                "StringEquals": {
                    "aws:SourceAccount": "SOURCE-ACCOUNT-ID"
                }
            }
        }
    ]
}
```

------

### Grant permissions to the log delivery group by using a bucket ACL
<a name="grant-log-delivery-permissions-acl"></a>

**Note**  
As a security best practice, Amazon S3 disables access control lists (ACLs) by default in all new buckets. For more information about ACL permissions in the Amazon S3 console, see [Configuring ACLs](managing-acls.md). 

Although we do not recommend this approach, you can grant permissions to the log delivery group by using a bucket ACL. However, if the destination bucket uses the Bucket owner enforced setting for Object Ownership, you can't set bucket or object ACLs. You also can't include destination grants (also known as *target grants*) in your [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html) configuration. Instead, you must use a bucket policy to grant access to the logging service principal (`logging.s3.amazonaws.com`). For more information, see [Permissions for log delivery](#grant-log-delivery-permissions-general).

In the bucket ACL, the log delivery group is represented by the following URL:

```
1. http://acs.amazonaws.com/groups/s3/LogDelivery
```

To grant `WRITE` and `READ_ACP` (ACL read) permissions, add the following grants to the destination bucket ACL:

```
 1. <Grant>
 2.     <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"  xsi:type="Group">
 3.         <URI>http://acs.amazonaws.com/groups/s3/LogDelivery</URI> 
 4.     </Grantee>
 5.     <Permission>WRITE</Permission>
 6. </Grant>
 7. <Grant>
 8.     <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"  xsi:type="Group">
 9.         <URI>http://acs.amazonaws.com/groups/s3/LogDelivery</URI> 
10.     </Grantee>
11.     <Permission>READ_ACP</Permission>
12. </Grant>
```

For examples of adding ACL grants programmatically, see [Configuring ACLs](managing-acls.md).

**Important**  
When you enable Amazon S3 server access logging by using AWS CloudFormation on a bucket and you're using ACLs to grant access to the S3 log delivery group, you must also add "`AccessControl": "LogDeliveryWrite"` to your CloudFormation template. Doing so is important because you can grant those permissions only by creating an ACL for the bucket, but you can't create custom ACLs for buckets in CloudFormation. You can use only canned ACLs with CloudFormation.

## To enable server access logging
<a name="enable-server-logging"></a>

To enable server access logging by using the Amazon S3 console, Amazon S3 REST API, AWS SDKs, and AWS CLI, use the following procedures.

### Using the S3 console
<a name="server-access-logging"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to enable server access logging for.

1. Choose **Properties**.

1. In the **Server access logging** section, choose **Edit**.

1. Under **Server access logging**, choose **Enable**. 

1. Under **Destination bucket**, specify a bucket and an optional prefix. If you specify a prefix, we recommend including a forward slash (`/`) after the prefix to make it easier to find your logs. 
**Note**  
Specifying a prefix with a slash (`/`) makes it simpler for you to locate the log objects. For example, if you specify the prefix value `logs/`, each log object that Amazon S3 creates begins with the `logs/` prefix in its key, as follows:  

   ```
   logs/2013-11-01-21-32-16-E568B2907131C0C0
   ```
If you specify the prefix value `logs`, the log object appears as follows:  

   ```
   logs2013-11-01-21-32-16-E568B2907131C0C0
   ```

1. Under **Log object key format**, do one of the following:
   + To choose non-date-based partitioning, choose **[DestinationPrefix][YYYY]-[MM]-[DD]-[hh]-[mm]-[ss]-[UniqueString]**.
   + To choose date-based partitioning, choose **[DestinationPrefix][SourceAccountId]/[SourceRegion]/[SourceBucket]/[YYYY]/[MM]/[DD]/[YYYY]-[MM]-[DD]-[hh]-[mm]-[ss]-[UniqueString]**, then choose **S3 event time** or **Log file delivery time**.

1. Choose **Save changes**.

   When you enable server access logging on a bucket, the console both enables logging on the source bucket and updates the bucket policy for the destination bucket to grant the `s3:PutObject` permission to the logging service principal (`logging.s3.amazonaws.com`). For more information about this bucket policy, see [Grant permissions to the logging service principal by using a bucket policy](#grant-log-delivery-permissions-bucket-policy).

   You can view the logs in the destination bucket. After you enable server access logging, it might take a few hours before the logs are delivered to the target bucket. For more information about how and when logs are delivered, see [How are logs delivered?](ServerLogs.md#how-logs-delivered)

For more information, see [Viewing the properties for an S3 general purpose bucket](view-bucket-properties.md).

### Using the REST API
<a name="enable-logging-rest"></a>

To enable logging, you submit a [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlogging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlogging.html) request to add the logging configuration on the source bucket. The request specifies the destination bucket (also known as a *target bucket*) and, optionally, the prefix to be used with all log object keys. 

The following example identifies `amzn-s3-demo-destination-bucket` as the destination bucket and *`logs/`* as the prefix. 

```
1. <BucketLoggingStatus xmlns="http://doc.s3.amazonaws.com/2006-03-01">
2.   <LoggingEnabled>
3.     <TargetBucket>amzn-s3-demo-destination-bucket</TargetBucket>
4.     <TargetPrefix>logs/</TargetPrefix>
5.   </LoggingEnabled>
6. </BucketLoggingStatus>
```

The following example identifies `amzn-s3-demo-destination-bucket` as the destination bucket, *`logs/`* as the prefix, and `EventTime` as the log object key format. 

```
 1. <BucketLoggingStatus xmlns="http://doc.s3.amazonaws.com/2006-03-01">
 2.   <LoggingEnabled>
 3.     <TargetBucket>amzn-s3-demo-destination-bucket</TargetBucket>
 4.     <TargetPrefix>logs/</TargetPrefix>
 5.     <TargetObjectKeyFormat>
 6.       <PartitionedPrefix>
 7.          <PartitionDateSource>EventTime</PartitionDateSource>
 8.       </PartitionedPrefix>
 9.   </TargetObjectKeyFormat>
10.   </LoggingEnabled>
11. </BucketLoggingStatus>
```

The log objects are written and owned by the S3 log delivery account, and the bucket owner is granted full permissions on the log objects. You can optionally use destination grants (also known as *target grants*) to grant permissions to other users so that they can access the logs. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlogging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlogging.html). 

**Note**  
If the destination bucket uses the Bucket owner enforced setting for Object Ownership, you can't use destination grants to grant permissions to other users. To grant permissions to others, you can update the bucket policy on the destination bucket. For more information, see [Permissions for log delivery](#grant-log-delivery-permissions-general). 

To retrieve the logging configuration on a bucket, use the [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETlogging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETlogging.html) API operation. 

To delete the logging configuration, you send a `PutBucketLogging` request with an empty `BucketLoggingStatus`: 

```
1. <BucketLoggingStatus xmlns="http://doc.s3.amazonaws.com/2006-03-01">
2. </BucketLoggingStatus>
```

To enable logging on a bucket, you can use either the Amazon S3 API or the AWS SDK wrapper libraries.

### Using the AWS SDKs
<a name="enable-logging-sdk"></a>

The following examples enable logging on a bucket. You must create two buckets, a source bucket and a destination (target) bucket. The examples update the bucket ACL on the destination bucket first. They then grant the log delivery group the necessary permissions to write logs to the destination bucket, and then they enable logging on the source bucket. 

These examples won't work on destination buckets that use the Bucket owner enforced setting for Object Ownership.

If the destination (target) bucket uses the Bucket owner enforced setting for Object Ownership, you can't set bucket or object ACLs. You also can't include destination (target) grants in your [PutBucketLogging](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html) configuration. You must use a bucket policy to grant access to the logging service principal (`logging.s3.amazonaws.com`). For more information, see [Permissions for log delivery](#grant-log-delivery-permissions-general).

------
#### [ .NET ]

**SDK for .NET**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [AWS Code Examples Repository](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/dotnetv3/S3#code-examples). 

```
    using System;
    using System.IO;
    using System.Threading.Tasks;
    using Amazon.S3;
    using Amazon.S3.Model;
    using Microsoft.Extensions.Configuration;

    /// <summary>
    /// This example shows how to enable logging on an Amazon Simple Storage
    /// Service (Amazon S3) bucket. You need to have two Amazon S3 buckets for
    /// this example. The first is the bucket for which you wish to enable
    /// logging, and the second is the location where you want to store the
    /// logs.
    /// </summary>
    public class ServerAccessLogging
    {
        private static IConfiguration _configuration = null!;

        public static async Task Main()
        {
            LoadConfig();

            string bucketName = _configuration["BucketName"];
            string logBucketName = _configuration["LogBucketName"];
            string logObjectKeyPrefix = _configuration["LogObjectKeyPrefix"];
            string accountId = _configuration["AccountId"];

            // If the AWS Region defined for your default user is different
            // from the Region where your Amazon S3 bucket is located,
            // pass the Region name to the Amazon S3 client object's constructor.
            // For example: RegionEndpoint.USWest2 or RegionEndpoint.USEast2.
            IAmazonS3 client = new AmazonS3Client();

            try
            {
                // Update bucket policy for target bucket to allow delivery of logs to it.
                await SetBucketPolicyToAllowLogDelivery(
                    client,
                    bucketName,
                    logBucketName,
                    logObjectKeyPrefix,
                    accountId);

                // Enable logging on the source bucket.
                await EnableLoggingAsync(
                    client,
                    bucketName,
                    logBucketName,
                    logObjectKeyPrefix);
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine($"Error: {e.Message}");
            }
        }

        /// <summary>
        /// This method grants appropriate permissions for logging to the
        /// Amazon S3 bucket where the logs will be stored.
        /// </summary>
        /// <param name="client">The initialized Amazon S3 client which will be used
        /// to apply the bucket policy.</param>
        /// <param name="sourceBucketName">The name of the source bucket.</param>
        /// <param name="logBucketName">The name of the bucket where logging
        /// information will be stored.</param>
        /// <param name="logPrefix">The logging prefix where the logs should be delivered.</param>
        /// <param name="accountId">The account id of the account where the source bucket exists.</param>
        /// <returns>Async task.</returns>
        public static async Task SetBucketPolicyToAllowLogDelivery(
            IAmazonS3 client,
            string sourceBucketName,
            string logBucketName,
            string logPrefix,
            string accountId)
        {
            var resourceArn = @"""arn:aws:s3:::" + logBucketName + "/" + logPrefix + @"*""";

            var newPolicy = @"{
                                ""Statement"":[{
                                ""Sid"": ""S3ServerAccessLogsPolicy"",
                                ""Effect"": ""Allow"",
                                ""Principal"": { ""Service"": ""logging.s3.amazonaws.com"" },
                                ""Action"": [""s3:PutObject""],
                                ""Resource"": [" + resourceArn + @"],
                                ""Condition"": {
                                ""ArnLike"": { ""aws:SourceArn"": ""arn:aws:s3:::" + sourceBucketName + @""" },
                                ""StringEquals"": { ""aws:SourceAccount"": """ + accountId + @""" }
                                        }
                                    }]
                                }";
            Console.WriteLine($"The policy to apply to bucket {logBucketName} to enable logging:");
            Console.WriteLine(newPolicy);

            PutBucketPolicyRequest putRequest = new PutBucketPolicyRequest
            {
                BucketName = logBucketName,
                Policy = newPolicy,
            };
            await client.PutBucketPolicyAsync(putRequest);
            Console.WriteLine("Policy applied.");
        }

        /// <summary>
        /// This method enables logging for an Amazon S3 bucket. Logs will be stored
        /// in the bucket you selected for logging. Selected prefix
        /// will be prepended to each log object.
        /// </summary>
        /// <param name="client">The initialized Amazon S3 client which will be used
        /// to configure and apply logging to the selected Amazon S3 bucket.</param>
        /// <param name="bucketName">The name of the Amazon S3 bucket for which you
        /// wish to enable logging.</param>
        /// <param name="logBucketName">The name of the Amazon S3 bucket where logging
        /// information will be stored.</param>
        /// <param name="logObjectKeyPrefix">The prefix to prepend to each
        /// object key.</param>
        /// <returns>Async task.</returns>
        public static async Task EnableLoggingAsync(
            IAmazonS3 client,
            string bucketName,
            string logBucketName,
            string logObjectKeyPrefix)
        {
            Console.WriteLine($"Enabling logging for bucket {bucketName}.");
            var loggingConfig = new S3BucketLoggingConfig
            {
                TargetBucketName = logBucketName,
                TargetPrefix = logObjectKeyPrefix,
            };

            var putBucketLoggingRequest = new PutBucketLoggingRequest
            {
                BucketName = bucketName,
                LoggingConfig = loggingConfig,
            };
            await client.PutBucketLoggingAsync(putBucketLoggingRequest);
            Console.WriteLine($"Logging enabled.");
        }

        /// <summary>
        /// Loads configuration from settings files.
        /// </summary>
        public static void LoadConfig()
        {
            _configuration = new ConfigurationBuilder()
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile("settings.json") // Load settings from .json file.
                .AddJsonFile("settings.local.json", true) // Optionally, load local settings.
                .Build();
        }
    }
```
+  For API details, see [PutBucketLogging](https://docs.aws.amazon.com/goto/DotNetSDKV3/s3-2006-03-01/PutBucketLogging) in *AWS SDK for .NET API Reference*. 

------
#### [ Java ]

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.BucketLoggingStatus;
import software.amazon.awssdk.services.s3.model.LoggingEnabled;
import software.amazon.awssdk.services.s3.model.PartitionedPrefix;
import software.amazon.awssdk.services.s3.model.PutBucketLoggingRequest;
import software.amazon.awssdk.services.s3.model.TargetObjectKeyFormat;

// Class to set a bucket policy on a target S3 bucket and enable server access logging on a source S3 bucket.
public class ServerAccessLogging {
    private static S3Client s3Client;

    public static void main(String[] args) {
        String sourceBucketName = "SOURCE-BUCKET";
        String targetBucketName = "TARGET-BUCKET";
        String sourceAccountId = "123456789012";
        String targetPrefix = "logs/";

        // Create S3 Client.
        s3Client = S3Client.builder().
                region(Region.US_EAST_2)
                .build();

        // Set a bucket policy on the target S3 bucket to enable server access logging by granting the
        // logging.s3.amazonaws.com principal permission to use the PutObject operation.
        ServerAccessLogging serverAccessLogging = new ServerAccessLogging();
        serverAccessLogging.setTargetBucketPolicy(sourceAccountId, sourceBucketName, targetBucketName);

        // Enable server access logging on the source S3 bucket.
        serverAccessLogging.enableServerAccessLogging(sourceBucketName, targetBucketName,
                targetPrefix);

    }

    // Function to set a bucket policy on the target S3 bucket to enable server access logging by granting the
    // logging.s3.amazonaws.com principal permission to use the PutObject operation.
    public void setTargetBucketPolicy(String sourceAccountId, String sourceBucketName, String targetBucketName) {
        String policy = "{\n" +
                "    \"Version\": \"2012-10-17\",\n" +
                "    \"Statement\": [\n" +
                "        {\n" +
                "            \"Sid\": \"S3ServerAccessLogsPolicy\",\n" +
                "            \"Effect\": \"Allow\",\n" +
                "            \"Principal\": {\"Service\": \"logging.s3.amazonaws.com\"},\n" +
                "            \"Action\": [\n" +
                "                \"s3:PutObject\"\n" +
                "            ],\n" +
                "            \"Resource\": \"arn:aws:s3:::" + targetBucketName + "/*\",\n" +
                "            \"Condition\": {\n" +
                "                \"ArnLike\": {\n" +
                "                    \"aws:SourceArn\": \"arn:aws:s3:::" + sourceBucketName + "\"\n" +
                "                },\n" +
                "                \"StringEquals\": {\n" +
                "                    \"aws:SourceAccount\": \"" + sourceAccountId + "\"\n" +
                "                }\n" +
                "            }\n" +
                "        }\n" +
                "    ]\n" +
                "}";
        s3Client.putBucketPolicy(b -> b.bucket(targetBucketName).policy(policy));
    }

    // Function to enable server access logging on the source S3 bucket.
    public void enableServerAccessLogging(String sourceBucketName, String targetBucketName,
            String targetPrefix) {
        TargetObjectKeyFormat targetObjectKeyFormat = TargetObjectKeyFormat.builder()
                .partitionedPrefix(PartitionedPrefix.builder().partitionDateSource("EventTime").build())
                .build();
        LoggingEnabled loggingEnabled = LoggingEnabled.builder()
                .targetBucket(targetBucketName)
                .targetPrefix(targetPrefix)
                .targetObjectKeyFormat(targetObjectKeyFormat)
                .build();
        BucketLoggingStatus bucketLoggingStatus = BucketLoggingStatus.builder()
                .loggingEnabled(loggingEnabled)
                .build();
        s3Client.putBucketLogging(PutBucketLoggingRequest.builder()
                .bucket(sourceBucketName)
                .bucketLoggingStatus(bucketLoggingStatus)
                .build());
    }

}
```

------

### Using the AWS CLI
<a name="enabling-s3-access-logs-for-requests"></a>

We recommend that you create a dedicated logging bucket in each AWS Region that you have S3 buckets in. Then have your Amazon S3 access logs delivered to that S3 bucket. For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-bucket-logging.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-bucket-logging.html) in the *AWS CLI Reference*.

If the destination (target) bucket uses the Bucket owner enforced setting for Object Ownership, you can't set bucket or object ACLs. You also can't include destination (target) grants in your [PutBucketLogging](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html) configuration. You must use a bucket policy to grant access to the logging service principal (`logging.s3.amazonaws.com`). For more information, see [Permissions for log delivery](#grant-log-delivery-permissions-general).

**Example — Enable access logs with five buckets across two Regions**  
In this example, you have the following five buckets:   
+ `amzn-s3-demo-source-bucket-us-east-1`
+ `amzn-s3-demo-source-bucket1-us-east-1`
+ `amzn-s3-demo-source-bucket2-us-east-1`
+ `amzn-s3-demo-bucket1-us-west-2`
+ `amzn-s3-demo-bucket2-us-west-2`
**Note**  
The final step of the following procedure provides example bash scripts that you can use to create your logging buckets and enable server access logging on these buckets. To use those scripts, you must create the `policy.json` and `logging.json` files, as described in the following procedure.

1. Create two logging destination buckets in the US West (Oregon) and US East (N. Virginia) Regions and give them the following names:
   + `amzn-s3-demo-destination-bucket-logs-us-east-1`
   + `amzn-s3-demo-destination-bucket1-logs-us-west-2`

1. Later in these steps, you will enable server access logging as follows:
   + `amzn-s3-demo-source-bucket-us-east-1` logs to the S3 bucket `amzn-s3-demo-destination-bucket-logs-us-east-1` with the prefix `amzn-s3-demo-source-bucket-us-east-1`
   + `amzn-s3-demo-source-bucket1-us-east-1` logs to the S3 bucket `amzn-s3-demo-destination-bucket-logs-us-east-1` with the prefix `amzn-s3-demo-source-bucket1-us-east-1`
   + `amzn-s3-demo-source-bucket2-us-east-1` logs to the S3 bucket `amzn-s3-demo-destination-bucket-logs-us-east-1` with the prefix `amzn-s3-demo-source-bucket2-us-east-1`
   + `amzn-s3-demo-bucket1-us-west-2` logs to the S3 bucket `amzn-s3-demo-destination-bucket1-logs-us-west-2` with the prefix `amzn-s3-demo-bucket1-us-west-2`
   + `amzn-s3-demo-bucket2-us-west-2` logs to the S3 bucket `amzn-s3-demo-destination-bucket1-logs-us-west-2` with the prefix `amzn-s3-demo-bucket2-us-west-2`

1. For each destination logging bucket, grant permissions for server access log delivery by using a bucket ACL *or* a bucket policy:
   + **Update the bucket policy** (Recommended) – To grant permissions to the logging service principal, use the following `put-bucket-policy` command. Replace `amzn-s3-demo-destination-bucket-logs` with the name of your destination bucket.

     ```
     1. aws s3api put-bucket-policy --bucket amzn-s3-demo-destination-bucket-logs --policy file://policy.json
     ```

     `Policy.json` is a JSON document in the current folder that contains the following bucket policy. To use this bucket policy, replace the `user input placeholders` with your own information. In the following policy, *`amzn-s3-demo-destination-bucket-logs`* is the destination bucket where server access logs will be delivered, and `amzn-s3-demo-source-bucket` is the source bucket. `SOURCE-ACCOUNT-ID` is the AWS account that owns the source bucket.

------
#### [ JSON ]

****  

     ```
     {
         "Version":"2012-10-17",		 	 	 
         "Statement": [
             {
                 "Sid": "S3ServerAccessLogsPolicy",
                 "Effect": "Allow",
                 "Principal": {
                     "Service": "logging.s3.amazonaws.com"
                 },
                 "Action": [
                     "s3:PutObject"
                 ],
                 "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket-logs/*",
                 "Condition": {
                     "ArnLike": {
                         "aws:SourceArn": "arn:aws:s3:::amzn-s3-demo-source-bucket"
                     },
                     "StringEquals": {
                         "aws:SourceAccount": "SOURCE-ACCOUNT-ID"
                     }
                 }
             }
         ]
     }
     ```

------
   + **Update the bucket ACL** – To grant permissions to the S3 log delivery group, use the following `put-bucket-acl` command. Replace *`amzn-s3-demo-destination-bucket-logs`* with the name of your destination (target) bucket.

     ```
     1. aws s3api put-bucket-acl --bucket amzn-s3-demo-destination-bucket-logs  --grant-write URI=http://acs.amazonaws.com/groups/s3/LogDelivery --grant-read-acp URI=http://acs.amazonaws.com/groups/s3/LogDelivery 
     ```

1. Then, create a `logging.json` file that contains your logging configuration (based on one of the three examples that follow). After you create the `logging.json` file, you can apply the logging configuration by using the following `put-bucket-logging` command. Replace *`amzn-s3-demo-destination-bucket-logs`* with the name of your destination (target) bucket.

   ```
   1. aws s3api put-bucket-logging --bucket amzn-s3-demo-destination-bucket-logs --bucket-logging-status file://logging.json 
   ```
**Note**  
Instead of using this `put-bucket-logging` command to apply the logging configuration on each destination bucket, you can use one of the bash scripts provided in the next step. To use those scripts, you must create the `policy.json` and `logging.json` files, as described in this procedure.

   The `logging.json` file is a JSON document in the current folder that contains your logging configuration. If a destination bucket uses the Bucket owner enforced setting for Object Ownership, your logging configuration can't contain destination (target) grants. For more information, see [Permissions for log delivery](#grant-log-delivery-permissions-general).  
**Example – `logging.json` without destination (target) grants**  

   The following example `logging.json` file doesn't contain destination (target) grants. Therefore, you can apply this configuration to a destination (target) bucket that uses the Bucket owner enforced setting for Object Ownership.

   ```
     {
         "LoggingEnabled": {
             "TargetBucket": "amzn-s3-demo-destination-bucket-logs",
             "TargetPrefix": "amzn-s3-demo-destination-bucket/"
          }
      }
   ```  
**Example – `logging.json` with destination (target) grants**  

   The following example `logging.json` file contains destination (target) grants.

   If the destination bucket uses the Bucket owner enforced setting for Object Ownership, you can't include destination (target) grants in your [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLogging.html) configuration. For more information, see [Permissions for log delivery](#grant-log-delivery-permissions-general).

   ```
     {
         "LoggingEnabled": {
             "TargetBucket": "amzn-s3-demo-destination-bucket-logs",
             "TargetPrefix": "amzn-s3-demo-destination-bucket/",
             "TargetGrants": [
                  {
                     "Grantee": {
                         "Type": "CanonicalUser",
                         "ID": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
                      },
                     "Permission": "FULL_CONTROL"
                  }
              ]
          }
      }
   ```

**Grantee values**  
You can specify the person (grantee) to whom you're assigning access rights (by using request elements) in the following ways:
   + By the person's ID:

     ```
     {
       "Grantee": {
         "Type": "CanonicalUser",
         "ID": "ID"
       }
     }
     ```
   + By URI:

     ```
     {
       "Grantee": {
         "Type": "Group",
         "URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"
       }
     }
     ```  
**Example – `logging.json` with the log object key format set to S3 event time**  

   The following `logging.json` file changes the log object key format to S3 event time. For more information about setting the log object key format, see [How do I enable log delivery?](ServerLogs.md#server-access-logging-overview)

   ```
     { 
       "LoggingEnabled": {
           "TargetBucket": "amzn-s3-demo-destination-bucket-logs",
           "TargetPrefix": "amzn-s3-demo-destination-bucket/",
           "TargetObjectKeyFormat": { 
               "PartitionedPrefix": { 
                   "PartitionDateSource": "EventTime" 
               }
            }
       }
   }
   ```

1. Use one of the following bash scripts to add access logging for all the buckets in your account. Replace *`amzn-s3-demo-destination-bucket-logs`* with the name of your destination (target) bucket, and replace `us-west-2` with the name of the Region that your buckets are located in.
**Note**  
This script works only if all of your buckets are in the same Region. If you have buckets in multiple Regions, you must adjust the script.   
**Example – Grant access with bucket policies and add logging for the buckets in your account**  

   ```
     loggingBucket='amzn-s3-demo-destination-bucket-logs'
     region='us-west-2'
     
     
     # Create the logging bucket.
     aws s3 mb s3://$loggingBucket --region $region
     
     aws s3api put-bucket-policy --bucket $loggingBucket --policy file://policy.json
     
     # List the buckets in this account.
     buckets="$(aws s3 ls | awk '{print $3}')"
     
     # Put a bucket logging configuration on each bucket.
     for bucket in $buckets
         do 
           # This if statement excludes the logging bucket.
           if [ "$bucket" == "$loggingBucket" ] ; then
               continue;
           fi
           printf '{
             "LoggingEnabled": {
               "TargetBucket": "%s",
               "TargetPrefix": "%s/"
           }
         }' "$loggingBucket" "$bucket"  > logging.json
         aws s3api put-bucket-logging --bucket $bucket --bucket-logging-status file://logging.json
         echo "$bucket done"
     done
     
     rm logging.json
     
     echo "Complete"
   ```  
**Example – Grant access with bucket ACLs and add logging for the buckets in your account**  

   ```
     loggingBucket='amzn-s3-demo-destination-bucket-logs'
     region='us-west-2'
     
     
     # Create the logging bucket.
     aws s3 mb s3://$loggingBucket --region $region
     
     aws s3api put-bucket-acl --bucket $loggingBucket --grant-write URI=http://acs.amazonaws.com/groups/s3/LogDelivery --grant-read-acp URI=http://acs.amazonaws.com/groups/s3/LogDelivery
     
     # List the buckets in this account.
     buckets="$(aws s3 ls | awk '{print $3}')"
     
     # Put a bucket logging configuration on each bucket.
     for bucket in $buckets
         do 
           # This if statement excludes the logging bucket.
           if [ "$bucket" == "$loggingBucket" ] ; then
               continue;
           fi
           printf '{
             "LoggingEnabled": {
               "TargetBucket": "%s",
               "TargetPrefix": "%s/"
           }
         }' "$loggingBucket" "$bucket"  > logging.json
         aws s3api put-bucket-logging --bucket $bucket --bucket-logging-status file://logging.json
         echo "$bucket done"
     done
     
     rm logging.json
     
     echo "Complete"
   ```

## Verifying your server access logs setup
<a name="verify-access-logs"></a>

After you enable server access logging, complete the following steps: 
+ Access the destination bucket and verify that the log files are being delivered. After the access logs are set up, Amazon S3 immediately starts capturing requests and logging them. However, it might take a few hours before the logs are delivered to the destination bucket. For more information, see [Bucket logging status changes take effect over time](ServerLogs.md#BucketLoggingStatusChanges) and [Best-effort server log delivery](ServerLogs.md#LogDeliveryBestEffort).

  You can also automatically verify log delivery by using Amazon S3 request metrics and setting up Amazon CloudWatch alarms for these metrics. For more information, see [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md).
+ Verify that you are able to open and read the contents of the log files.

For server access logging troubleshooting information, see [Troubleshoot server access logging](troubleshooting-server-access-logging.md).

# Amazon S3 server access log format
<a name="LogFormat"></a>

Server access logging provides detailed records for the requests that are made to an Amazon S3 bucket. You can use server access logs for the following purposes: 
+ Performing security and access audits
+ Learning about your customer base
+ Understanding your Amazon S3 bill

This section describes the format and other details about Amazon S3 server access log files.

Server access log files consist of a sequence of newline-delimited log records. Each log record represents one request and consists of space-delimited fields.

The following is an example log consisting of five log records.

**Note**  
Any field can be set to `-` to indicate that the data was unknown or unavailable, or that the field was not applicable to this request. 

```
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be amzn-s3-demo-bucket1 [06/Feb/2019:00:00:38 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be 3E57427F3EXAMPLE REST.GET.VERSIONING - "GET /amzn-s3-demo-bucket1?versioning HTTP/1.1" 200 - 113 - 7 - "-" "S3Console/0.4" - s9lzHYrFp76ZVxRcpX9+5cjAnEH2ROuNkd2BHfIa6UkFVdtjf5mKR3/eTPFvsiP/XV/VLi31234= SigV4 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader amzn-s3-demo-bucket1.s3.us-west-1.amazonaws.com TLSV1.2 arn:aws:s3:us-west-1:123456789012:accesspoint/example-AP Yes us-east-1
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be amzn-s3-demo-bucket1 [06/Feb/2019:00:00:38 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be 891CE47D2EXAMPLE REST.GET.LOGGING_STATUS - "GET /amzn-s3-demo-bucket1?logging HTTP/1.1" 200 - 242 - 11 - "-" "S3Console/0.4" - 9vKBE6vMhrNiWHZmb2L0mXOcqPGzQOI5XLnCtZNPxev+Hf+7tpT6sxDwDty4LHBUOZJG96N1234= SigV4 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader amzn-s3-demo-bucket1.s3.us-west-1.amazonaws.com TLSV1.2 - - us-east-1
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be amzn-s3-demo-bucket1 [06/Feb/2019:00:00:38 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be A1206F460EXAMPLE REST.GET.BUCKETPOLICY - "GET /amzn-s3-demo-bucket1?policy HTTP/1.1" 404 NoSuchBucketPolicy 297 - 38 - "-" "S3Console/0.4" - BNaBsXZQQDbssi6xMBdBU2sLt+Yf5kZDmeBUP35sFoKa3sLLeMC78iwEIWxs99CRUrbS4n11234= SigV4 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader amzn-s3-demo-bucket1.s3.us-west-1.amazonaws.com TLSV1.2 - Yes us-east-1
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be amzn-s3-demo-bucket1 [06/Feb/2019:00:01:00 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be 7B4A0FABBEXAMPLE REST.GET.VERSIONING - "GET /amzn-s3-demo-bucket1?versioning HTTP/1.1" 200 - 113 - 33 - "-" "S3Console/0.4" - Ke1bUcazaN1jWuUlPJaxF64cQVpUEhoZKEG/hmy/gijN/I1DeWqDfFvnpybfEseEME/u7ME1234= SigV4 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader amzn-s3-demo-bucket1.s3.us-west-1.amazonaws.com TLSV1.2 - - us-east-1
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be amzn-s3-demo-bucket1 [06/Feb/2019:00:01:57 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be DD6CC733AEXAMPLE REST.PUT.OBJECT s3-dg.pdf "PUT /amzn-s3-demo-bucket1/s3-dg.pdf HTTP/1.1" 200 - - 4406583 41754 28 "-" "S3Console/0.4" - 10S62Zv81kBW7BB6SX4XJ48o6kpcl6LPwEoizZQQxJd5qDSCTLX0TgS37kYUBKQW3+bPdrg1234= SigV4 ECDHE-RSA-AES128-SHA AuthHeader amzn-s3-demo-bucket1.s3.us-west-1.amazonaws.com TLSV1.2 - Yes us-east-1
```

The following is an example log record for the **Compute checksum** operation:

```
7cd47ef2be amzn-s3-demo-bucket [06/Feb/2019:00:00:38 +0000] - 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be e5042925-b524-4b3b-a869-f3881e78ff3a S3.COMPUTE.OBJECT.CHECKSUM example-object - - - - 1048576 - - - - - bPf7qjG4XwYdPgDQTl72GW/uotRhdPz2UryEyAFLDSRmKrakUkJCYLtAw6fdANcrsUYc1M/kIulXM1u5vZQT5g== - - - - - - - -
```

**Topics**
+ [Log record fields](#log-record-fields)
+ [Additional logging for copy operations](#AdditionalLoggingforCopyOperations)
+ [Custom access log information](#LogFormatCustom)
+ [Programming considerations for extensible server access log format](#LogFormatExtensible)

## Log record fields
<a name="log-record-fields"></a>

The following list describes the log record fields.

**Bucket Owner**  
The canonical user ID of the owner of the source bucket. The canonical user ID is another form of the AWS account ID. For more information about the canonical user ID, see [AWS account identifiers](https://docs.aws.amazon.com/general/latest/gr/acct-identifiers.html) in the *AWS General Reference*. For information about how to find the canonical user ID for your account, see [Finding the canonical user ID for your AWS account](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-identifiers.html#FindCanonicalId).  
**Example entry**  

```
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
```

**Bucket**  
The name of the bucket that the request was processed against. If the system receives a malformed request and cannot determine the bucket, the request will not appear in any server access log.  
**Example entry**  

```
amzn-s3-demo-bucket1
```

**Time**  
The time at which the request was received; these dates and times are in Coordinated Universal Time (UTC). The format, using `strftime()` terminology, is as follows: `[%d/%b/%Y:%H:%M:%S %z]`  
**Example entry**  

```
[06/Feb/2019:00:00:38 +0000]
```

**Remote IP**  
The apparent IP address of the requester. Intermediate proxies and firewalls might obscure the actual IP address of the machine that's making the request.  
**Example entry**  

```
192.0.2.3
```

**Requester**  
The canonical user ID of the requester, or a `-` for unauthenticated requests. If the requester was an IAM user, this field returns the requester's IAM user name along with the AWS account that the IAM user belongs to. This identifier is the same one used for access control purposes.  
**Example entry**  

```
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
```
If the requester is using an assumed role, this field returns the assumed IAM role.  
**Example entry**  

```
arn:aws:sts::123456789012:assumed-role/roleName/test-role
```

**Request ID**  
A string generated by Amazon S3 to uniquely identify each request. For **Compute checksum** job requests, the **Request ID** field displays the associated job ID. For more information, see [Compute checksums](batch-ops-compute-checksums.md).  
**Example entry**  

```
3E57427F33A59F07
```

**Operation**  
The operation listed here is declared as `SOAP.operation`, `REST.HTTP_method.resource_type`, `WEBSITE.HTTP_method.resource_type`, or `BATCH.DELETE.OBJECT`, or `S3.action.resource_type` for [S3 Lifecycle and logging](lifecycle-and-other-bucket-config.md#lifecycle-general-considerations-logging). For [https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-compute-checksums.html](https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-compute-checksums.html) job requests, the operation is listed as `S3.COMPUTE.OBJECT.CHECKSUM`.  
**Example entry**  

```
REST.PUT.OBJECT
S3.COMPUTE.OBJECT.CHECKSUM
```

**Key**  
The key (object name) part of the request.  
**Example entry**  

```
/photos/2019/08/puppy.jpg
```

**Request-URI**  
The `Request-URI` part of the HTTP request message. This field may include unescaped quotes from the user input.  
**Example Entry**  

```
"GET /amzn-s3-demo-bucket1/photos/2019/08/puppy.jpg?x-foo=bar HTTP/1.1"
```

**HTTP status**  
The numeric HTTP status code of the response.  
**Example entry**  

```
200
```

**Error Code**  
The Amazon S3 [Error responses ](https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html), or `-` if no error occurred.  
**Example entry**  

```
NoSuchBucket
```

**Bytes Sent**  
The number of response bytes sent, excluding HTTP protocol overhead, or `-` if zero.  
**Example entry**  

```
2662992
```

**Object Size**  
The total size of the object in question.  
**Example entry**  

```
3462992
```

**Total Time**  
The number of milliseconds that the request was in flight from the server's perspective. This value is measured from the time that your request is received to the time that the last byte of the response is sent. Measurements made from the client's perspective might be longer because of network latency.  
**Example entry**  

```
70
```

**Turn-Around Time**  
The number of milliseconds that Amazon S3 spent processing your request. This value is measured from the time that the last byte of your request was received until the time that the first byte of the response was sent.  
**Example entry**  

```
10
```

**Referer**  
The value of the HTTP `Referer` header, if present. HTTP user-agents (for example, browsers) typically set this header to the URL of the linking or embedding page when making a request. This field may include unescaped quotes from the user input.  
**Example entry**  

```
"http://www.example.com/webservices"
```

**User-Agent**  
The value of the HTTP `User-Agent` header. This field may include unescaped quotes from the user input.  
**Example entry**  

```
"curl/7.15.1"
```

**Version Id**  
The version ID in the request, or `-` if the operation doesn't take a `versionId` parameter.  
**Example entry**  

```
3HL4kqtJvjVBH40Nrjfkd
```

**Host Id**  
The `x-amz-id-2` or Amazon S3 extended request ID.   
**Example entry**  

```
s9lzHYrFp76ZVxRcpX9+5cjAnEH2ROuNkd2BHfIa6UkFVdtjf5mKR3/eTPFvsiP/XV/VLi31234=
```

**Signature Version**  
The signature version, `SigV2` or `SigV4`, that was used to authenticate the request or a `-` for unauthenticated requests.  
**Example entry**  

```
SigV2
```

**Cipher Suite**  
The Transport Layer Security (TLS) cipher that was negotiated for an HTTPS request or a `-` for HTTP.  
**Example entry**  

```
ECDHE-RSA-AES128-GCM-SHA256
```

**Authentication Type**  
The type of request authentication used: `AuthHeader` for authentication headers, `QueryString` for query string (presigned URL), or a `-` for unauthenticated requests.  
**Example entry**  

```
AuthHeader
```

**Host Header**  
The endpoint used to connect to Amazon S3.  
**Example entry**  

```
s3.us-west-2.amazonaws.com
```
Some earlier Regions support legacy endpoints. You might see these endpoints in your server access logs or AWS CloudTrail logs. For more information, see [Legacy endpoints](VirtualHosting.md#s3-legacy-endpoints). For a complete list of Amazon S3 Regions and endpoints, see [Amazon S3 endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/s3.html) in the *Amazon Web Services General Reference*.

**TLS version**  
The Transport Layer Security (TLS) version negotiated by the client. The value is one of following: `TLSv1.1`, `TLSv1.2`, `TLSv1.3`, or `-` if TLS wasn't used.  
**Example entry**  

```
TLSv1.2
```

**Access Point ARN**  
The Amazon Resource Name (ARN) of the access point of the request. If the access point ARN is malformed or not used, the field will contain a `-`. For more information about access points, see [Using Amazon S3 access points for general purpose buckets](using-access-points.md). For more information about ARNs, see [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) in the *AWS Reference Guide*.  
**Example entry**  

```
arn:aws:s3:us-east-1:123456789012:accesspoint/example-AP
```

**aclRequired**  
A string that indicates whether the request required an access control list (ACL) for authorization. If the request required an ACL for authorization, the string is `Yes`. If no ACLs were required, the string is `-`. For more information about ACLs, see [Access control list (ACL) overview](acl-overview.md). For more information about using the `aclRequired` field to disable ACLs, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).   
**Example entry**  

```
Yes
```

**Source region**  
The AWS Region from which the request originated. This field shows a dash (`-`) when the origin region cannot be determined (such as PrivateLink connections, Direct Connect connections, Bring your own IP addresses (BYOIP), or non-AWS IP addresses) or when the log is generated by operations triggered based on customer-set policies or actions, such as lifecycle and checksum.  
**Example entry**  

```
us-east-1
```

## Additional logging for copy operations
<a name="AdditionalLoggingforCopyOperations"></a>

A copy operation involves a `GET` and a `PUT`. For that reason, we log two records when performing a copy operation. The previous section describes the fields related to the `PUT` part of the operation. The following list describes the fields in the record that relate to the `GET` part of the copy operation.

**Bucket Owner**  
The canonical user ID of the bucket that stores the object being copied. The canonical user ID is another form of the AWS account ID. For more information about the canonical user ID, see [AWS account identifiers](https://docs.aws.amazon.com/general/latest/gr/acct-identifiers.html) in the *AWS General Reference*. For information about how to find the canonical user ID for your account, see [Finding the canonical user ID for your AWS account](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-identifiers.html#FindCanonicalId).  
**Example entry**  

```
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
```

**Bucket**  
The name of the bucket that stores the object that's being copied.  
**Example entry**  

```
amzn-s3-demo-bucket1
```

**Time**  
The time at which the request was received; these dates and times are in Coordinated Universal Time (UTC). The format, using `strftime()` terminology, is as follows: `[%d/%B/%Y:%H:%M:%S %z]`  
**Example entry**  

```
[06/Feb/2019:00:00:38 +0000]
```

**Remote IP**  
The apparent IP address of the requester. Intermediate proxies and firewalls might obscure the actual IP address of the machine that's making the request.  
**Example entry**  

```
192.0.2.3
```

**Requester**  
The canonical user ID of the requester, or a `-` for unauthenticated requests. If the requester was an IAM user, this field will return the requester's IAM user name along with the AWS account root user that the IAM user belongs to. This identifier is the same one used for access control purposes.  
**Example entry**  

```
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
```
If the requester is using an assumed role, this field returns the assumed IAM role.  
**Example entry**  

```
arn:aws:sts::123456789012:assumed-role/roleName/test-role
```

**Request ID**  
A string generated by Amazon S3 to uniquely identify each request. For **Compute checksum** job requests, the **Request ID** field displays the associated job ID. For more information, see [Compute checksums](batch-ops-compute-checksums.md).  
**Example entry**  

```
3E57427F33A59F07
```

**Operation**  
The operation listed here is declared as `SOAP.operation`, `REST.HTTP_method.resource_type`, `WEBSITE.HTTP_method.resource_type`, or `BATCH.DELETE.OBJECT`.  
**Example entry**  

```
REST.COPY.OBJECT_GET
```

**Key**  
The key (object name) of the object being copied, or `-` if the operation doesn't take a key parameter.   
**Example entry**  

```
/photos/2019/08/puppy.jpg
```

**Request-URI**  
The `Request-URI` part of the HTTP request message. This field may include unescaped quotes from the user input.  
**Example entry**  

```
"GET /amzn-s3-demo-bucket1/photos/2019/08/puppy.jpg?x-foo=bar"
```

**HTTP status**  
The numeric HTTP status code of the `GET` portion of the copy operation.  
**Example entry**  

```
200
```

**Error Code**  
The Amazon S3 [Error responses ](https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html) of the `GET` portion of the copy operation, or `-` if no error occurred.  
**Example entry**  

```
NoSuchBucket
```

**Bytes Sent**  
The number of response bytes sent, excluding the HTTP protocol overhead, or `-` if zero.  
**Example entry**  

```
2662992
```

**Object Size**  
The total size of the object in question.  
**Example entry**  

```
3462992
```

**Total Time**  
The number of milliseconds that the request was in flight from the server's perspective. This value is measured from the time that your request is received to the time that the last byte of the response is sent. Measurements made from the client's perspective might be longer because of network latency.  
**Example entry**  

```
70
```

**Turn-Around Time**  
The number of milliseconds that Amazon S3 spent processing your request. This value is measured from the time that the last byte of your request was received until the time that the first byte of the response was sent.  
**Example entry**  

```
10
```

**Referer**  
The value of the HTTP `Referer` header, if present. HTTP user-agents (for example, browsers) typically set this header to the URL of the linking or embedding page when making a request. This field may include unescaped quotes from the user input.  
**Example entry**  

```
"http://www.example.com/webservices"
```

**User-Agent**  
The value of the HTTP `User-Agent` header. This field may include unescaped quotes from the user input.  
**Example entry**  

```
"curl/7.15.1"
```

**Version Id**  
The version ID of the object being copied, or `-` if the `x-amz-copy-source` header didn't specify a `versionId` parameter as part of the copy source.  
**Example Entry**  

```
3HL4kqtJvjVBH40Nrjfkd
```

**Host Id**  
The `x-amz-id-2` or Amazon S3 extended request ID.  
**Example entry**  

```
s9lzHYrFp76ZVxRcpX9+5cjAnEH2ROuNkd2BHfIa6UkFVdtjf5mKR3/eTPFvsiP/XV/VLi31234=
```

**Signature Version**  
The signature version, `SigV2` or `SigV4`, that was used to authenticate the request, or a `-` for unauthenticated requests.  
**Example entry**  

```
SigV4
```

**Cipher Suite**  
The Transport Layer Security (TLS) cipher that was negotiated for an HTTPS request, or a `-` for HTTP.  
**Example entry**  

```
ECDHE-RSA-AES128-GCM-SHA256
```

**Authentication Type**  
The type of request authentication used: `AuthHeader` for authentication headers, `QueryString` for query strings (presigned URLs), or a `-` for unauthenticated requests.  
**Example entry**  

```
AuthHeader
```

**Host Header**  
The endpoint that was used to connect to Amazon S3.  
**Example entry**  

```
s3.us-west-2.amazonaws.com
```
Some earlier Regions support legacy endpoints. You might see these endpoints in your server access logs or AWS CloudTrail logs. For more information, see [Legacy endpoints](VirtualHosting.md#s3-legacy-endpoints). For a complete list of Amazon S3 Regions and endpoints, see [Amazon S3 endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/s3.html) in the *Amazon Web Services General Reference*.

**TLS version**  
The Transport Layer Security (TLS) version negotiated by the client. The value is one of following: `TLSv1.1`, `TLSv1.2`, `TLSv1.3`, or `-` if TLS wasn't used.  
**Example entry**  

```
TLSv1.2
```

**Access Point ARN**  
The Amazon Resource Name (ARN) of the access point of the request. If the access point ARN is malformed or not used, the field will contain a `-`. For more information about access points, see [Using Amazon S3 access points for general purpose buckets](using-access-points.md). For more information about ARNs, see [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) in the *AWS Reference Guide*.  
**Example entry**  

```
arn:aws:s3:us-east-1:123456789012:accesspoint/example-AP
```

**aclRequired**  
A string that indicates whether the request required an access control list (ACL) for authorization. If the request required an ACL for authorization, the string is `Yes`. If no ACLs were required, the string is `-`. For more information about ACLs, see [Access control list (ACL) overview](acl-overview.md). For more information about using the `aclRequired` field to disable ACLs, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).   
**Example entry**  

```
Yes
```

**Source region**  
The AWS Region from which the request originated. This field shows a dash (`-`) when the origin region cannot be determined (such as PrivateLink connections, Direct Connect connections, Bring your own IP addresses (BYOIP), or non-AWS IP addresses) or when the log is generated by operations triggered based on customer-set policies or actions, such as lifecycle and checksum.  
**Example entry**  

```
us-east-1
```

## Custom access log information
<a name="LogFormatCustom"></a>

You can include custom information to be stored in the access log record for a request. To do this, add a custom query-string parameter to the URL for the request. Amazon S3 ignores query-string parameters that begin with `x-`, but includes those parameters in the access log record for the request, as part of the `Request-URI` field of the log record. 

For example, a `GET` request for `"s3.amazonaws.com/amzn-s3-demo-bucket1/photos/2019/08/puppy.jpg?x-user=johndoe"` works the same as the request for `"s3.amazonaws.com/amzn-s3-demo-bucket1/photos/2019/08/puppy.jpg"`, except that the `"x-user=johndoe"` string is included in the `Request-URI` field for the associated log record. This functionality is available in the REST interface only.

## Programming considerations for extensible server access log format
<a name="LogFormatExtensible"></a>

Occasionally, we might extend the access log record format by adding new fields to the end of each line. Therefore, make sure that any of your code that parses server access logs can handle trailing fields that it might not understand. 

# Deleting Amazon S3 log files
<a name="deleting-log-files-lifecycle"></a>

An Amazon S3 bucket with server access logging enabled can accumulate many server log objects over time. Your application might need these access logs for a specific period after they are created, and after that, you might want to delete them. You can use Amazon S3 Lifecycle configuration to set rules so that Amazon S3 automatically queues these objects for deletion at the end of their life. 

You can define a lifecycle configuration for a subset of objects in your S3 bucket by using a shared prefix. If you specified a prefix in your server access logging configuration, you can set a lifecycle configuration rule to delete log objects that have that prefix. 

For example, suppose that your log objects have the prefix `logs/`. You can set a lifecycle configuration rule to delete all objects in the bucket that have the prefix `logs/` after a specified period of time. 

For more information about lifecycle configuration, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md).

For general information about server access logging, see [Logging requests with server access logging](ServerLogs.md).

# Using Amazon S3 server access logs to identify requests
<a name="using-s3-access-logs-to-identify-requests"></a>

You can identify Amazon S3 requests by using Amazon S3 server access logs. 

**Note**  
To identify Amazon S3 requests, we recommend that you use AWS CloudTrail data events instead of Amazon S3 server access logs. CloudTrail data events are easier to set up and contain more information. For more information, see [Identifying Amazon S3 requests using CloudTrail](cloudtrail-request-identification.md).
Depending on how many access requests you get, analyzing your logs might require more resources or time than using CloudTrail data events.

**Topics**
+ [Querying access logs for requests by using Amazon Athena](#querying-s3-access-logs-for-requests)
+ [Identifying Signature Version 2 requests by using Amazon S3 access logs](#using-s3-access-logs-to-identify-sigv2-requests)
+ [Identifying object access requests by using Amazon S3 access logs](#using-s3-access-logs-to-identify-objects-access)

## Querying access logs for requests by using Amazon Athena
<a name="querying-s3-access-logs-for-requests"></a>

You can identify Amazon S3 requests with Amazon S3 access logs by using Amazon Athena. 

Amazon S3 stores server access logs as objects in an S3 bucket. It is often easier to use a tool that can analyze the logs in Amazon S3. Athena supports analysis of S3 objects and can be used to query Amazon S3 access logs.

**Example**  
The following example shows how you can query Amazon S3 server access logs in Amazon Athena. Replace the `user input placeholders` used in the following examples with your own information.  
To specify an Amazon S3 location in an Athena query, you must provide an S3 URI for the bucket where your logs are delivered to. This URI must include the bucket name and prefix in the following format: `s3://amzn-s3-demo-bucket1-logs/prefix/` 

1. Open the Athena console at [https://console.aws.amazon.com/athena/](https://console.aws.amazon.com/athena/home).

1. In the Query Editor, run a command similar to the following. Replace `s3_access_logs_db` with the name that you want to give to your database. 

   ```
   CREATE DATABASE s3_access_logs_db
   ```
**Note**  
It's a best practice to create the database in the same AWS Region as your S3 bucket. 

1. In the Query Editor, run a command similar to the following to create a table schema in the database that you created in step 2. Replace `s3_access_logs_db.mybucket_logs` with the name that you want to give to your table. The `STRING` and `BIGINT` data type values are the access log properties. You can query these properties in Athena. For `LOCATION`, enter the S3 bucket and prefix path as noted earlier.

------
#### [ Date-based partitioning ]

   ```
   CREATE EXTERNAL TABLE s3_access_logs_db.mybucket_logs( 
    `bucketowner` STRING, 
    `bucket_name` STRING, 
    `requestdatetime` STRING, 
    `remoteip` STRING, 
    `requester` STRING, 
    `requestid` STRING, 
    `operation` STRING, 
    `key` STRING, 
    `request_uri` STRING, 
    `httpstatus` STRING, 
    `errorcode` STRING, 
    `bytessent` BIGINT, 
    `objectsize` BIGINT, 
    `totaltime` STRING, 
    `turnaroundtime` STRING, 
    `referrer` STRING, 
    `useragent` STRING, 
    `versionid` STRING, 
    `hostid` STRING, 
    `sigv` STRING, 
    `ciphersuite` STRING, 
    `authtype` STRING, 
    `endpoint` STRING, 
    `tlsversion` STRING,
    `accesspointarn` STRING,
    `aclrequired` STRING,
    `sourceregion` STRING)
    PARTITIONED BY (
      `timestamp` string)
   ROW FORMAT SERDE 
    'org.apache.hadoop.hive.serde2.RegexSerDe' 
   WITH SERDEPROPERTIES ( 
    'input.regex'='([^ ]*) ([^ ]*) \\[(.*?)\\] ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) (\"[^\"]*\"|-) (-|[0-9]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) (\"[^\"]*\"|-) ([^ ]*)(?: ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*))?.*$') 
   STORED AS INPUTFORMAT 
    'org.apache.hadoop.mapred.TextInputFormat' 
   OUTPUTFORMAT 
    'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
   LOCATION
    's3://bucket-name/prefix-name/account-id/region/source-bucket-name/'
    TBLPROPERTIES (
     'projection.enabled'='true', 
     'projection.timestamp.format'='yyyy/MM/dd', 
     'projection.timestamp.interval'='1', 
     'projection.timestamp.interval.unit'='DAYS', 
     'projection.timestamp.range'='2024/01/01,NOW', 
     'projection.timestamp.type'='date', 
     'storage.location.template'='s3://bucket-name/prefix-name/account-id/region/source-bucket-name/${timestamp}')
   ```

------
#### [ Non-date-based partitioning ]

   ```
   CREATE EXTERNAL TABLE `s3_access_logs_db.mybucket_logs`(
     `bucketowner` STRING, 
     `bucket_name` STRING, 
     `requestdatetime` STRING, 
     `remoteip` STRING, 
     `requester` STRING, 
     `requestid` STRING, 
     `operation` STRING, 
     `key` STRING, 
     `request_uri` STRING, 
     `httpstatus` STRING, 
     `errorcode` STRING, 
     `bytessent` BIGINT, 
     `objectsize` BIGINT, 
     `totaltime` STRING, 
     `turnaroundtime` STRING, 
     `referrer` STRING, 
     `useragent` STRING, 
     `versionid` STRING, 
     `hostid` STRING, 
     `sigv` STRING, 
     `ciphersuite` STRING, 
     `authtype` STRING, 
     `endpoint` STRING, 
     `tlsversion` STRING,
     `accesspointarn` STRING,
     `aclrequired` STRING,
     `sourceregion` STRING)
   ROW FORMAT SERDE 
     'org.apache.hadoop.hive.serde2.RegexSerDe' 
   WITH SERDEPROPERTIES ( 
     'input.regex'='([^ ]*) ([^ ]*) \\[(.*?)\\] ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) (\"[^\"]*\"|-) (-|[0-9]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) (\"[^\"]*\"|-) ([^ ]*)(?: ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*))?.*$') 
   STORED AS INPUTFORMAT 
     'org.apache.hadoop.mapred.TextInputFormat' 
   OUTPUTFORMAT 
     'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
   LOCATION
     's3://amzn-s3-demo-bucket1-logs/prefix/'
   ```

------

1. In the navigation pane, under **Database**, choose your database.

1. Under **Tables**, choose **Preview table** next to your table name.

   In the **Results** pane, you should see data from the server access logs, such as `bucketowner`, `bucket`, `requestdatetime`, and so on. This means that you successfully created the Athena table. You can now query the Amazon S3 server access logs.

**Example — Show who deleted an object and when (timestamp, IP address, and IAM user)**  

```
SELECT requestdatetime, remoteip, requester, key 
FROM s3_access_logs_db.mybucket_logs 
WHERE key = 'images/picture.jpg' AND operation like '%DELETE%';
```

**Example — Show all operations that were performed by an IAM user**  

```
SELECT * 
FROM s3_access_logs_db.mybucket_logs 
WHERE requester='arn:aws:iam::123456789123:user/user_name';
```

**Example — Show all operations that were performed on an object in a specific time period**  

```
SELECT *
FROM s3_access_logs_db.mybucket_logs
WHERE Key='prefix/images/picture.jpg' 
AND parse_datetime(requestdatetime,'dd/MMM/yyyy:HH:mm:ss Z')
BETWEEN parse_datetime('2017-02-18:07:00:00','yyyy-MM-dd:HH:mm:ss')
AND parse_datetime('2017-02-18:08:00:00','yyyy-MM-dd:HH:mm:ss');
```

**Example — Show how much data was transferred to a specific IP address in a specific time period**  

```
SELECT coalesce(SUM(bytessent), 0) AS bytessenttotal
FROM s3_access_logs_db.mybucket_logs
WHERE remoteip='192.0.2.1'
AND parse_datetime(requestdatetime,'dd/MMM/yyyy:HH:mm:ss Z')
BETWEEN parse_datetime('2022-06-01','yyyy-MM-dd')
AND parse_datetime('2022-07-01','yyyy-MM-dd');
```

**Example — Find request IDs for HTTP 5xx errors in a specific time period**  

```
SELECT requestdatetime, key, httpstatus, errorcode, requestid, hostid 
FROM s3_access_logs_db.mybucket_logs
WHERE httpstatus like '5%' AND timestamp
BETWEEN '2024/01/29'
AND '2024/01/30'
```

**Note**  
To reduce the time that you retain your logs, you can create an S3 Lifecycle configuration for your server access logs bucket. Create lifecycle configuration rules to remove log files periodically. Doing so reduces the amount of data that Athena analyzes for each query. For more information, see [Setting an S3 Lifecycle configuration on a bucket](how-to-set-lifecycle-configuration-intro.md).

## Identifying Signature Version 2 requests by using Amazon S3 access logs
<a name="using-s3-access-logs-to-identify-sigv2-requests"></a>

Amazon S3 support for Signature Version 2 will be turned off (deprecated). After that, Amazon S3 will no longer accept requests that use Signature Version 2, and all requests must use Signature Version 4 signing. You can identify Signature Version 2 access requests by using Amazon S3 access logs. 

**Note**  
To identify Signature Version 2 requests, we recommend that you use AWS CloudTrail data events instead of Amazon S3 server access logs. CloudTrail data events are easier to set up and contain more information than server access logs. For more information, see [Identifying Amazon S3 Signature Version 2 requests by using CloudTrail](cloudtrail-request-identification.md#cloudtrail-identification-sigv2-requests).

**Example — Show all requesters that are sending Signature Version 2 traffic**  

```
SELECT requester, sigv, Count(sigv) as sigcount 
FROM s3_access_logs_db.mybucket_logs
GROUP BY requester, sigv;
```

## Identifying object access requests by using Amazon S3 access logs
<a name="using-s3-access-logs-to-identify-objects-access"></a>

You can use queries on Amazon S3 server access logs to identify Amazon S3 object access requests, for operations such as `GET`, `PUT`, and `DELETE`, and discover further information about those requests.

The following Amazon Athena query example shows how to get all `PUT` object requests for Amazon S3 from a server access log. 

**Example — Show all requesters that are sending `PUT` object requests in a certain period**  

```
SELECT bucket_name, requester, remoteip, key, httpstatus, errorcode, requestdatetime
FROM s3_access_logs_db.mybucket_logs
WHERE operation='REST.PUT.OBJECT' 
AND parse_datetime(requestdatetime,'dd/MMM/yyyy:HH:mm:ss Z') 
BETWEEN parse_datetime('2019-07-01:00:42:42',yyyy-MM-dd:HH:mm:ss')
AND parse_datetime('2019-07-02:00:42:42','yyyy-MM-dd:HH:mm:ss')
```

The following Amazon Athena query example shows how to get all `GET` object requests for Amazon S3 from the server access log. 

**Example — Show all requesters that are sending `GET` object requests in a certain period**  

```
SELECT bucket_name, requester, remoteip, key, httpstatus, errorcode, requestdatetime
FROM s3_access_logs_db.mybucket_logs
WHERE operation='REST.GET.OBJECT' 
AND parse_datetime(requestdatetime,'dd/MMM/yyyy:HH:mm:ss Z') 
BETWEEN parse_datetime('2019-07-01:00:42:42','yyyy-MM-dd:HH:mm:ss')
AND parse_datetime('2019-07-02:00:42:42','yyyy-MM-dd:HH:mm:ss')
```

The following Amazon Athena query example shows how to get all anonymous requests to your S3 buckets from the server access log. 

**Example — Show all anonymous requesters that are making requests to a bucket during a certain period**  

```
SELECT bucket_name, requester, remoteip, key, httpstatus, errorcode, requestdatetime
FROM s3_access_logs_db.mybucket_logs
WHERE requester IS NULL 
AND parse_datetime(requestdatetime,'dd/MMM/yyyy:HH:mm:ss Z') 
BETWEEN parse_datetime('2019-07-01:00:42:42','yyyy-MM-dd:HH:mm:ss')
AND parse_datetime('2019-07-02:00:42:42','yyyy-MM-dd:HH:mm:ss')
```

The following Amazon Athena query shows how to identify all requests to your S3 buckets that required an access control list (ACL) for authorization. You can use this information to migrate those ACL permissions to the appropriate bucket policies and disable ACLs. After you've created these bucket policies, you can disable ACLs for these buckets. For more information about disabling ACLs, see [Prerequisites for disabling ACLs](object-ownership-migrating-acls-prerequisites.md). 

**Example — Identify all requests that required an ACL for authorization**  

```
SELECT bucket_name, requester, key, operation, aclrequired, requestdatetime
FROM s3_access_logs_db.mybucket_logs
WHERE aclrequired = 'Yes' 
AND parse_datetime(requestdatetime,'dd/MMM/yyyy:HH:mm:ss Z')
BETWEEN parse_datetime('2022-05-10:00:00:00','yyyy-MM-dd:HH:mm:ss')
AND parse_datetime('2022-08-10:00:00:00','yyyy-MM-dd:HH:mm:ss')
```

**Note**  
You can modify the date range as needed to suit your needs.
These query examples might also be useful for security monitoring. You can review the results for `PutObject` or `GetObject` calls from unexpected or unauthorized IP addresses or requesters and for identifying any anonymous requests to your buckets.
This query only retrieves information from the time at which logging was enabled. 
If you are using AWS CloudTrail logs, see [Identifying access to S3 objects by using CloudTrail](cloudtrail-request-identification.md#cloudtrail-identification-object-access). 

# Troubleshoot server access logging
<a name="troubleshooting-server-access-logging"></a>

The following topics can help you troubleshoot issues that you might encounter when setting up logging with Amazon S3.

**Topics**
+ [Common error messages when setting up logging](#common-errors)
+ [Troubleshooting delivery failures](#delivery-failures)

## Common error messages when setting up logging
<a name="common-errors"></a>

The following common error messages can appear when you're enabling logging through the AWS Command Line Interface (AWS CLI) and AWS SDKs: 

Error: Cross S3 location logging not allowed

If the destination bucket (also known as a *target bucket*) is in a different Region than the source bucket, a Cross S3 location logging not allowed error occurs. To resolve this error, make sure that the destination bucket configured to receive the access logs is in the same AWS Region and AWS account as the source bucket.

Error: The owner for the bucket to be logged and the target bucket must be the same

When you're enabling server access logging, this error occurs if the specified destination bucket belongs to a different account. To resolve this error, make sure that the destination bucket is in the same AWS account as the source bucket.

**Note**  
We recommend that you choose a destination bucket that's different from the source bucket. When the source bucket and destination bucket are the same, additional logs are created for the logs that are written to the bucket, which can increase your storage bill. These extra logs about logs can also make it difficult to find the particular logs that you're looking for. For simpler log management, we recommend saving access logs in a different bucket. For more information, see [How do I enable log delivery?](ServerLogs.md#server-access-logging-overview).

Error: The target bucket for logging does not exist

The destination bucket must exist prior to setting the configuration. This error indicates that the destination bucket doesn't exist or can't be found. Make sure that the bucket name is spelled correctly, and then try again.

Error: Target grants not allowed for bucket owner enforced buckets

This error indicates that the destination bucket uses the Bucket owner enforced setting for S3 Object Ownership. The Bucket owner enforced setting doesn't support destination (target) grants. For more information, see [Permissions for log delivery](enable-server-access-logging.md#grant-log-delivery-permissions-general).

## Troubleshooting delivery failures
<a name="delivery-failures"></a>

To avoid server access logging issues, make sure that you're following these best practices:
+ **The S3 log delivery group has write access to the destination bucket** – The S3 log delivery group delivers server access logs to the destination bucket. A bucket policy or bucket access control list (ACL) can be used to grant write access to the destination bucket. However, we recommend that you use a bucket policy instead of an ACL. For more information about how to grant write access to your destination bucket, see [Permissions for log delivery](enable-server-access-logging.md#grant-log-delivery-permissions-general).
**Note**  
If the destination bucket uses the Bucket owner enforced setting for Object Ownership, be aware of the following:   
ACLs are disabled and no longer affect permissions. This means that you can't update your bucket ACL to grant access to the S3 log delivery group. Instead, to grant access to the logging service principal, you must update the bucket policy for the destination bucket. 
You can't include destination grants in your `PutBucketLogging` configuration. 
+ **The bucket policy for the destination bucket allows access to the logs** – Check the bucket policy of the destination bucket. Search the bucket policy for any statements that contain `"Effect": "Deny"`. Then, verify that the `Deny` statement isn't preventing access logs from being written to the bucket.
+ **S3 Object Lock isn't enabled on the destination bucket** – Check if the destination bucket has Object Lock enabled. Object Lock blocks server access log delivery. You must choose a destination bucket that doesn't have Object Lock enabled.
+ **Amazon S3 managed keys (SSE-S3) is selected if default encryption is enabled on the destination bucket** – You can use default bucket encryption on the destination bucket only if you use server-side encryption with Amazon S3 managed keys (SSE-S3). Default server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) is not supported for server access logging destination buckets. For more information about how to enable default encryption, see [Configuring default encryption](default-bucket-encryption.md).
+ **The destination bucket does not have Requester Pays enabled** – Using a Requester Pays bucket as the destination bucket for server access logging is not supported. To allow delivery of server access logs, disable the Requester Pays option on the destination bucket.
+ **Review your AWS Organizations service control policies (SCPs) and resource control policies (RCPs)** – When you're using AWS Organizations, check the service control policies and resource control policies to make sure that Amazon S3 access is allowed. These policies specify the maximum permissions for principals and resources in the affected accounts. Search the policies for any statements that contain `"Effect": "Deny"` and verify that `Deny` statements aren't preventing any access logs from being written to the bucket. For more information, see [Authorization policies in AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_authorization_policies.html) in the *AWS Organizations User Guide*.
+ **Allow some time for recent logging configuration changes to take effect** – Enabling server access logging for the first time, or changing the destination bucket for logs, requires time to fully take effect. It might take longer than an hour for all requests to be properly logged and delivered. 

  To check for log delivery failures, enable request metrics in Amazon CloudWatch. If the logs are not delivered within a few hours, look for the `4xxErrors` metric, which can indicate log delivery failures. For more information about enabling request metrics, see [Creating a CloudWatch metrics configuration for all the objects in your bucket](configure-request-metrics-bucket.md).

# Monitoring metrics with Amazon CloudWatch
<a name="cloudwatch-monitoring"></a>

Amazon CloudWatch metrics for Amazon S3 can help you understand and improve the performance of applications that use Amazon S3. There are several ways that you can use CloudWatch with Amazon S3.

**Daily storage metrics for buckets**  
Monitor bucket storage using CloudWatch, which collects and processes storage data from Amazon S3 into readable, daily metrics. These storage metrics for Amazon S3 are reported once per day and are provided to all customers at no additional cost.

**Request metrics**   
Monitor Amazon S3 requests to quickly identify and act on operational issues. The metrics are available at 1-minute intervals after some latency for processing. These CloudWatch metrics are billed at the same rate as the Amazon CloudWatch custom metrics. For information about CloudWatch pricing, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/). To learn how to opt in to getting these metrics, see [CloudWatch metrics configurations](metrics-configurations.md).  
When enabled, request metrics are reported for all object operations. By default, these 1-minute metrics are available at the Amazon S3 bucket level. You can also define a filter for the metrics using a shared prefix, object tag, or access point:  
+ **Access point** – Access points are named network endpoints that are attached to buckets and simplify managing data access at scale for shared datasets in S3. With the access point filter, you can gain insights into your access point usage. For more information about access points, see [Monitoring and logging access points](access-points-monitoring-logging.md).
+ **Prefix** – Although the Amazon S3 data model is a flat structure, you can use prefixes to infer a hierarchy. A prefix is similar to a directory name that enables you to group similar objects together in a bucket. The S3 console supports prefixes with the concept of folders. If you filter by prefix, objects that have the same prefix are included in the metrics configuration. For more information about prefixes, see [Organizing objects using prefixes](using-prefixes.md). 
+ **Tags** – Tags are key-value name pairs that you can add to objects. Tags help you find and organize objects easily. You can also use tags as a filter for metrics configurations so that only objects with those tags are included in the metrics configuration. For more information about object tags, see [Categorizing your objects using tags](object-tagging.md). 
To align these metrics to specific business applications, workflows, or internal organizations, you can filter on a shared prefix, object tag, or access point. 

**Replication metrics**  
Monitor the total number of S3 API operations that are pending replication, the total size of objects pending replication, the maximum replication time to the destination AWS Region, and the total number of operations that failed replication. Replication rules that have S3 Replication Time Control (S3 RTC) or S3 Replication metrics enabled will publish replication metrics.   
For more information, see [Monitoring replication with metrics, event notifications, and statuses](replication-metrics.md) or [Meeting compliance requirements with S3 Replication Time Control](replication-time-control.md).

**Amazon S3 Storage Lens metrics**  
You can publish S3 Storage Lens usage and activity metrics to Amazon CloudWatch to create a unified view of your operational health in CloudWatch [dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html). S3 Storage Lens metrics are available in the `AWS/S3/Storage-Lens` namespace. The CloudWatch publishing option is available for S3 Storage Lens dashboards upgraded to *advanced metrics and recommendations*. You can enable the CloudWatch publishing option for a new or existing dashboard configuration in S3 Storage Lens.  
For more information, see [Monitor S3 Storage Lens metrics in CloudWatch](storage_lens_view_metrics_cloudwatch.md).

All CloudWatch statistics are retained for a period of 15 months so that you can access historical information and gain a better perspective on how your web application or service is performing. For more information about CloudWatch, see [What is Amazon CloudWatch?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) in the *Amazon CloudWatch User Guide*. You may need some additional configurations to your CloudWatch alarms, depending on your use cases. For example, you can use metric math expression to create an alarm. For more information, see [Use CloudWatch metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html), [Use metric math](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-metric-math.html), [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html), and [Create a CloudWatch alarm based on a metric math expression](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html) in the *Amazon CloudWatch User Guide*.

**Best-effort CloudWatch metrics delivery**  
 CloudWatch metrics are delivered on a best-effort basis. Most requests for an Amazon S3 object that have request metrics result in a data point being sent to CloudWatch.

The completeness and timeliness of metrics are not guaranteed. The data point for a particular request might be returned with a timestamp that is later than when the request was actually processed. The data point for a minute might be delayed before being available through CloudWatch, or it might not be delivered at all. CloudWatch request metrics give you an idea of the nature of traffic against your bucket in near-real time. It is not meant to be a complete accounting of all requests.

It follows from the best-effort nature of this feature that the reports available at the [Billing & Cost Management Dashboard](https://console.aws.amazon.com/billing/home?#/) might include one or more access requests that do not appear in the bucket metrics.

For more information, see the following topics.

**Topics**
+ [Metrics and dimensions](metrics-dimensions.md)
+ [Accessing CloudWatch metrics](cloudwatch-monitoring-accessing.md)
+ [CloudWatch metrics configurations](metrics-configurations.md)

# Metrics and dimensions
<a name="metrics-dimensions"></a>

The storage metrics and dimensions that Amazon S3 sends to Amazon CloudWatch are listed in the following tables.

**Best-effort CloudWatch metrics delivery**  
 CloudWatch metrics are delivered on a best-effort basis. Most requests for an Amazon S3 object that have request metrics result in a data point being sent to CloudWatch.

The completeness and timeliness of metrics are not guaranteed. The data point for a particular request might be returned with a timestamp that is later than when the request was actually processed. The data point for a minute might be delayed before being available through CloudWatch, or it might not be delivered at all. CloudWatch request metrics give you an idea of the nature of traffic against your bucket in near-real time. It is not meant to be a complete accounting of all requests.

It follows from the best-effort nature of this feature that the reports available at the [Billing & Cost Management Dashboard](https://console.aws.amazon.com/billing/home?#/) might include one or more access requests that do not appear in the bucket metrics.

**Topics**
+ [Amazon S3 daily storage metrics for buckets in CloudWatch](#s3-cloudwatch-metrics)
+ [Amazon S3 request metrics in CloudWatch](#s3-request-cloudwatch-metrics)
+ [S3 Replication metrics in CloudWatch](#s3-cloudwatch-replication-metrics)
+ [S3 Storage Lens metrics in CloudWatch](#storage-lens-metrics-cloudwatch-publish)
+ [S3 Object Lambda request metrics in CloudWatch](#olap-cloudwatch-metrics)
+ [Amazon S3 dimensions in CloudWatch](#s3-cloudwatch-dimensions)
+ [S3 Replication dimensions in CloudWatch](#s3-replication-dimensions)
+ [S3 Storage Lens dimensions in CloudWatch](#storage-lens-dimensions)
+ [S3 Object Lambda request dimensions in CloudWatch](#olap-dimensions)
+ [Amazon S3 usage metrics](#s3-service-quota-metrics)

## Amazon S3 daily storage metrics for buckets in CloudWatch
<a name="s3-cloudwatch-metrics"></a>

The `AWS/S3` namespace includes the following daily storage metrics for buckets.


| Metric | Description | 
| --- | --- | 
| BucketSizeBytes |  The amount of data in bytes that is stored in a bucket in the following storage classes:  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/metrics-dimensions.html) This value is calculated by summing the size of all objects and metadata (such as bucket names) in the bucket (both current and noncurrent objects), including the size of all parts for all incomplete multipart uploads to the bucket.   The S3 Express One Zone storage class is available only for directory buckets.  Valid storage-type filters (see the `StorageType` dimension):  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/metrics-dimensions.html) Units: Bytes Valid statistics: Average For more information about the `StorageType` dimensions, see [Amazon S3 dimensions in CloudWatch](#s3-cloudwatch-dimensions).  | 
| NumberOfObjects |  The total number of objects stored in a general purpose bucket for all storage classes. This value is calculated by counting all objects in the bucket, which includes current and noncurrent objects, delete markers, and the total number of parts for all incomplete multipart uploads to the bucket. For directory buckets with objects in the S3 Express One Zone storage class, this value is calculated by counting all objects in the bucket, but it doesn't include incomplete multipart uploads to the bucket. Valid storage type filters: `AllStorageTypes` (see the `StorageType` dimension) Units: Count Valid statistics: Average  | 

## Amazon S3 request metrics in CloudWatch
<a name="s3-request-cloudwatch-metrics"></a>

The `AWS/S3` namespace includes the following request metrics. These metrics include non-billable requests (in the case of `GET` requests from `CopyObject` and Replication).


| Metric | Description | 
| --- | --- | 
| AllRequests |  The total number of HTTP requests made to an Amazon S3 bucket, regardless of type. If you're using a metrics configuration with a filter, then this metric returns only the HTTP requests that meet the filter's requirements. Units: Count Valid statistics: Sum  | 
| GetRequests |  The number of HTTP `GET` requests made for objects in an Amazon S3 bucket. This doesn't include list operations. This metric is incremented for the source of each `CopyObject` request. Units: Count Valid statistics: Sum  Paginated list-oriented requests, such as [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html), [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html](https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html), [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETVersion.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETVersion.html), and others, are not included in this metric.   | 
| PutRequests |  The number of HTTP `PUT` requests made for objects in an Amazon S3 bucket. This metric is incremented for the destination of each `CopyObject` request. Units: Count Valid statistics: Sum  | 
| DeleteRequests |  The number of HTTP `DELETE` requests made for objects in an Amazon S3 bucket. This metric also includes [https://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html](https://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html) requests. This metric shows the number of requests made, not the number of objects deleted. Units: Count Valid statistics: Sum  | 
| HeadRequests |  The number of HTTP `HEAD` requests made to an Amazon S3 bucket. Units: Count Valid statistics: Sum  | 
| PostRequests |  The number of HTTP `POST` requests made to an Amazon S3 bucket. Units: Count Valid statistics: Sum  [https://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html](https://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html) and [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectSELECTContent.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectSELECTContent.html) requests are not included in this metric.    | 
| SelectRequests |  The number of Amazon S3 [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectSELECTContent.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectSELECTContent.html) requests made for objects in an Amazon S3 bucket.  Units: Count Valid statistics: Sum  | 
| SelectBytesScanned |  The number of bytes of data scanned with Amazon S3 [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectSELECTContent.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectSELECTContent.html) requests in an Amazon S3 bucket.   Units: Bytes  Valid statistics: Average (bytes per request), Sum (bytes per period), Sample Count, Min, Max (same as p100), any percentile between p0.0 and p99.9  | 
| SelectBytesReturned |  The number of bytes of data returned with Amazon S3 [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectSELECTContent.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectSELECTContent.html) requests in an Amazon S3 bucket.   Units: Bytes  Valid statistics: Average (bytes per request), Sum (bytes per period), Sample Count, Min, Max (same as p100), any percentile between p0.0 and p99.9  | 
| ListRequests |  The number of HTTP requests that list the contents of a bucket. Units: Count Valid statistics: Sum  | 
| BytesDownloaded |  The number of bytes downloaded for requests made to an Amazon S3 bucket, where the response includes a body. Units: Bytes Valid statistics: Average (bytes per request), Sum (bytes per period), Sample Count, Min, Max (same as p100), any percentile between p0.0 and p99.9  | 
| BytesUploaded |  The number of bytes uploaded for requests made to an Amazon S3 bucket, where the request includes a body. Units: Bytes Valid statistics: Average (bytes per request), Sum (bytes per period), Sample Count, Min, Max (same as p100), any percentile between p0.0 and p99.9  | 
| 4xxErrors |  The number of HTTP 4*xx* client error status code requests made to an Amazon S3 bucket with a value of either 0 or 1. The Average statistic shows the error rate, and the Sum statistic shows the count of that type of error, during each period. Units: Count Valid statistics: Average (reports per request), Sum (reports per period), Min, Max, Sample Count  | 
| 5xxErrors |  The number of HTTP 5*xx* server error status code requests made to an Amazon S3 bucket with a value of either 0 or 1. The Average statistic shows the error rate, and the Sum statistic shows the count of that type of error, during each period. Units: Count Valid statistics: Average (reports per request), Sum (reports per period), Min, Max, Sample Count  | 
| FirstByteLatency |  The per-request time from the complete request being received by an Amazon S3 bucket to when the response starts to be returned. Units: Milliseconds Valid statistics: Average, Sum, Min, Max (same as p100), Sample Count, any percentile between p0.0 and p100  | 
| TotalRequestLatency |  The elapsed per-request time from the first byte received to the last byte sent to an Amazon S3 bucket. This metric includes the time taken to receive the request body and send the response body, which is not included in `FirstByteLatency`. Units: Milliseconds Valid statistics: Average, Sum, Min, Max (same as p100), Sample Count, any percentile between p0.0 and p100  | 

## S3 Replication metrics in CloudWatch
<a name="s3-cloudwatch-replication-metrics"></a>

You can monitor the progress of replication with S3 Replication metrics by tracking bytes pending, operations pending, and replication latency. For more information, see [Monitoring progress with replication metrics](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-metrics.html).

**Note**  
You can enable alarms for your replication metrics in Amazon CloudWatch. When you set up alarms for your replication metrics, set the **Missing data treatment** field to **Treat missing data as ignore (maintain the alarm state)**.


| Metric | Description | 
| --- | --- | 
| ReplicationLatency |  The maximum number of seconds by which the replication destination AWS Region is behind the source AWS Region for a given replication rule.  Units: Seconds Valid statistics: Max  | 
| BytesPendingReplication |  The total number of bytes of objects pending replication for a given replication rule. Units: Bytes Valid statistics: Max  | 
| OperationsPendingReplication |  The number of operations pending replication for a given replication rule. Units: Count Valid statistics: Max  | 
| OperationsFailedReplication |  The number of operations that failed to replicate for a given replication rule. Units: Count  Valid statistics: Sum (total number of failed operations), Average (failure rate), Sample Count (total number of replication operations)  | 

## S3 Storage Lens metrics in CloudWatch
<a name="storage-lens-metrics-cloudwatch-publish"></a>

You can publish S3 Storage Lens usage and activity metrics to Amazon CloudWatch to create a unified view of your operational health in [CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html). S3 Storage Lens metrics are published to the `AWS/S3/Storage-Lens` namespace in CloudWatch. The CloudWatch publishing option is available for S3 Storage Lens dashboards that have been upgraded to advanced metrics and recommendations.

For a list of S3 Storage Lens metrics that are published to CloudWatch, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md). For a complete list of dimensions, see [Dimensions](storage-lens-cloudwatch-metrics-dimensions.md#storage-lens-cloudwatch-dimensions).

## S3 Object Lambda request metrics in CloudWatch
<a name="olap-cloudwatch-metrics"></a>

S3 Object Lambda includes the following request metrics.


| Metric | Description | 
| --- | --- | 
| AllRequests |  The total number of HTTP requests made to an Amazon S3 bucket by using an Object Lambda Access Point. Units: Count Valid statistics: Sum  | 
| GetRequests |  The number of HTTP `GET` requests made for objects by using an Object Lambda Access Point. This metric does not include list operations. Units: Count Valid statistics: Sum  | 
| BytesUploaded |  The number of bytes uploaded to an Amazon S3 bucket by using an Object Lambda Access Point, where the request includes a body. Units: Bytes Valid statistics: Average (bytes per request), Sum (bytes per period), Sample Count, Min, Max (same as p100), any percentile between p0.0 and p99.9  | 
| PostRequests |  The number of HTTP `POST` requests made to an Amazon S3 bucket by using an Object Lambda Access Point. Units: Count Valid statistics: Sum  | 
| PutRequests |  The number of HTTP `PUT` requests made for objects in an Amazon S3 bucket by using an Object Lambda Access Point.  Units: Count  Valid statistics: Sum  | 
| DeleteRequests |  The number of HTTP `DELETE` requests made for objects in an Amazon S3 bucket by using an Object Lambda Access Point. This metric includes [https://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html](https://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html) requests. This metric shows the number of requests made, not the number of objects deleted. Units: Count Valid statistics: Sum  | 
| BytesDownloaded |  The number of bytes downloaded for requests made to an Amazon S3 bucket by using an Object Lambda Access Point, where the response includes a body. Units: Bytes  Valid statistics: Average (bytes per request), Sum (bytes per period), Sample Count, Min, Max (same as p100), any percentile between p0.0 and p99.9  | 
| FirstByteLatency |  The per-request time from the complete request being received by an Amazon S3 bucket through an Object Lambda Access Point to when the response starts to be returned. This metric is dependent on the AWS Lambda function's running time to transform the object before the function returns the bytes to the Object Lambda Access Point. Units: Milliseconds  Valid statistics: Average, Sum, Min, Max (same as p100), Sample Count, any percentile between p0.0 and p100  | 
| TotalRequestLatency |  The elapsed per-request time from the first byte received to the last byte sent to an Object Lambda Access Point. This metric includes the time taken to receive the request body and send the response body, which is not included in `FirstByteLatency`. Units: Milliseconds Valid statistics: Average, Sum, Min, Max (same as p100), Sample Count, any percentile between p0.0 and p100  | 
| HeadRequests |  The number of HTTP `HEAD` requests made to an Amazon S3 bucket by using an Object Lambda Access Point. Units: Count Valid statistics: Sum  | 
| ListRequests |  The number of HTTP `GET` requests that list the contents of an Amazon S3 bucket. This metric includes both `ListObjects` and `ListObjectsV2` operations. Units: Count Valid statistics: Sum  | 
| 4xxErrors |  The number of HTTP 4*xx* client error status code requests made to an Amazon S3 bucket by using an Object Lambda Access Point with a value of either 0 or 1. The Average statistic shows the error rate, and the Sum statistic shows the count of that type of error, during each period. Units: Count  Valid statistics: Average (reports per request), Sum (reports per period), Min, Max, Sample Count  | 
| 5xxErrors |  The number of HTTP 5*xx* server error status code requests made to an Amazon S3 bucket by using an Object Lambda Access Point with a value of either 0 or 1. The Average statistic shows the error rate, and the Sum statistic shows the count of that type of error, during each period. Units: Count  Valid statistics: Average (reports per request), Sum (reports per period), Min, Max, Sample Count  | 
| ProxiedRequests |  The number of HTTP requests to an Object Lambda Access Point that return the standard Amazon S3 API response. (Such requests do not have a Lambda function configured.) Units: Count Valid statistics: Sum  | 
| InvokedLambda |  The number of HTTP requests to an S3 object where a Lambda function was invoked. Units: Count Valid statistics: Sum  | 
| LambdaResponseRequests |  The number of `WriteGetObjectResponse` requests made by the Lambda function. This metric applies only to `GetObject` requests.  | 
| LambdaResponse4xx |  The number of HTTP 4*xx* client errors that occur when calling `WriteGetObjectResponse` from a Lambda function. This metric provides the same information as `4xxErrors`, but only for `WriteGetObjectResponse` calls.  | 
| LambdaResponse5xx |  The number of HTTP 5*xx* server errors that occur when calling `WriteGetObjectResponse` from a Lambda function. This metric provides the same information as `5xxErrors`, but only for `WriteGetObjectResponse` calls.  | 

## Amazon S3 dimensions in CloudWatch
<a name="s3-cloudwatch-dimensions"></a>

The following dimensions are used to filter Amazon S3 metrics.


|  Dimension  |  Description  | 
| --- | --- | 
|  BucketName  |  This dimension filters the data that you request for the identified bucket only.  | 
|  StorageType  |  This dimension filters the data that you have stored in a bucket by the following types of storage:  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/metrics-dimensions.html)  | 
| FilterId | This dimension filters metrics configurations that you specify for the request metrics on a bucket. When you create a metrics configuration, you specify a filter ID (for example, a prefix, a tag, or an access point). For more information, see [Creating a metrics configuration](https://docs.aws.amazon.com/AmazonS3/latest/userguide/metrics-configurations.html). | 

## S3 Replication dimensions in CloudWatch
<a name="s3-replication-dimensions"></a>

The following dimensions are used to filter S3 Replication metrics.


|  Dimension  |  Description  | 
| --- | --- | 
|  SourceBucket  |  The name of the bucket objects are replicated from.  | 
|  DestinationBucket  |  The name of the bucket objects are replicated to.  | 
|  RuleId  |  A unique identifier for the rule that triggered this replication metric to update.  | 

## S3 Storage Lens dimensions in CloudWatch
<a name="storage-lens-dimensions"></a>

For a list of dimensions that are used to filter S3 Storage Lens metrics in CloudWatch, see [Dimensions](storage-lens-cloudwatch-metrics-dimensions.md#storage-lens-cloudwatch-dimensions).

## S3 Object Lambda request dimensions in CloudWatch
<a name="olap-dimensions"></a>

The following dimensions are used to filter data from an Object Lambda Access Point.


| Dimension | Description | 
| --- | --- | 
| AccessPointName |  The name of the access point of which requests are being made.  | 
| DataSourceARN |  The source the Object Lambda Access Point is retrieving the data from. If the request invokes a Lambda function this refers to the Lambda Amazon Resource Name (ARN). Otherwise this refers to the access point ARN.  | 

## Amazon S3 usage metrics
<a name="s3-service-quota-metrics"></a>

You can use CloudWatch usage metrics to provide visibility into your account's usage of resources. Use these metrics to visualize your current service usage on CloudWatch graphs and dashboards.

Amazon S3 usage metrics correspond to AWS service quotas. You can configure alarms that alert you when your usage approaches a service quota. For more information about CloudWatch integration with service quotas, see [AWS usage metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Service-Quota-Integration.html) in the *Amazon CloudWatch User Guide*.

Amazon S3 publishes the following metrics in the `AWS/Usage` namespace.


| Metric | Description | 
| --- | --- | 
| `ResourceCount` |  The number of the specified resources running in your account. The resources are defined by the dimensions associated with the metric.  | 

The following dimensions are used to refine the usage metrics that are published by Amazon S3.


| Dimension | Description | 
| --- | --- | 
|  Service  |  The name of the AWS service containing the resource. For Amazon S3 usage metrics, the value for this dimension is `S3`.  | 
|  Type  |  The type of entity that is being reported. Currently, the only valid value for Amazon S3 usage metrics is `Resource`.  | 
|  Resource  |  The type of resource that is running. Currently, the only valid value for Amazon S3 usage metrics is `GeneralPurposeBuckets`, which returns the number of general purpose buckets in an AWS account. General purpose buckets allow objects that are stored across all storage classes, except S3 Express One Zone.  | 

# Accessing CloudWatch metrics
<a name="cloudwatch-monitoring-accessing"></a>

You can use the following procedures to view the storage metrics for Amazon S3. To get the Amazon S3 metrics involved, you must set a start and end timestamp. For metrics for any given 24-hour period, set the time period to 86400 seconds, the number of seconds in a day. Also, remember to set the `BucketName` and `StorageType` dimensions.

## Using the AWS CLI
<a name="accessing-cw-metrics-cli"></a>

For example, if you want to use the AWS CLI to get the average of a specific bucket's size in bytes, you could use the following command:

```
aws cloudwatch get-metric-statistics --metric-name BucketSizeBytes --namespace AWS/S3 --start-time 2016-10-19T00:00:00Z --end-time 2016-10-20T00:00:00Z --statistics Average --unit Bytes --region us-west-2 --dimensions Name=BucketName,Value=amzn-s3-demo-bucket Name=StorageType,Value=StandardStorage --period 86400 --output json
```

This example produces the following output.

```
{
    "Datapoints": [
        {
            "Timestamp": "2016-10-19T00:00:00Z", 
            "Average": 1025328.0, 
            "Unit": "Bytes"
        }
    ], 
    "Label": "BucketSizeBytes"
}
```

## Using the S3 console
<a name="accessing-cw-metrics-console"></a>

**To view metrics by using the Amazon CloudWatch console**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the left navigation pane, choose **Metrics**. 

1. Choose the **S3** namespace.

1. (Optional) To view a metric, enter the metric name in the search box.

1. (Optional) To filter by the **StorageType** dimension, enter the name of the storage class in the search box.

**To view a list of valid metrics stored for your AWS account by using the AWS CLI**
+ At a command prompt, use the following command.

  ```
  1. aws cloudwatch list-metrics --namespace "AWS/S3"
  ```

For more information about the permissions required to access CloudWatch dashboards, see [Amazon CloudWatch dashboard permissions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/dashboard-permissions-update.html) in the *Amazon CloudWatch User Guide*.

# CloudWatch metrics configurations
<a name="metrics-configurations"></a>

With Amazon CloudWatch request metrics for Amazon S3, you can receive 1-minute CloudWatch metrics, set CloudWatch alarms, and access CloudWatch dashboards to view near-real-time operations and performance of your Amazon S3 storage. For applications that depend on cloud storage, these metrics let you quickly identify and act on operational issues. When enabled, these 1-minute metrics are available at the Amazon S3 bucket-level, by default.

If you want to get the CloudWatch request metrics for the objects in a bucket, you must create a metrics configuration for the bucket. For more information, see [Creating a CloudWatch metrics configuration for all the objects in your bucket](configure-request-metrics-bucket.md). 

You can also use a shared prefix, object tags, or an access point to define a filter for the metrics collected. This method of defining a filter allows you to align metrics filters to specific business applications, workflows, or internal organizations. For more information, see [Creating a metrics configuration that filters by prefix, object tag, or access point](metrics-configurations-filter.md). For more information about the CloudWatch metrics that are available and the differences between storage and request metrics, see [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md).

Keep the following in mind when using metrics configurations:
+ You can have a maximum of 1,000 metrics configurations per bucket.
+ You can choose which objects in a bucket to include in metrics configurations by using filters. You can filter on a shared prefix, object tag, or access point to align metrics filters to specific business applications, workflows, or internal organizations. To request metrics for the entire bucket, create a metrics configuration without a filter.
+ Metrics configurations are necessary only to enable request metrics. Bucket-level daily storage metrics are always turned on, and are provided at no additional cost. Currently, it's not possible to get daily storage metrics for a filtered subset of objects.
+ Each metrics configuration enables the full set of [available request metrics](metrics-dimensions.md#s3-request-cloudwatch-metrics). Operation-specific metrics (such as `PostRequests`) are reported only if there are requests of that type for your bucket or your filter.
+ Request metrics are reported for object-level operations. They are also reported for operations that list bucket contents, like [GET Bucket (List Objects)](https://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html), [GET Bucket Object Versions](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETVersion.html), and [List Multipart Uploads](https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html), but they are not reported for other operations on buckets.
+ Request metrics support filtering by prefix, object tags, or access point, but storage metrics do not.

**Best-effort CloudWatch metrics delivery**  
 CloudWatch metrics are delivered on a best-effort basis. Most requests for an Amazon S3 object that have request metrics result in a data point being sent to CloudWatch.

The completeness and timeliness of metrics are not guaranteed. The data point for a particular request might be returned with a timestamp that is later than when the request was actually processed. The data point for a minute might be delayed before being available through CloudWatch, or it might not be delivered at all. CloudWatch request metrics give you an idea of the nature of traffic against your bucket in near-real time. It is not meant to be a complete accounting of all requests.

It follows from the best-effort nature of this feature that the reports available at the [Billing & Cost Management Dashboard](https://console.aws.amazon.com/billing/home?#/) might include one or more access requests that do not appear in the bucket metrics.

For more information about working with CloudWatch metrics in Amazon S3, see the following topics.

**Topics**
+ [Creating a CloudWatch metrics configuration for all the objects in your bucket](configure-request-metrics-bucket.md)
+ [Creating a metrics configuration that filters by prefix, object tag, or access point](metrics-configurations-filter.md)
+ [Deleting a metrics filter](delete-request-metrics-filter.md)

# Creating a CloudWatch metrics configuration for all the objects in your bucket
<a name="configure-request-metrics-bucket"></a>

When you configure request metrics, you can create a CloudWatch metrics configuration for all the objects in your bucket, or you can filter by prefix, object tag, or access point. The procedures in this topic show you how to create a configuration for all the objects in your bucket. To create a configuration that filters by object tag, prefix, or access point, see [Creating a metrics configuration that filters by prefix, object tag, or access point](metrics-configurations-filter.md).

There are three types of Amazon CloudWatch metrics for Amazon S3: storage metrics, request metrics, and replication metrics. Storage metrics are reported once per day and are provided to all customers at no additional cost. Request metrics are available at one-minute intervals after some latency for processing. Request metrics are billed at the standard CloudWatch rate. You must opt in to request metrics by configuring them in the console or using the Amazon S3 API. [S3 Replication metrics](https://docs.aws.amazon.com/AmazonS3/latest/userguide/viewing-replication-metrics.html) provide detailed metrics for the replication rules in your replication configuration. With replication metrics, you can monitor minute-by-minute progress by tracking bytes pending, operations pending, operations that failed replication, and replication latency.

For more information about CloudWatch metrics for Amazon S3, see [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md). 

You can add metrics configurations to a bucket using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), or the Amazon S3 REST API.

## Using the S3 console
<a name="configure-metrics"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that contains the objects you want request metrics for.

1. Choose the **Metrics** tab.

1. Under **Bucket metrics**, choose **View additional charts**.

1. Choose the **Request metrics** tab.

1. Choose **Create filter**.

1. In the **Filter name** box, enter your filter name. 

   Names can only contain letters, numbers, periods, dashes, and underscores. We recommend using the name `EntireBucket` for a filter that applies to all objects.

1. Under **Filter scope**, choose **This filter applies to all objects in the bucket**.

   You can also define a filter so that the metrics are only collected and reported on a subset of objects in the bucket. For more information, see [Creating a metrics configuration that filters by prefix, object tag, or access point](metrics-configurations-filter.md).

1. Choose **Save changes**.

1. On the **Request metrics** tab, under **Filters**, choose the filter that you just created.

   After about 15 minutes, CloudWatch begins tracking these request metrics. You can see them on the **Request metrics** tab. You can see graphs for the metrics on the Amazon S3 or CloudWatch console. Request metrics are billed at the standard CloudWatch rate. For more information, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/). 

## Using the REST API
<a name="metrics-configuration-api"></a>

You can also add metrics configurations programmatically with the Amazon S3 REST API. For more information about adding and working with metrics configurations, see the following topics in the *Amazon Simple Storage Service API Reference*:
+ [PUT Bucket Metric Configuration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTMetricConfiguration.html)
+ [GET Bucket Metric Configuration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETMetricConfiguration.html)
+ [List Bucket Metric Configuration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTListBucketMetricsConfiguration.html)
+ [DELETE Bucket Metric Configuration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTDeleteBucketMetricsConfiguration.html)

## Using the AWS CLI
<a name="add-metrics-configurations"></a>

1. Install and set up the AWS CLI. For instructions, see [Installing, updating, and uninstalling the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html) in the *AWS Command Line Interface User Guide*.

1. Open a terminal.

1. Run the following command to add a metrics configuration.

   ```
   aws s3api put-bucket-metrics-configuration --endpoint https://s3.us-west-2.amazonaws.com --bucket bucket-name --id metrics-config-id --metrics-configuration '{"Id":"metrics-config-id"}'
   ```

# Creating a metrics configuration that filters by prefix, object tag, or access point
<a name="metrics-configurations-filter"></a>

There are three types of Amazon CloudWatch metrics for Amazon S3: storage metrics, request metrics, and replication metrics. Storage metrics are reported once per day and are provided to all customers at no additional cost. Request metrics are available at one-minute intervals after some latency for processing. Request metrics are billed at the standard CloudWatch rate. You must opt in to request metrics by configuring them in the console or using the Amazon S3 API. [S3 Replication metrics](https://docs.aws.amazon.com/AmazonS3/latest/userguide/viewing-replication-metrics.html) provide detailed metrics for the replication rules in your replication configuration. With replication metrics, you can monitor minute-by-minute progress by tracking bytes pending, operations pending, operations that failed replication, and replication latency.

For more information about CloudWatch metrics for Amazon S3, see [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md). 

When you configure CloudWatch metrics, you can create a filter for all the objects in your bucket, or you can filter the configuration into groups of related objects within a single bucket. You can filter objects in a bucket for inclusion in a metrics configuration based on one or more of the following filter types:
+ **Object key name prefix** – Although the Amazon S3 data model is a flat structure, you can infer a hierarchy by using a prefix. The Amazon S3 console supports these prefixes with the concept of folders. If you filter by prefix, objects that have the same prefix are included in the metrics configuration. For more information about prefixes, see [Organizing objects using prefixes](using-prefixes.md). 
+ **Tag** – You can add tags, which are key-value name pairs, to objects. Tags help you find and organize objects easily. You can also use tags as filters for metrics configurations. For more information about object tags, see [Categorizing your objects using tags](object-tagging.md). 
+ **Access point** – S3 Access Points are named network endpoints that are attached to buckets and simplify managing data access at scale for shared datasets in S3. When you create an access point filter, Amazon S3 includes requests to the access point that you specify in the metrics configuration. For more information, see [Monitoring and logging access points](access-points-monitoring-logging.md).
**Note**  
When you create a metrics configuration that filters by access point, you must use the access point Amazon Resource Name (ARN), not the access point alias. Make sure that you use the ARN for the access point itself, not the ARN for a specific object. For more information about access point ARNs, see [Using Amazon S3 access points for general purpose buckets](using-access-points.md).

If you specify a filter, only requests that operate on single objects can match the filter and be included in the reported metrics. Requests like [https://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html](https://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html) and `ListObjects` requests don't return any metrics for configurations with filters.

To request more complex filtering, choose two or more elements. Only objects that have all of those elements are included in the metrics configuration. If you don't set filters, all of the objects in the bucket are included in the metrics configuration.

## Using the S3 console
<a name="configure-metrics-filter"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**

1. In the buckets list, choose the name of the bucket that contains the objects that you want request metrics for.

1. Choose the **Metrics** tab.

1. Under **Bucket metrics**, choose **View additional charts**.

1. Choose the **Request metrics** tab.

1. Choose **Create filter**.

1. In the **Filter name** box, enter your filter name. 

   Names can contain only letters, numbers, periods, dashes, and underscores.

1. Under **Filter scope**, choose **Limit the scope of this filter using a prefix, object tags, and an S3 Access Point, or a combination of all three**.

1. Under **Filter type**, choose at least one filter type: **Prefix**, **Object tags**, or **Access Point**.

1. To define a prefix filter and limit the scope of the filter to a single path, in the **Prefix** box, enter a prefix.

1. To define an object tags filter, under **Object tags**, choose **Add tag**, and then enter a tag **Key** and **Value**.

1. To define an access point filter, in the **S3 Access Point** field, enter the access point ARN, or choose **Browse S3** to navigate to the access point.
**Important**  
You cannot enter an access point alias. You must enter the ARN for the access point itself, not the ARN for a specific object.

1. Choose **Save changes**.

   Amazon S3 creates a filter that uses the prefix, tags, or access point that you specified.

1. On the **Request metrics** tab, under **Filters**, choose the filter that you just created.

   You have now created a filter that limits the request metrics scope by prefix, object tags, or access point. About 15 minutes after CloudWatch begins tracking these request metrics, you can see charts for the metrics on both the Amazon S3 and CloudWatch consoles. Request metrics are billed at the standard CloudWatch rate. For more information, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/). 

   You can also configure request metrics at the bucket level. For information, see [Creating a CloudWatch metrics configuration for all the objects in your bucket](configure-request-metrics-bucket.md).

## Using the AWS CLI
<a name="add-metrics-configurations"></a>

1. Install and set up the AWS CLI. For instructions, see [Installing, updating, and uninstalling the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the *AWS Command Line Interface User Guide*.

1. Open a terminal.

1. To add a metrics configuration, run one of the following commands:  
**Example : To filter by prefix**  

   ```
   aws s3api put-bucket-metrics-configuration --bucket amzn-s3-demo-bucket --id metrics-config-id --metrics-configuration '{"Id":"metrics-config-id", "Filter":{"Prefix":"prefix1"}} '
   ```  
**Example : To filter by tags**  

   ```
   aws s3api put-bucket-metrics-configuration --bucket amzn-s3-demo-bucket --id metrics-config-id --metrics-configuration '{"Id":"metrics-config-id", "Filter":{"Tag": {"Key": "string", "Value": "string"}} '
   ```  
**Example : To filter by access point**  

   ```
   aws s3api put-bucket-metrics-configuration --bucket amzn-s3-demo-bucket --id metrics-config-id --metrics-configuration '{"Id":"metrics-config-id", "Filter":{"AccessPointArn":"arn:aws:s3:Region:account-id:accesspoint/access-point-name"}} '
   ```  
**Example : To filter by prefix, tags, and access point**  

   ```
   aws s3api put-bucket-metrics-configuration --endpoint https://s3.Region.amazonaws.com --bucket amzn-s3-demo-bucket --id metrics-config-id --metrics-configuration '
   {
       "Id": "metrics-config-id",
       "Filter": {
           "And": {
               "Prefix": "string",
               "Tags": [
                   {
                       "Key": "string",
                       "Value": "string"
                   }
               ],
               "AccessPointArn": "arn:aws:s3:Region:account-id:accesspoint/access-point-name"
           }
       }
   }'
   ```

## Using the REST API
<a name="configure-cw-filter-rest"></a>

You can also add metrics configurations programmatically with the Amazon S3 REST API. For more information about adding and working with metrics configurations, see the following topics in the *Amazon Simple Storage Service API Reference*:
+ [PUT Bucket Metric Configuration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTMetricConfiguration.html)
+ [GET Bucket Metric Configuration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETMetricConfiguration.html)
+ [List Bucket Metric Configuration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTListBucketMetricsConfiguration.html)
+ [DELETE Bucket Metric Configuration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTDeleteBucketMetricsConfiguration.html)

# Deleting a metrics filter
<a name="delete-request-metrics-filter"></a>

You can delete an Amazon CloudWatch request metrics filter if you no longer need it. When you delete a filter, you are no longer charged for request metrics that use that *specific filter*. However, you will continue to be charged for any other filter configurations that exist. 

When you delete a filter, you can no longer use the filter for request metrics. Deleting a filter cannot be undone. 

For information about creating a request metrics filter, see the following topics:
+ [Creating a CloudWatch metrics configuration for all the objects in your bucket](configure-request-metrics-bucket.md)
+ [Creating a metrics configuration that filters by prefix, object tag, or access point](metrics-configurations-filter.md)

## Using the S3 console
<a name="delete-request-metrics-filter-console"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket you want to delete a request metrics filter for.

1. Choose the **Metrics** tab.

1. Under **Bucket metrics**, choose **View additional charts**.

1. Choose the **Request metrics** tab.

1. Choose **Manage filters**.

1. Choose your filter.
**Important**  
Deleting a filter cannot be undone.

1. Choose **Delete**.

   Amazon S3 deletes your filter.

## Using the REST API
<a name="delete-request-metrics-filter-rest"></a>

You can also add metrics configurations programmatically with the Amazon S3 REST API. For more information about adding and working with metrics configurations, see the following topics in the *Amazon Simple Storage Service API Reference*:
+ [PUT Bucket Metric Configuration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTMetricConfiguration.html)
+ [GET Bucket Metric Configuration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETMetricConfiguration.html)
+ [List Bucket Metric Configuration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTListBucketMetricsConfiguration.html)
+ [DELETE Bucket Metric Configuration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTDeleteBucketMetricsConfiguration.html)

# Amazon S3 Event Notifications
<a name="EventNotifications"></a>

You can use the Amazon S3 Event Notifications feature to receive notifications when certain events happen in your S3 bucket. To enable notifications, add a notification configuration that identifies the events that you want Amazon S3 to publish. Make sure that it also identifies the destinations where you want Amazon S3 to send the notifications. You store this configuration in the *notification* subresource that's associated with a bucket. For more information, see [General purpose buckets configuration options](UsingBucket.md#bucket-config-options-intro). Amazon S3 provides an API for you to manage this subresource. 

**Important**  
Amazon S3 event notifications are designed to be delivered at least once. Typically, event notifications are delivered in seconds but can sometimes take a minute or longer. 

## Overview of Amazon S3 Event Notifications
<a name="notification-how-to-overview"></a>

Currently, Amazon S3 can publish notifications for the following events:
+ New object created events
+ Object removal events
+ Restore object events
+ Reduced Redundancy Storage (RRS) object lost events
+ Replication events
+ S3 Lifecycle expiration events
+ S3 Lifecycle transition events
+ S3 Intelligent-Tiering automatic archival events
+ Object tagging events
+ Object ACL PUT events

For full descriptions of all the supported event types, see [Supported event types for SQS, SNS, and Lambda](notification-how-to-event-types-and-destinations.md#supported-notification-event-types). 

Amazon S3 can send event notification messages to the following destinations. You specify the Amazon Resource Name (ARN) value of these destinations in the notification configuration.
+ Amazon Simple Notification Service (Amazon SNS) topics
+ Amazon Simple Queue Service (Amazon SQS) queues
+ AWS Lambda function
+ Amazon EventBridge

For more information, see [Supported event destinations](notification-how-to-event-types-and-destinations.md#supported-notification-destinations).

**Note**  
Amazon Simple Queue Service FIFO (First-In-First-Out) queues aren't supported as an Amazon S3 event notification destination. To send a notification for an Amazon S3 event to an Amazon SQS FIFO queue, you can use Amazon EventBridge. For more information, see [Enabling Amazon EventBridge](enable-event-notifications-eventbridge.md).

**Warning**  
If your notification writes to the same bucket that triggers the notification, it could cause an execution loop. For example, if the bucket triggers a Lambda function each time an object is uploaded, and the function uploads an object to the bucket, then the function indirectly triggers itself. To avoid this, use two buckets, or configure the trigger to only apply to a prefix used for incoming objects.  
For more information and an example of using Amazon S3 notifications with AWS Lambda, see [Using AWS Lambda with Amazon S3](https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html) in the *AWS Lambda Developer Guide*. 

For more information about the number of event notification configurations that you can create per bucket, see [Amazon S3 service quotas](https://docs.aws.amazon.com/general/latest/gr/s3.html#limits_s3) in *AWS General Reference*.

For more information about event notifications, see the following sections.

**Topics**
+ [Overview of Amazon S3 Event Notifications](#notification-how-to-overview)
+ [Event notification types and destinations](notification-how-to-event-types-and-destinations.md)
+ [Using Amazon SQS, Amazon SNS, and Lambda](how-to-enable-disable-notification-intro.md)
+ [Using EventBridge](EventBridge.md)

# Event notification types and destinations
<a name="notification-how-to-event-types-and-destinations"></a>

Amazon S3 supports several event notification types and destinations where the notifications can be published. You can specify the event type and destination when configuring your event notifications. Only one destination can be specified for each event notification. Amazon S3 event notifications send one event entry for each notification message.

**Topics**
+ [Supported event destinations](#supported-notification-destinations)
+ [Supported event types for SQS, SNS, and Lambda](#supported-notification-event-types)
+ [Supported event types for Amazon EventBridge](#supported-notification-event-types-eventbridge)
+ [Event ordering and duplicate events](#event-ordering-and-duplicate-events)

## Supported event destinations
<a name="supported-notification-destinations"></a>

Amazon S3 can send event notification messages to the following destinations.
+ Amazon Simple Notification Service (Amazon SNS) topics
+ Amazon Simple Queue Service (Amazon SQS) queues
+ AWS Lambda
+ Amazon EventBridge

However, only one destination type can be specified for each event notification.

**Note**  
You must grant Amazon S3 permissions to post messages to an Amazon SNS topic or an Amazon SQS queue. You must also grant Amazon S3 permission to invoke an AWS Lambda function on your behalf. For instructions on how to grant these permissions, see [Granting permissions to publish event notification messages to a destination](grant-destinations-permissions-to-s3.md). 

### Amazon SNS topic
<a name="amazon-sns-topic"></a>

Amazon SNS is a flexible, fully managed push messaging service. You can use this service to push messages to mobile devices or distributed services. With SNS, you can publish a message once, and deliver it one or more times. Currently, Standard SNS is only allowed as an S3 event notification destination, whereas SNS FIFO is not allowed.

Amazon SNS both coordinates and manages sending and delivering messages to subscribing endpoints or clients. You can use the Amazon SNS console to create an Amazon SNS topic that your notifications can be sent to. 

The topic must be in the same AWS Region as your Amazon S3 bucket. For instructions on how to create an Amazon SNS topic, see [Getting started with Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/sns-getting-started.html) in the *Amazon Simple Notification Service Developer Guide* and the [Amazon SNS FAQ](https://aws.amazon.com/sns/faqs/).

Before you can use the Amazon SNS topic that you created as an event notification destination, you need the following:
+ The Amazon Resource Name (ARN) for the Amazon SNS topic
+ A valid Amazon SNS topic subscription. With it, topic subscribers are notified when a message is published to your Amazon SNS topic.

### Amazon SQS queue
<a name="amazon-sqs-queue"></a>

Amazon SQS offers reliable and scalable hosted queues for storing messages as they travel between computers. You can use Amazon SQS to transmit any volume of data without requiring other services to be always available. You can use the Amazon SQS console to create an Amazon SQS queue that your notifications can be sent to. 

The Amazon SQS queue must be in the same AWS Region as your Amazon S3 bucket. For instructions on how to create an Amazon SQS queue, see [What is Amazon Simple Queue Service](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html) and [Getting started with Amazon SQS](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-getting-started.html) in the *Amazon Simple Queue Service Developer Guide*. 

Before you can use the Amazon SQS queue as an event notification destination, you need the following:
+ The Amazon Resource Name (ARN) for the Amazon SQS queue

**Note**  
Amazon Simple Queue Service FIFO (First-In-First-Out) queues aren't supported as an Amazon S3 event notification destination. To send a notification for an Amazon S3 event to an Amazon SQS FIFO queue, you can use Amazon EventBridge. For more information, see [Enabling Amazon EventBridge](enable-event-notifications-eventbridge.md).

### Lambda function
<a name="lambda-function"></a>

You can use AWS Lambda to extend other AWS services with custom logic, or create your own backend that operates at AWS scale, performance, and security. With Lambda, you can create discrete, event-driven applications that run only when needed. You can also use it to scale these applications automatically from a few requests a day to thousands a second. 

Lambda can run custom code in response to Amazon S3 bucket events. You upload your custom code to Lambda and create what's called a Lambda function. When Amazon S3 detects an event of a specific type, it can publish the event to AWS Lambda and invoke your function in Lambda. In response, Lambda runs your function. One event type it might detect, for example, is an object created event.

You can use the AWS Lambda console to create a Lambda function that uses the AWS infrastructure to run the code on your behalf. The Lambda function must be in the same Region as your S3 bucket. You must also have the name or the ARN of a Lambda function to set up the Lambda function as an event notification destination.

**Warning**  
If your notification writes to the same bucket that triggers the notification, it could cause an execution loop. For example, if the bucket triggers a Lambda function each time an object is uploaded, and the function uploads an object to the bucket, then the function indirectly triggers itself. To avoid this, use two buckets, or configure the trigger to only apply to a prefix used for incoming objects.  
For more information and an example of using Amazon S3 notifications with AWS Lambda, see [Using AWS Lambda with Amazon S3](https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html) in the *AWS Lambda Developer Guide*. 

### Amazon EventBridge
<a name="eventbridge-dest"></a>

Amazon EventBridge is a serverless event bus, which receives events from AWS services. You can set up rules to match events and deliver them to targets, such as an AWS service or an HTTP endpoint. For more information, see [What is EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) in the *Amazon EventBridge User Guide*.

Unlike other destinations, you can either enable or disable events to be delivered to EventBridge for a bucket. If you enable delivery, all events are sent to EventBridge. Moreover, you can use EventBridge rules to route events to additional targets.

## Supported event types for SQS, SNS, and Lambda
<a name="supported-notification-event-types"></a>

Amazon S3 can publish events of the following types. You specify these event types in the notification configuration.


|  Event types |  Description  | 
| --- | --- | 
|  `s3:TestEvent`  |  When a notification is enabled, Amazon S3 publishes a test notification. This is to ensure that the topic exists and that the bucket owner has permission to publish the specified topic. If enabling the notification fails, you don't receive a test notification.  | 
|  `s3:ObjectCreated:*` `s3:ObjectCreated:Put` `s3:ObjectCreated:Post` `s3:ObjectCreated:Copy` `s3:ObjectCreated:CompleteMultipartUpload`  |  Amazon S3 API operations such as `PUT`, `POST`, and `COPY` can create an object. With these event types, you can enable notifications when an object is created using a specific API operation. Alternatively, you can use the `s3:ObjectCreated:*` event type to request notification regardless of the API that was used to create an object.  `s3:ObjectCreated:CompleteMultipartUpload` includes objects that are created using [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html) for Copy operations.  | 
|  `s3:ObjectRemoved:*` `s3:ObjectRemoved:Delete` `s3:ObjectRemoved:DeleteMarkerCreated`  |  By using the `ObjectRemoved` event types, you can enable notification when an object or a batch of objects is removed from a bucket. You can request notification when an object is deleted or a versioned object is permanently deleted by using the `s3:ObjectRemoved:Delete` event type. Alternatively, you can request notification when a delete marker is created for a versioned object using `s3:ObjectRemoved:DeleteMarkerCreated`. For instructions on how to delete versioned objects, see [Deleting object versions from a versioning-enabled bucket](DeletingObjectVersions.md). You can also use a wildcard `s3:ObjectRemoved:*` to request notification anytime an object is deleted.  These event notifications don't alert you for automatic deletes from lifecycle configurations or from failed operations.  | 
|  `s3:ObjectRestore:*` `s3:ObjectRestore:Post` `s3:ObjectRestore:Completed` `s3:ObjectRestore:Delete`  |  By using the `ObjectRestore` event types, you can receive notifications for event initiation and completion when restoring objects from the S3 Glacier Flexible Retrieval storage class, S3 Glacier Deep Archive storage class, S3 Intelligent-Tiering Archive Access tier, and S3 Intelligent-Tiering Deep Archive Access tier. You can also receive notifications for when the restored copy of an object expires. The `s3:ObjectRestore:Post` event type notifies you of object restoration initiation. The `s3:ObjectRestore:Completed` event type notifies you of restoration completion. The `s3:ObjectRestore:Delete` event type notifies you when the temporary copy of a restored object expires.  | 
| s3:ReducedRedundancyLostObject | You receive this notification event when Amazon S3 detects that an object of the RRS storage class is lost. | 
|  `s3:Replication:*` `s3:Replication:OperationFailedReplication` `s3:Replication:OperationMissedThreshold` `s3:Replication:OperationReplicatedAfterThreshold` `s3:Replication:OperationNotTracked`  |  By using the `Replication` event types, you can receive notifications for replication configurations that have S3 Replication metrics or S3 Replication Time Control (S3 RTC) enabled. You can monitor the minute-by-minute progress of replication events by tracking bytes pending, operations pending, and replication latency. For information about replication metrics, see [Monitoring replication with metrics, event notifications, and statuses](replication-metrics.md). [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/notification-how-to-event-types-and-destinations.html)  | 
|  `s3:LifecycleExpiration:*` `s3:LifecycleExpiration:Delete` `s3:LifecycleExpiration:DeleteMarkerCreated`  |  By using the `LifecycleExpiration` event types, you can receive a notification when Amazon S3 deletes an object based on your S3 Lifecycle configuration. The `s3:LifecycleExpiration:Delete` event type notifies you when an object in an unversioned bucket is deleted. It also notifies you when an object version is permanently deleted by an S3 Lifecycle configuration. The `s3:LifecycleExpiration:DeleteMarkerCreated` event type notifies you when S3 Lifecycle creates a delete marker when a current version of an object in versioned bucket is deleted.   | 
| s3:LifecycleTransition | You receive this notification event when an object is transitioned to another Amazon S3 storage class by an S3 Lifecycle configuration. | 
| s3:IntelligentTiering | You receive this notification event when an object within the S3 Intelligent-Tiering storage class moved to the Archive Access tier or Deep Archive Access tier.  | 
|  `s3:ObjectTagging:*` `s3:ObjectTagging:Put` `s3:ObjectTagging:Delete`  |  By using the `ObjectTagging` event types, you can enable notification when an object tag is added or deleted from an object. The `s3:ObjectTagging:Put` event type notifies you when a tag is PUT on an object or an existing tag is updated. The `s3:ObjectTagging:Delete` event type notifies you when a tag is removed from an object. | 
| s3:ObjectAcl:Put | You receive this notification event when an ACL is PUT on an object or when an existing ACL is changed. An event is not generated when a request results in no change to an object’s ACL. | 

## Supported event types for Amazon EventBridge
<a name="supported-notification-event-types-eventbridge"></a>

For a list of event types Amazon S3 will send to Amazon EventBridge, see [Using EventBridge](EventBridge.md).

## Event ordering and duplicate events
<a name="event-ordering-and-duplicate-events"></a>

Amazon S3 Event Notifications is designed to deliver notifications at least once, but they aren’t guaranteed to arrive in the same order that the events occurred. On rare occasions, Amazon S3’s retry mechanism might cause duplicate S3 Event Notifications for the same object event. For more about handling duplicate or out of order events, see [Manage event ordering and duplicate events with Amazon S3 Event Notifications](https://aws.amazon.com/blogs/storage/manage-event-ordering-and-duplicate-events-with-amazon-s3-event-notifications/) on the *AWS Storage Blog*.

# Using Amazon SQS, Amazon SNS, and Lambda
<a name="how-to-enable-disable-notification-intro"></a>

Enabling notifications is a bucket-level operation. You store notification configuration information in the *notification* subresource that's associated with a bucket. After you create or change the bucket notification configuration, it usually takes about five minutes for the changes to take effect. When the notification is first enabled, an `s3:TestEvent` occurs. You can use any of the following methods to manage notification configuration:
+ **Using the Amazon S3 console** — You can use the console UI to set a notification configuration on a bucket without having to write any code. For more information, see [Enabling and configuring event notifications using the Amazon S3 console](enable-event-notifications.md).
+ **Programmatically using the AWS SDKs** — Internally, both the console and the SDKs call the Amazon S3 REST API to manage *notification* subresources that are associated with the bucket. For examples of notification configurations that use AWS SDK, see [Walkthrough: Configuring a bucket for notifications (SNS topic or SQS queue)](ways-to-add-notification-config-to-bucket.md).
**Note**  
You can also make the Amazon S3 REST API calls directly from your code. However, this can be cumbersome because to do so you must write code to authenticate your requests. 

Regardless of the method that you use, Amazon S3 stores the notification configuration as XML in the *notification* subresource that's associated with a bucket. For information about bucket subresources, see [General purpose buckets configuration options](UsingBucket.md#bucket-config-options-intro).

**Note**  
If you have multiple failed event notifications due to deleted destinations you may receive the **Unable to validate the following destination configurations** when trying to delete them. You can resolve this in the S3 console by deleting all the failed notifications at the same time.

**Topics**
+ [Granting permissions to publish event notification messages to a destination](grant-destinations-permissions-to-s3.md)
+ [Enabling and configuring event notifications using the Amazon S3 console](enable-event-notifications.md)
+ [Configuring event notifications programmatically](#event-notification-configuration)
+ [Walkthrough: Configuring a bucket for notifications (SNS topic or SQS queue)](ways-to-add-notification-config-to-bucket.md)
+ [Configuring event notifications using object key name filtering](notification-how-to-filtering.md)
+ [Event message structure](notification-content-structure.md)

# Granting permissions to publish event notification messages to a destination
<a name="grant-destinations-permissions-to-s3"></a>

You must grant the Amazon S3 principal the necessary permissions to call the relevant API to publish messages to an SNS topic, an SQS queue, or a Lambda function. This is so that Amazon S3 can publish event notification messages to a destination.

To troubleshoot publishing event notification messages to a destination, see [ Troubleshoot to publish Amazon S3 event notifications to an Amazon Simple Notification Service topic ](https://repost.aws/knowledge-center/sns-not-receiving-s3-event-notifications).

**Topics**
+ [Granting permissions to invoke an AWS Lambda function](#grant-lambda-invoke-permission-to-s3)
+ [Granting permissions to publish messages to an SNS topic or an SQS queue](#grant-sns-sqs-permission-for-s3)

## Granting permissions to invoke an AWS Lambda function
<a name="grant-lambda-invoke-permission-to-s3"></a>

Amazon S3 publishes event messages to AWS Lambda by invoking a Lambda function and providing the event message as an argument.

When you use the Amazon S3 console to configure event notifications on an Amazon S3 bucket for a Lambda function, the console sets up the necessary permissions on the Lambda function. This is so that Amazon S3 has permissions to invoke the function from the bucket. For more information, see [Enabling and configuring event notifications using the Amazon S3 console](enable-event-notifications.md). 

You can also grant Amazon S3 permissions from AWS Lambda to invoke your Lambda function. For more information, see [Tutorial: Using AWS Lambda with Amazon S3](https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html) in the *AWS Lambda Developer Guide*.

## Granting permissions to publish messages to an SNS topic or an SQS queue
<a name="grant-sns-sqs-permission-for-s3"></a>

To grant Amazon S3 permissions to publish messages to the SNS topic or SQS queue, attach an AWS Identity and Access Management (IAM) policy to the destination SNS topic or SQS queue. 

For an example of how to attach a policy to an SNS topic or an SQS queue, see [Walkthrough: Configuring a bucket for notifications (SNS topic or SQS queue)](ways-to-add-notification-config-to-bucket.md). For more information about permissions, see the following topics:
+ [Example cases for Amazon SNS access control](https://docs.aws.amazon.com/sns/latest/dg/AccessPolicyLanguage_UseCases_Sns.html) in the *Amazon Simple Notification Service Developer Guide*
+ [Identity and access management in Amazon SQS](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/UsingIAM.html) in the *Amazon Simple Queue Service Developer Guide*

### IAM policy for a destination SNS topic
<a name="sns-topic-policy"></a>

The following is an example of an AWS Identity and Access Management (IAM) policy that you attach to the destination SNS topic. For instructions on how to use this policy to set up a destination Amazon SNS topic for event notifications, see [Walkthrough: Configuring a bucket for notifications (SNS topic or SQS queue)](ways-to-add-notification-config-to-bucket.md).

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Id": "example-ID",
    "Statement": [
        {
            "Sid": "Example SNS topic policy",
            "Effect": "Allow",
            "Principal": {
                "Service": "s3.amazonaws.com"
            },
            "Action": [
                "SNS:Publish"
            ],
            "Resource": "arn:aws:sns:us-east-1:111122223333:example-sns-topic",
            "Condition": {
                "ArnEquals": {
                    "aws:SourceArn": "arn:aws:s3:::amzn-s3-demo-bucket"
                },
                "StringEquals": {
                    "aws:SourceAccount": "bucket-owner-123456789012"
                }
            }
        }
    ]
}
```

------

### IAM policy for a destination SQS queue
<a name="sqs-queue-policy"></a>

The following is an example of an IAM policy that you attach to the destination SQS queue. For instructions on how to use this policy to set up a destination Amazon SQS queue for event notifications, see [Walkthrough: Configuring a bucket for notifications (SNS topic or SQS queue)](ways-to-add-notification-config-to-bucket.md).

To use this policy, you must update the Amazon SQS queue ARN, bucket name, and bucket owner's AWS account ID.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Id": "example-ID",
    "Statement": [
        {
            "Sid": "example-statement-ID",
            "Effect": "Allow",
            "Principal": {
                "Service": "s3.amazonaws.com"
            },
            "Action": [
                "SQS:SendMessage"
            ],
            "Resource": "arn:aws:sqs:us-east-1:111122223333:queue-name",
            "Condition": {
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:s3:*:*:amzn-s3-demo-bucket"
                },
                "StringEquals": {
                    "aws:SourceAccount": "bucket-owner-123456789012"
                }
            }
        }
    ]
}
```

------

For both the Amazon SNS and Amazon SQS IAM policies, you can specify the `StringLike` condition in the policy instead of the `ArnLike` condition.

When `ArnLike` is used, the partition, service, account ID, resource type, and partial resource ID portions of the ARN must exactly match to the ARN in the request context. Only the Region and resource path allow partial matching.

 When `StringLike` is used instead of `ArnLike`, matching ignores the ARN structure and allows partial matching, regardless of the portion that's replaced by the wildcard character. For more information, see [IAM JSON policy elements](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) in the *IAM User Guide*.

```
"Condition": {         
  "StringLike": { "aws:SourceArn": "arn:aws:s3:*:*:amzn-s3-demo-bucket" }
  }
```

### AWS KMS key policy
<a name="key-policy-sns-sqs"></a>

If the SQS queue or SNS topics are encrypted with an AWS Key Management Service (AWS KMS) customer managed key, you must grant the Amazon S3 service principal permission to work with the encrypted topics or queue. To grant the Amazon S3 service principal permission, add the following statement to the key policy for the customer managed key.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Id": "example-ID",
    "Statement": [
        {
            "Sid": "example-statement-ID",
            "Effect": "Allow",
            "Principal": {
                "Service": "s3.amazonaws.com"
            },
            "Action": [
                "kms:GenerateDataKey",
                "kms:Decrypt"
            ],
            "Resource": "*"
        }
    ]
}
```

------

For more information about AWS KMS key policies, see [Using key policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) in the *AWS Key Management Service Developer Guide*. 

For more information about using server-side encryption with AWS KMS for Amazon SQS and Amazon SNS, see the following:
+ [Key management](https://docs.aws.amazon.com/sns/latest/dg/sns-key-management.html) in the *Amazon Simple Notification Service Developer Guide*.
+ [Key management](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-key-management.html) in the *Amazon Simple Queue Service Developer Guide*.
+ [Encrypting messages published to Amazon SNS with AWS KMS](https://aws.amazon.com/blogs/compute/encrypting-messages-published-to-amazon-sns-with-aws-kms/) in the *AWS Compute Blog*.

# Enabling and configuring event notifications using the Amazon S3 console
<a name="enable-event-notifications"></a>

You can enable certain Amazon S3 general purpose bucket events to send a notification message to a destination whenever those events occur. This section explains how to use the Amazon S3 console to enable event notifications. For information about how to use event notifications with the AWS SDKs and the Amazon S3 REST APIs, see [Configuring event notifications programmatically](how-to-enable-disable-notification-intro.md#event-notification-configuration). 

**Prerequisites**: Before you can enable event notifications for your bucket, you must set up one of the destination types and then configure permissions. For more information, see [Supported event destinations](notification-how-to-event-types-and-destinations.md#supported-notification-destinations) and [Granting permissions to publish event notification messages to a destination](grant-destinations-permissions-to-s3.md).

**Note**  
Amazon Simple Queue Service FIFO (First-In-First-Out) queues aren't supported as an Amazon S3 event notification destination. To send a notification for an Amazon S3 event to an Amazon SQS FIFO queue, you can use Amazon EventBridge. For more information, see [Enabling Amazon EventBridge](enable-event-notifications-eventbridge.md).

**Topics**
+ [Enabling Amazon SNS, Amazon SQS, or Lambda notifications using the Amazon S3 console](#enable-event-notifications-sns-sqs-lam)

## Enabling Amazon SNS, Amazon SQS, or Lambda notifications using the Amazon S3 console
<a name="enable-event-notifications-sns-sqs-lam"></a>

**To enable and configure event notifications for an S3 bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to enable events for.

1. Choose **Properties**.

1. Navigate to the **Event Notifications** section and choose **Create event notification**.

1. In the **General configuration** section, specify descriptive event name for your event notification. Optionally, you can also specify a prefix and a suffix to limit the notifications to objects with keys ending in the specified characters.

   1. Enter a description for the **Event name**.

      If you don't enter a name, a globally unique identifier (GUID) is generated and used for the name. 

   1. (Optional) To filter event notifications by prefix, enter a **Prefix**. 

      For example, you can set up a prefix filter so that you receive notifications only when files are added to a specific folder (for example, `images/`). 

   1. (Optional) To filter event notifications by suffix, enter a **Suffix**. 

      For more information, see [Configuring event notifications using object key name filtering](notification-how-to-filtering.md). 

1. In the **Event types** section, select one or more event types that you want to receive notifications for. 

   For a list of the different event types, see [Supported event types for SQS, SNS, and Lambda](notification-how-to-event-types-and-destinations.md#supported-notification-event-types).

1. In the **Destination** section, choose the event notification destination. 
**Note**  
Before you can publish event notifications, you must grant the Amazon S3 principal the necessary permissions to call the relevant API. This is so that it can publish notifications to a Lambda function, SNS topic, or SQS queue.

   1. Select the destination type: **Lambda Function**, **SNS Topic**, or **SQS Queue**.

   1. After you choose your destination type, choose a function, topic, or queue from the list.

   1. Or, if you prefer to specify an Amazon Resource Name (ARN), select **Enter ARN** and enter the ARN.

   For more information, see [Supported event destinations](notification-how-to-event-types-and-destinations.md#supported-notification-destinations).

1. Choose **Save changes**, and Amazon S3 sends a test message to the event notification destination.

## Configuring event notifications programmatically
<a name="event-notification-configuration"></a>

By default, notifications aren't enabled for any type of event. Therefore, the *notification* subresource initially stores an empty configuration.

```
<NotificationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> 
</NotificationConfiguration>
```

To enable notifications for events of specific types, you replace the XML with the appropriate configuration that identifies the event types you want Amazon S3 to publish and the destination where you want the events published. For each destination, you add a corresponding XML configuration. 

**To publish event messages to an SQS queue**  
To set an SQS queue as the notification destination for one or more event types, add the `QueueConfiguration`.

```
<NotificationConfiguration>
  <QueueConfiguration>
    <Id>optional-id-string</Id>
    <Queue>sqs-queue-arn</Queue>
    <Event>event-type</Event>
    <Event>event-type</Event>
     ...
  </QueueConfiguration>
   ...
</NotificationConfiguration>
```

**To publish event messages to an SNS topic**  
To set an SNS topic as the notification destination for specific event types, add the `TopicConfiguration`.

```
<NotificationConfiguration>
  <TopicConfiguration>
     <Id>optional-id-string</Id>
     <Topic>sns-topic-arn</Topic>
     <Event>event-type</Event>
     <Event>event-type</Event>
      ...
  </TopicConfiguration>
   ...
</NotificationConfiguration>
```

**To invoke the AWS Lambda function and provide an event message as an argument**  
To set a Lambda function as the notification destination for specific event types, add the `CloudFunctionConfiguration`.

```
<NotificationConfiguration>
  <CloudFunctionConfiguration>   
     <Id>optional-id-string</Id>   
     <CloudFunction>cloud-function-arn</CloudFunction>        
     <Event>event-type</Event>      
     <Event>event-type</Event>      
      ...  
  </CloudFunctionConfiguration>
   ...
</NotificationConfiguration>
```

**To remove all notifications configured on a bucket**  
To remove all notifications configured on a bucket, save an empty `<NotificationConfiguration/>` element in the *notification* subresource. 

When Amazon S3 detects an event of the specific type, it publishes a message with the event information. For more information, see [Event message structure](notification-content-structure.md). 

For more information about configuring event notifications, see the following topics: 
+ [Walkthrough: Configuring a bucket for notifications (SNS topic or SQS queue)](ways-to-add-notification-config-to-bucket.md).
+ [Configuring event notifications using object key name filtering](notification-how-to-filtering.md)

# Walkthrough: Configuring a bucket for notifications (SNS topic or SQS queue)
<a name="ways-to-add-notification-config-to-bucket"></a>

You can receive Amazon S3 notifications using Amazon Simple Notification Service (Amazon SNS) or Amazon Simple Queue Service (Amazon SQS). In this walkthrough, you add a notification configuration to your bucket using an Amazon SNS topic and an Amazon SQS queue.

**Note**  
Amazon Simple Queue Service FIFO (First-In-First-Out) queues aren't supported as an Amazon S3 event notification destination. To send a notification for an Amazon S3 event to an Amazon SQS FIFO queue, you can use Amazon EventBridge. For more information, see [Enabling Amazon EventBridge](enable-event-notifications-eventbridge.md).

**Topics**
+ [Walkthrough summary](#notification-walkthrough-summary)
+ [Step 1: Create an Amazon SQS queue](#step1-create-sqs-queue-for-notification)
+ [Step 2: Create an Amazon SNS topic](#step1-create-sns-topic-for-notification)
+ [Step 3: Add a notification configuration to your bucket](#step2-enable-notification)
+ [Step 4: Test the setup](#notification-walkthrough-1-test)

## Walkthrough summary
<a name="notification-walkthrough-summary"></a>

This walkthrough helps you do the following:
+ Publish events of the `s3:ObjectCreated:*` type to an Amazon SQS queue.
+ Publish events of the `s3:ReducedRedundancyLostObject` type to an Amazon SNS topic.

For information about notification configuration, see [Using Amazon SQS, Amazon SNS, and Lambda](how-to-enable-disable-notification-intro.md).

You can do all these steps using the console, without writing any code. In addition, code examples using AWS SDKs for Java and .NET are also provided to help you add notification configurations programmatically.

The procedure includes the following steps:

1. Create an Amazon SQS queue.

   Using the Amazon SQS console, create an SQS queue. You can access any messages Amazon S3 sends to the queue programmatically. But, for this walkthrough, you verify notification messages in the console. 

   You attach an access policy to the queue to grant Amazon S3 permission to post messages.

1. Create an Amazon SNS topic.

   Using the Amazon SNS console, create an SNS topic and subscribe to the topic. That way, any events posted to it are delivered to you. You specify email as the communications protocol. After you create a topic, Amazon SNS sends an email. You use the link in the email to confirm the topic subscription. 

   You attach an access policy to the topic to grant Amazon S3 permission to post messages. 

1. Add notification configuration to a bucket. 

## Step 1: Create an Amazon SQS queue
<a name="step1-create-sqs-queue-for-notification"></a>

Follow the steps to create and subscribe to an Amazon Simple Queue Service (Amazon SQS) queue.

1. Using the Amazon SQS console, create a queue. For instructions, see [Getting Started with Amazon SQS](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-getting-started.html) in the *Amazon Simple Queue Service Developer Guide*. 

1. Replace the access policy that's attached to the queue with the following policy.

   1. In the Amazon SQS console, in the **Queues** list, choose the queue name.

   1. On the **Access policy** tab, choose **Edit**.

   1. Replace the access policy that's attached to the queue. In it, provide your Amazon SQS ARN, source bucket name, and bucket owner account ID.

------
#### [ JSON ]

****  

      ```
      {
          "Version":"2012-10-17",		 	 	 
          "Id": "example-ID",
          "Statement": [
              {
                  "Sid": "example-statement-ID",
                  "Effect": "Allow",
                  "Principal": {
                      "Service": "s3.amazonaws.com"
                  },
                  "Action": [
                      "SQS:SendMessage"
                  ],
                  "Resource": "arn:aws:sqs:us-west-2:111122223333:s3-notification-queue",
                  "Condition": {
                      "ArnLike": {
                          "aws:SourceArn": "arn:aws:s3:*:*:awsexamplebucket1"
                      },
                      "StringEquals": {
                          "aws:SourceAccount": "bucket-owner-123456789012"
                      }
                  }
              }
          ]
      }
      ```

------

   1. Choose **Save**.

1. (Optional) If the Amazon SQS queue or the Amazon SNS topic is server-side encryption enabled with AWS Key Management Service (AWS KMS), add the following policy to the associated symmetric encryption customer managed key. 

   You must add the policy to a customer managed key because you cannot modify the AWS managed key for Amazon SQS or Amazon SNS. 

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Id": "example-ID",
       "Statement": [
           {
               "Sid": "example-statement-ID",
               "Effect": "Allow",
               "Principal": {
                   "Service": "s3.amazonaws.com"
               },
               "Action": [
                   "kms:GenerateDataKey",
                   "kms:Decrypt"
               ],
               "Resource": "*"
           }
       ]
   }
   ```

------

   For more information about using SSE for Amazon SQS and Amazon SNS with AWS KMS, see the following:
   + [Key management](https://docs.aws.amazon.com/sns/latest/dg/sns-key-management.html) in the *Amazon Simple Notification Service Developer Guide*.
   + [Key management](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-key-management.html) in the *Amazon Simple Queue Service Developer Guide*.

1. Note the queue ARN. 

   The SQS queue that you created is another resource in your AWS account. It has a unique Amazon Resource Name (ARN). You need this ARN in the next step. The ARN is of the following format:

   ```
   arn:aws:sqs:aws-region:account-id:queue-name
   ```

## Step 2: Create an Amazon SNS topic
<a name="step1-create-sns-topic-for-notification"></a>

Follow the steps to create and subscribe to an Amazon SNS topic.

1. Using Amazon SNS console, create a topic. For instructions, see [Creating an Amazon SNS topic](https://docs.aws.amazon.com/sns/latest/dg/CreateTopic.html) in the *Amazon Simple Notification Service Developer Guide*. 

1. Subscribe to the topic. For this exercise, use email as the communications protocol. For instructions, see [Subscribing to an Amazon SNS topic](https://docs.aws.amazon.com/sns/latest/dg/sns-create-subscribe-endpoint-to-topic.html) in the *Amazon Simple Notification Service Developer Guide*. 

   You get an email requesting you to confirm your subscription to the topic. Confirm the subscription. 

1. Replace the access policy attached to the topic with the following policy. In it, provide your SNS topic ARN, bucket name, and bucket owner's account ID.

1. Note the topic ARN.

   The SNS topic you created is another resource in your AWS account, and it has a unique ARN. You will need this ARN in the next step. The ARN will be of the following format:

   ```
   arn:aws:sns:aws-region:account-id:topic-name
   ```

## Step 3: Add a notification configuration to your bucket
<a name="step2-enable-notification"></a>

You can enable bucket notifications either by using the Amazon S3 console or programmatically by using AWS SDKs. Choose any one of the options to configure notifications on your bucket. This section provides code examples using the AWS SDKs for Java and .NET.

### Option A: Enable notifications on a bucket using the console
<a name="step2-enable-notification-using-console"></a>

Using the Amazon S3 console, add a notification configuration requesting Amazon S3 to do the following:
+ Publish events of the **All object create events** type to your Amazon SQS queue.
+ Publish events of the **Object in RRS lost** type to your Amazon SNS topic.

After you save the notification configuration, Amazon S3 posts a test message, which you get via email. 

For instructions, see [Enabling and configuring event notifications using the Amazon S3 console](enable-event-notifications.md). 

### Option B: Enable notifications on a bucket using the AWS SDKs
<a name="step2-enable-notification-using-awssdk-dotnet"></a>

------
#### [ .NET ]

The following C\$1 code example provides a complete code listing that adds a notification configuration to a bucket. You must update the code and provide your bucket name and SNS topic ARN. For information about setting up and running the code examples, see [Getting Started with the AWS SDK for .NET](https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/net-dg-setup.html) in the *AWS SDK for .NET Developer Guide*. 

```
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class EnableNotificationsTest
    {
        private const string bucketName = "*** bucket name ***";
        private const string snsTopic = "*** SNS topic ARN ***";
        private const string sqsQueue = "*** SQS topic ARN ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 client;

        public static void Main()
        {
            client = new AmazonS3Client(bucketRegion);
            EnableNotificationAsync().Wait();
        }

        static async Task EnableNotificationAsync()
        {
            try
            {
               PutBucketNotificationRequest request = new PutBucketNotificationRequest
                {
                    BucketName = bucketName
                };

                TopicConfiguration c = new TopicConfiguration
                {
                    Events = new List<EventType> { EventType.ObjectCreatedCopy },
                    Topic = snsTopic
                };
                request.TopicConfigurations = new List<TopicConfiguration>();
                request.TopicConfigurations.Add(c);
                request.QueueConfigurations = new List<QueueConfiguration>();
                request.QueueConfigurations.Add(new QueueConfiguration()
                {
                    Events = new List<EventType> { EventType.ObjectCreatedPut },
                    Queue = sqsQueue
                });
                
                PutBucketNotificationResponse response = await client.PutBucketNotificationAsync(request);
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message:'{0}' ", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown error encountered on server. Message:'{0}' ", e.Message);
            }
        }
    }
}
```

------
#### [ Java ]

For examples of how to configure bucket notifications with the AWS SDK for Java, see [Process S3 event notifications](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_Scenario_ProcessS3EventNotification_section.html) in the *Amazon S3 API Reference*.

------

## Step 4: Test the setup
<a name="notification-walkthrough-1-test"></a>

Now, you can test the setup by uploading an object to your bucket and verifying the event notification in the Amazon SQS console. For instructions, see [Receiving a Message](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-getting-started.htmlReceiveMessage.html) in the *Amazon Simple Queue Service Developer Guide "Getting Started" section*. 

# Configuring event notifications using object key name filtering
<a name="notification-how-to-filtering"></a>

When configuring an Amazon S3 event notification, you must specify which supported Amazon S3 event types cause Amazon S3 to send the notification. If an event type that you didn't specify occurs in your S3 bucket, Amazon S3 doesn't send the notification.

You can configure notifications to be filtered by the prefix and suffix of the key name of objects. For example, you can set up a configuration where you're sent a notification only when image files with a "`.jpg`" file name extension are added to a bucket. Or, you can have a configuration that delivers a notification to an Amazon SNS topic when an object with the prefix "`images/`" is added to the bucket, while having notifications for objects with a "`logs/`" prefix in the same bucket delivered to an AWS Lambda function. 

**Note**  
A wildcard character ("\$1") can't be used in filters as a prefix or suffix. If your prefix or suffix contains a space, you must replace it with the "\$1" character. If you use any other special characters in the value of the prefix or suffix, you must enter them in [URL-encoded (percent-encoded) format](https://en.wikipedia.org/wiki/Percent-encoding). For a complete list of special characters that must be converted to URL-encoded format when used in a prefix or suffix for event notifications, see [Safe characters](object-keys.md#object-key-guidelines-safe-characters).

You can set up notification configurations that use object key name filtering in the Amazon S3 console. You can do so by using Amazon S3 APIs through the AWS SDKs or the REST APIs directly. For information about using the console UI to set a notification configuration on a bucket, see [Enabling and configuring event notifications using the Amazon S3 console](enable-event-notifications.md). 

Amazon S3 stores the notification configuration as XML in the *notification* subresource associated with a bucket as described in [Using Amazon SQS, Amazon SNS, and Lambda](how-to-enable-disable-notification-intro.md). You use the `Filter` XML structure to define the rules for notifications to be filtered by the prefix or suffix of an object key name. For information about the `Filter` XML structure, see [PUT Bucket notification](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTnotification.html) in the *Amazon Simple Storage Service API Reference*. 

Notification configurations that use `Filter` cannot define filtering rules with overlapping prefixes, overlapping suffixes, or prefix and suffix overlapping. The following sections have examples of valid notification configurations with object key name filtering. They also contain examples of notification configurations that are not valid because of prefix and suffix overlapping. 

**Topics**
+ [Examples of valid notification configurations with object key name filtering](#notification-how-to-filtering-example-valid)
+ [Examples of notification configurations with invalid prefix and suffix overlapping](#notification-how-to-filtering-examples-invalid)

## Examples of valid notification configurations with object key name filtering
<a name="notification-how-to-filtering-example-valid"></a>

The following notification configuration contains a queue configuration identifying an Amazon SQS queue for Amazon S3 to publish events to of the `s3:ObjectCreated:Put` type. The events are published whenever an object that has a prefix of `images/` and a `jpg` suffix is PUT to a bucket. 

```
<NotificationConfiguration>
  <QueueConfiguration>
      <Id>1</Id>
      <Filter>
          <S3Key>
              <FilterRule>
                  <Name>prefix</Name>
                  <Value>images/</Value>
              </FilterRule>
              <FilterRule>
                  <Name>suffix</Name>
                  <Value>jpg</Value>
              </FilterRule>
          </S3Key>
     </Filter>
     <Queue>arn:aws:sqs:us-west-2:444455556666:s3notificationqueue</Queue>
     <Event>s3:ObjectCreated:Put</Event>
  </QueueConfiguration>
</NotificationConfiguration>
```

The following notification configuration has multiple non-overlapping prefixes. The configuration defines that notifications for PUT requests in the `images/` folder go to queue-A, while notifications for PUT requests in the `logs/` folder go to queue-B.

```
<NotificationConfiguration>
  <QueueConfiguration>
     <Id>1</Id>
     <Filter>
            <S3Key>
                <FilterRule>
                    <Name>prefix</Name>
                    <Value>images/</Value>
                </FilterRule>
            </S3Key>
     </Filter>
     <Queue>arn:aws:sqs:us-west-2:444455556666:sqs-queue-A</Queue>
     <Event>s3:ObjectCreated:Put</Event>
  </QueueConfiguration>
  <QueueConfiguration>
     <Id>2</Id>
     <Filter>
            <S3Key>
                <FilterRule>
                    <Name>prefix</Name>
                    <Value>logs/</Value>
                </FilterRule>
            </S3Key>
     </Filter>
     <Queue>arn:aws:sqs:us-west-2:444455556666:sqs-queue-B</Queue>
     <Event>s3:ObjectCreated:Put</Event>
  </QueueConfiguration>
</NotificationConfiguration>
```

The following notification configuration has multiple non-overlapping suffixes. The configuration defines that all `.jpg` images newly added to the bucket are processed by Lambda cloud-function-A, and all newly added `.png` images are processed by cloud-function-B. The `.png` and `.jpg` suffixes aren't overlapping even though they have the same last letter. If a given string can end with both suffixes, the two suffixes are considered overlapping. A string can't end with both `.png` and `.jpg`, so the suffixes in the example configuration aren't overlapping suffixes. 

```
<NotificationConfiguration>
  <CloudFunctionConfiguration>
     <Id>1</Id>
     <Filter>
            <S3Key>
                <FilterRule>
                    <Name>suffix</Name>
                    <Value>.jpg</Value>
                </FilterRule>
            </S3Key>
     </Filter>
     <CloudFunction>arn:aws:lambda:us-west-2:444455556666:cloud-function-A</CloudFunction>
     <Event>s3:ObjectCreated:Put</Event>
  </CloudFunctionConfiguration>
  <CloudFunctionConfiguration>
     <Id>2</Id>
     <Filter>
            <S3Key>
                <FilterRule>
                    <Name>suffix</Name>
                    <Value>.png</Value>
                </FilterRule>
            </S3Key>
     </Filter>
     <CloudFunction>arn:aws:lambda:us-west-2:444455556666:cloud-function-B</CloudFunction>
     <Event>s3:ObjectCreated:Put</Event>
  </CloudFunctionConfiguration>
</NotificationConfiguration>
```

Your notification configurations that use `Filter` can't define filtering rules with overlapping prefixes for the same event types. They can only do so, if the overlapping prefixes that are used with suffixes that don't overlap. The following example configuration shows how objects created with a common prefix but non-overlapping suffixes can be delivered to different destinations.

```
<NotificationConfiguration>
  <CloudFunctionConfiguration>
     <Id>1</Id>
     <Filter>
            <S3Key>
                <FilterRule>
                    <Name>prefix</Name>
                    <Value>images</Value>
                </FilterRule>
                <FilterRule>
                    <Name>suffix</Name>
                    <Value>.jpg</Value>
                </FilterRule>
            </S3Key>
     </Filter>
     <CloudFunction>arn:aws:lambda:us-west-2:444455556666:cloud-function-A</CloudFunction>
     <Event>s3:ObjectCreated:Put</Event>
  </CloudFunctionConfiguration>
  <CloudFunctionConfiguration>
     <Id>2</Id>
     <Filter>
            <S3Key>
                <FilterRule>
                    <Name>prefix</Name>
                    <Value>images</Value>
                </FilterRule>
                <FilterRule>
                    <Name>suffix</Name>
                    <Value>.png</Value>
                </FilterRule>
            </S3Key>
     </Filter>
     <CloudFunction>arn:aws:lambda:us-west-2:444455556666:cloud-function-B</CloudFunction>
     <Event>s3:ObjectCreated:Put</Event>
  </CloudFunctionConfiguration>
</NotificationConfiguration>
```

## Examples of notification configurations with invalid prefix and suffix overlapping
<a name="notification-how-to-filtering-examples-invalid"></a>

For the most part, your notification configurations that use `Filter` can't define filtering rules with overlapping prefixes, overlapping suffixes, or overlapping combinations of prefixes and suffixes for the same event types. You can have overlapping prefixes as long as the suffixes don't overlap. For an example, see [Configuring event notifications using object key name filtering](#notification-how-to-filtering).

You can use overlapping object key name filters with different event types. For example, you can create a notification configuration that uses the prefix `image/` for the `ObjectCreated:Put` event type and the prefix `image/` for the `ObjectRemoved:*` event type. 

You get an error if you try to save a notification configuration that has invalid overlapping name filters for the same event types when using the Amazon S3 console or API. This section shows examples of notification configurations that aren't valid because of overlapping name filters. 

Any existing notification configuration rule is assumed to have a default prefix and suffix that match any other prefix and suffix, respectively. The following notification configuration isn't valid because it has overlapping prefixes. Specifically, the root prefix overlaps with any other prefix. The same thing is true if you use a suffix instead of a prefix in this example. The root suffix overlaps with any other suffix.

```
<NotificationConfiguration>
     <TopicConfiguration>
         <Topic>arn:aws:sns:us-west-2:444455556666:sns-notification-one</Topic>
         <Event>s3:ObjectCreated:*</Event>
    </TopicConfiguration>
    <TopicConfiguration>
         <Topic>arn:aws:sns:us-west-2:444455556666:sns-notification-two</Topic>
         <Event>s3:ObjectCreated:*</Event>
         <Filter>
             <S3Key>
                 <FilterRule>
                     <Name>prefix</Name>
                     <Value>images</Value>
                 </FilterRule>
            </S3Key>
        </Filter>
    </TopicConfiguration>             
</NotificationConfiguration>
```

The following notification configuration isn't valid because it has overlapping suffixes. If a given string can end with both suffixes, the two suffixes are considered overlapping. A string can end with `jpg` and `pg`. So, the suffixes overlap. The same is true for prefixes. If a given string can begin with both prefixes, the two prefixes are considered overlapping.

```
 <NotificationConfiguration>
     <TopicConfiguration>
         <Topic>arn:aws:sns:us-west-2:444455556666:sns-topic-one</Topic>
         <Event>s3:ObjectCreated:*</Event>
         <Filter>
             <S3Key>
                 <FilterRule>
                     <Name>suffix</Name>
                     <Value>jpg</Value>
                 </FilterRule>
            </S3Key>
        </Filter>
    </TopicConfiguration>
    <TopicConfiguration>
         <Topic>arn:aws:sns:us-west-2:444455556666:sns-topic-two</Topic>
         <Event>s3:ObjectCreated:Put</Event>
         <Filter>
             <S3Key>
                 <FilterRule>
                     <Name>suffix</Name>
                     <Value>pg</Value>
                 </FilterRule>
            </S3Key>
        </Filter>
    </TopicConfiguration>
</NotificationConfiguration
```

The following notification configuration isn't valid because it has overlapping prefixes and suffixes. 

```
<NotificationConfiguration>
     <TopicConfiguration>
         <Topic>arn:aws:sns:us-west-2:444455556666:sns-topic-one</Topic>
         <Event>s3:ObjectCreated:*</Event>
         <Filter>
             <S3Key>
                 <FilterRule>
                     <Name>prefix</Name>
                     <Value>images</Value>
                 </FilterRule>
                 <FilterRule>
                     <Name>suffix</Name>
                     <Value>jpg</Value>
                 </FilterRule>
            </S3Key>
        </Filter>
    </TopicConfiguration>
    <TopicConfiguration>
         <Topic>arn:aws:sns:us-west-2:444455556666:sns-topic-two</Topic>
         <Event>s3:ObjectCreated:Put</Event>
         <Filter>
             <S3Key>
                 <FilterRule>
                     <Name>suffix</Name>
                     <Value>jpg</Value>
                 </FilterRule>
            </S3Key>
        </Filter>
    </TopicConfiguration>
</NotificationConfiguration>
```

# Event message structure
<a name="notification-content-structure"></a>

The notification message that Amazon S3 sends to publish an event is in the JSON format.

For a general overview and instructions on configuring event notifications, see [Amazon S3 Event Notifications](EventNotifications.md).

This example shows *version 2.1* of the event notification JSON structure. Amazon S3 uses *versions 2.1*, *2.2*, and *2.3* of this event structure. Amazon S3 uses version 2.2 for cross-Region replication event notifications. It uses version 2.3 for S3 Lifecycle, S3 Intelligent-Tiering, object ACL, object tagging, and object restoration delete events. These versions contain extra information specific to these operations. Versions 2.2 and 2.3 are otherwise compatible with version 2.1, which Amazon S3 currently uses for all other event notification types.

```
{  
   "Records":[  
      {  
         "eventVersion":"2.1",
         "eventSource":"aws:s3",
         "awsRegion":"us-west-2",
         "eventTime":"The time, in ISO-8601 format (for example, 1970-01-01T00:00:00.000Z) when Amazon S3 finished processing the request",
         "eventName":"The event type",
         "userIdentity":{  
            "principalId":"The unique ID of the IAM resource that caused the event"
         },
         "requestParameters":{  
            "sourceIPAddress":"The IP address where the request came from"
         },
         "responseElements":{  
            "x-amz-request-id":"The Amazon S3 generated request ID",
            "x-amz-id-2":"The Amazon S3 host that processed the request"
         },
         "s3":{  
            "s3SchemaVersion":"1.0",
            "configurationId":"The ID found in the bucket notification configuration",
            "bucket":{  
               "name":"The name of the bucket, for example, amzn-s3-demo-bucket",
               "ownerIdentity":{  
                  "principalId":"The Amazon retail customer ID of the bucket owner"
               },
               "arn":"The bucket Amazon Resource Name (ARN)"
            },
            "object":{  
               "key":"The object key name",
               "size":"The object size in bytes (as a number)",
               "eTag":"The object entity tag (ETag)",
               "versionId":"The object version if the bucket is versioning-enabled; null or not present if the bucket isn't versioning-enabled",
               "sequencer": "A string representation of a hexadecimal value used to determine event sequence; only used with PUT and DELETE requests"
            }
         },
         "glacierEventData": {
            "restoreEventData": {
               "lifecycleRestorationExpiryTime": "The time, in ISO-8601 format (for example, 1970-01-01T00:00:00.000Z), when the temporary copy of the restored object expires",
               "lifecycleRestoreStorageClass": "The source storage class for restored objects"
            }
         }
      }
   ]
}
```

Note the following about the event message structure:
+ The `eventVersion` key value contains a major and minor version in the form `major`.`minor`.

  The major version is incremented if Amazon S3 makes a change to the event structure that's not backward compatible. This includes removing a JSON field that's already present or changing how the contents of a field are represented (for example, a date format).

  The minor version is incremented if Amazon S3 adds new fields to the event structure. This might occur if new information is provided for some or all existing events. This might also occur if new information is provided only for newly introduced event types. To stay compatible with new minor versions of the event structure, we recommend that your applications ignore new fields.

  If new event types are introduced, but the structure of the event is otherwise unmodified, the event version doesn't change.

  To ensure that your applications can parse the event structure correctly, we recommend that you do an equal-to comparison on the major version number. To ensure that the fields that are expected by your application are present, we also recommend doing a greater-than-or-equal-to comparison on the minor version.
+ The `eventName` key value references the list of [event notification types](https://docs.aws.amazon.com/AmazonS3/latest/userguide/notification-how-to-event-types-and-destinations.html) but doesn't contain the `s3:` prefix.
+ The `userIdentity` key value references the unique ID of the AWS Identity and Access Management (IAM) resource (a user, role, group, and so on) that caused the event. For a definition of each IAM identification prefix (for example, AIDA, AROA, AGPA) and information about how to get the unique identifier, see [Unique identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html##identifiers-unique-ids) in the *IAM User Guide*.
+ The `responseElements` key value is useful if you want to trace a request by following up with AWS Support. Both `x-amz-request-id` and `x-amz-id-2` help Amazon S3 trace an individual request. These values are the same as those that Amazon S3 returns in the response to the request that initiates the events. Therefore, you can use these values to match the event to the request.
+ The `s3` key value provides information about the bucket and object involved in the event. The object key name value is URL encoded. For example, `red flower.jpg` becomes `red+flower.jpg`. (Amazon S3 returns "`application/x-www-form-urlencoded`" as the content type in the response.)

  The `ownerIdentity` key value corresponds to the Amazon retail (Amazon.com) customer ID of the bucket owner. This ID value is no longer used and is maintained only for backward compatibility. 
+ The `sequencer` key value provides a way to determine the sequence of events. Event notifications aren't guaranteed to arrive in the same order that the events occurred. However, notifications from events that create objects (`PUT` requests) and delete objects contain a `sequencer`. You can use this value to determine the order of events for a given object key. 

  If you compare the `sequencer` strings from two event notifications on the same object key, the event notification with the greater `sequencer` hexadecimal value is the event that occurred later. If you're using event notifications to maintain a separate database or index of your Amazon S3 objects, we recommend that you compare and store the `sequencer` values as you process each event notification. 

  Note the following:
  + You can't use the `sequencer` key value to determine the order for events on different object keys.
  + The `sequencer` strings can be of different lengths. So, to compare these values, first left-pad the shorter value with zeros, and then do a lexicographical comparison.
+ The `glacierEventData` key value is only visible for `s3:ObjectRestore:Completed` events. 
+ The `restoreEventData` key value contains attributes that are related to your restore request.
+ The `replicationEventData` key value is only visible for replication events.
+ The `intelligentTieringEventData` key value is only visible for S3 Intelligent-Tiering events.
+ The `lifecycleEventData` key value is only visible for S3 Lifecycle transition events.

## Example messages
<a name="notification-content-structure-examples"></a>

The following are examples of Amazon S3 event notification messages.

**Amazon S3 test message**  
After you configure an event notification on a bucket, Amazon S3 sends the following test message.

```
1. {  
2.    "Service":"Amazon S3",
3.    "Event":"s3:TestEvent",
4.    "Time":"2014-10-13T15:57:02.089Z",
5.    "Bucket":"amzn-s3-demo-bucket",
6.    "RequestId":"5582815E1AEA5ADF",
7.    "HostId":"8cLeGAmw098X5cv4Zkwcmo8vvZa3eH3eKxsPzbB9wrR+YstdA6Knx4Ip8EXAMPLE"
8. }
```

**Note**  
The `s3:TestEvent` message uses a different format than regular S3 event notifications. Unlike other event notifications that use the `Records` array structure shown earlier, the test event uses a simplified format with direct fields. When implementing event handling, ensure your code can distinguish between and properly handle both message formats.

**Example message when an object is created using a `PUT` request**  
The following is an example of a message that Amazon S3 sends to publish an `s3:ObjectCreated:Put` event.

```
 1. {  
 2.    "Records":[  
 3.       {  
 4.          "eventVersion":"2.1",
 5.          "eventSource":"aws:s3",
 6.          "awsRegion":"us-west-2",
 7.          "eventTime":"1970-01-01T00:00:00.000Z",
 8.          "eventName":"ObjectCreated:Put",
 9.          "userIdentity":{  
10.             "principalId":"AIDAJDPLRKLG7UEXAMPLE"
11.          },
12.          "requestParameters":{  
13.             "sourceIPAddress":"172.16.0.1"
14.          },
15.          "responseElements":{  
16.             "x-amz-request-id":"C3D13FE58DE4C810",
17.             "x-amz-id-2":"FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
18.          },
19.          "s3":{  
20.             "s3SchemaVersion":"1.0",
21.             "configurationId":"testConfigRule",
22.             "bucket":{  
23.                "name":"amzn-s3-demo-bucket",
24.                "ownerIdentity":{  
25.                   "principalId":"A3NL1KOZZKExample"
26.                },
27.                "arn":"arn:aws:s3:::amzn-s3-demo-bucket"
28.             },
29.             "object":{  
30.                "key":"HappyFace.jpg",
31.                "size":1024,
32.                "eTag":"d41d8cd98f00b204e9800998ecf8427e",
33.                "versionId":"096fKKXTRTtl3on89fVO.nfljtsv6qko",
34.                "sequencer":"0055AED6DCD90281E5"
35.             }
36.          }
37.       }
38.    ]
39. }
```



# Using EventBridge
<a name="EventBridge"></a>

Amazon S3 can send events to Amazon EventBridge whenever certain events happen in your bucket. Unlike other destinations, you don't need to select which event types you want to deliver. After EventBridge is enabled, all events below are sent to EventBridge. You can use EventBridge rules to route events to additional targets. The following lists the events Amazon S3 sends to EventBridge.


|  Event type |  Description  | 
| --- | --- | 
|  *Object Created*  |  An object was created. The reason field in the event message structure indicates which S3 API was used to create the object: [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html), [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html), [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html), or [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html).  | 
|  *Object Deleted (DeleteObject)* *Object Deleted (Lifecycle expiration)*  |  An object was deleted. When an object is deleted using an S3 API call, the reason field is set to DeleteObject. When an object is deleted by an S3 Lifecycle expiration rule, the reason field is set to Lifecycle Expiration. For more information, see [Expiring objects](lifecycle-expire-general-considerations.md). When an unversioned object is deleted, or a versioned object is permanently deleted, the deletion-type field is set to Permanently Deleted. When a delete marker is created for a versioned object, the `deletion-type` field is set to Delete Marker Created. For more information, see [Deleting object versions from a versioning-enabled bucket](DeletingObjectVersions.md).  | 
|  *Object Restore Initiated*  |  An object restore was initiated from S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage class or from S3 Intelligent-Tiering Archive Access or Deep Archive Access tier. For more information, see [Working with archived objects](archived-objects.md).  | 
|  *Object Restore Completed*  |  An object restore was completed.  | 
|  *Object Restore Expired*  |  The temporary copy of an object restored from S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive expired and was deleted.  | 
|  *Object Storage Class Changed*  |  An object was transitioned to a different storage class. For more information, see [Transitioning objects using Amazon S3 Lifecycle](lifecycle-transition-general-considerations.md).  | 
|  *Object Access Tier Changed*  |  An object was transitioned to the S3 Intelligent-Tiering Archive Access tier or Deep Archive Access tier. For more information, see [Managing storage costs with Amazon S3 Intelligent-Tiering](intelligent-tiering.md).  | 
|  *Object ACL Updated*  |  An object's access control list (ACL) was set using `PutObjectAcl`. An event is not generated when a request results in no change to an object’s ACL. For more information, see [Access control list (ACL) overview](acl-overview.md).  | 
|  *Object Tags Added*  |  A set of tags was added to an object using `PutObjectTagging`. For more information, see [Categorizing your objects using tags](object-tagging.md).  | 
|  *Object Tags Deleted*  |  All tags were removed from an object using `DeleteObjectTagging`. For more information, see [Categorizing your objects using tags](object-tagging.md).  | 

**Note**  
For more information about how Amazon S3 event types map to EventBridge event types, see [Amazon EventBridge mapping and troubleshooting](ev-mapping-troubleshooting.md).

You can use Amazon S3 Event Notifications with EventBridge to write rules that take actions when an event occurs in your bucket. For example, you can have it send you a notification. For more information, see [What is EventBridge?](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) in the *Amazon EventBridge User Guide*.

For more information about the actions and data types you can interact with using the EventBridge API, see the [Amazon EventBridge API Reference](https://docs.aws.amazon.com/eventbridge/latest/APIReference/Welcome.html) in the *Amazon EventBridge API Reference*.

For information about pricing, see [Amazon EventBridge pricing](https://aws.amazon.com/eventbridge/pricing).

**Topics**
+ [Amazon EventBridge permissions](ev-permissions.md)
+ [Enabling Amazon EventBridge](enable-event-notifications-eventbridge.md)
+ [EventBridge event message structure](ev-events.md)
+ [Amazon EventBridge mapping and troubleshooting](ev-mapping-troubleshooting.md)

# Amazon EventBridge permissions
<a name="ev-permissions"></a>

Amazon S3 does not require any additional permissions to deliver events to Amazon EventBridge.

# Enabling Amazon EventBridge
<a name="enable-event-notifications-eventbridge"></a>

You can enable Amazon EventBridge by using the S3 console, AWS Command Line Interface (AWS CLI), or Amazon S3 REST API. 

**Note**  
After you enable EventBridge, it takes around five minutes for the changes to take effect.

## Using the S3 console
<a name="eventbridge-console"></a>

**To enable EventBridge event delivery in the S3 console.**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to enable events for.

1. Choose **Properties**.

1. Navigate to the **Event Notifications** section and find the **Amazon EventBridge** subsection. Choose **Edit**.

1. Under** Send notifications to Amazon EventBridge for all events in this bucket** choose **On**.

## Using the AWS CLI
<a name="eventbridge-cli"></a>

The following example creates a bucket notification configuration for bucket *`amzn-s3-demo-bucket1`* with Amazon EventBridge enabled.

```
aws s3api put-bucket-notification-configuration --bucket amzn-s3-demo-bucket1 --notification-configuration='{ "EventBridgeConfiguration": {} }'
```

## Using the REST API
<a name="eventbridge-api"></a>

You can programmatically enable Amazon EventBridge on a bucket by calling the Amazon S3 REST API. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketNotificationConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketNotificationConfiguration.html) in the *Amazon Simple Storage Service API Reference*.

The following example shows the XML used to create a bucket notification configuration with Amazon EventBridge enabled.

```
<NotificationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <EventBridgeConfiguration>
  </EventBridgeConfiguration>
</NotificationConfiguration>
```

## Creating EventBridge rules
<a name="ev-tutorial"></a>

Once enabled you can create Amazon EventBridge rules for certain tasks. For example, you can send email notifications when an object is created. For a full tutorial, see [Tutorial: Send a notification when an Amazon S3 object is created](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-s3-object-created-tutorial.html) in the *Amazon EventBridge User Guide*.

# EventBridge event message structure
<a name="ev-events"></a>

The notification message that Amazon S3 sends to publish an event is in the JSON format. When Amazon S3 sends an event to Amazon EventBridge, the following fields are present.
+ `version` – Currently 0 (zero) for all events.
+ `id` – A UUID generated for every event.
+ `detail-type` – The type of event that's being sent. See [Using EventBridge](EventBridge.md) for a list of event types.
+ `source` – Identifies the service that generated the event.
+ `account` – The 12-digit AWS account ID of the bucket owner.
+ `time` – The time the event occurred.
+ `region` – Identifies the AWS Region of the bucket.
+ `resources` – A JSON array that contains the Amazon Resource Name (ARN) of the bucket.
+ `detail` – A JSON object that contains information about the event. For more information about what can be included in this field, see [Event message detail field](#ev-events-detail).

## Event message structure examples
<a name="ev-events-list"></a>

The following are examples of some of the Amazon S3 event notification messages that can be sent to Amazon EventBridge.

### Object created
<a name="ev-events-object-created"></a>

```
{
  "version": "0",
  "id": "17793124-05d4-b198-2fde-7ededc63b103",
  "detail-type": "Object Created",
  "source": "aws.s3",
  "account": "111122223333",
  "time": "2021-11-12T00:00:00Z",
  "region": "ca-central-1",
  "resources": [
    "arn:aws:s3:::amzn-s3-demo-bucket1"
  ],
  "detail": {
    "version": "0",
    "bucket": {
      "name": "amzn-s3-demo-bucket1"
    },
    "object": {
      "key": "example-key",
      "size": 5,
      "etag": "b1946ac92492d2347c6235b4d2611184",
      "version-id": "IYV3p45BT0ac8hjHg1houSdS1a.Mro8e",
      "sequencer": "617f08299329d189"
    },
    "request-id": "N4N7GDK58NMKJ12R",
    "requester": "123456789012",
    "source-ip-address": "1.2.3.4",
    "reason": "PutObject"
  }
}
```

### Object deleted (using DeleteObject)
<a name="ev-events-object-deleted"></a>

```
{
  "version": "0",
  "id": "2ee9cc15-d022-99ea-1fb8-1b1bac4850f9",
  "detail-type": "Object Deleted",
  "source": "aws.s3",
  "account": "111122223333",
  "time": "2021-11-12T00:00:00Z",
  "region": "ca-central-1",
  "resources": [
    "arn:aws:s3:::amzn-s3-demo-bucket1"
  ],
  "detail": {
    "version": "0",
    "bucket": {
      "name": "amzn-s3-demo-bucket1"
    },
    "object": {
      "key": "example-key",
      "etag": "d41d8cd98f00b204e9800998ecf8427e",
      "version-id": "1QW9g1Z99LUNbvaaYVpW9xDlOLU.qxgF",
      "sequencer": "617f0837b476e463"
    },
    "request-id": "0BH729840619AG5K",
    "requester": "123456789012",
    "source-ip-address": "1.2.3.4",
    "reason": "DeleteObject",
    "deletion-type": "Delete Marker Created"
  }
}
```

### Object deleted (using lifecycle expiration)
<a name="ev-events-object-deleted-lifecycle"></a>

```
{
  "version": "0",
  "id": "ad1de317-e409-eba2-9552-30113f8d88e3",
  "detail-type": "Object Deleted",
  "source": "aws.s3",
  "account": "111122223333",
  "time": "2021-11-12T00:00:00Z",
  "region": "ca-central-1",
  "resources": [
    "arn:aws:s3:::amzn-s3-demo-bucket1"
  ],
  "detail": {
    "version": "0",
    "bucket": {
      "name": "amzn-s3-demo-bucket1"
    },
    "object": {
      "key": "example-key",
      "etag": "d41d8cd98f00b204e9800998ecf8427e",
      "version-id": "mtB0cV.jejK63XkRNceanNMC.qXPWLeK",
      "sequencer": "617b398000000000"
    },
    "request-id": "20EB74C14654DC47",
    "requester": "s3.amazonaws.com",
    "reason": "Lifecycle Expiration",
    "deletion-type": "Delete Marker Created"
  }
}
```

### Object restore completed
<a name="ev-events-object-restore-complete"></a>

```
{
  "version": "0",
  "id": "6924de0d-13e2-6bbf-c0c1-b903b753565e",
  "detail-type": "Object Restore Completed",
  "source": "aws.s3",
  "account": "111122223333",
  "time": "2021-11-12T00:00:00Z",
  "region": "ca-central-1",
  "resources": [
    "arn:aws:s3:::amzn-s3-demo-bucket1"
  ],
  "detail": {
    "version": "0",
    "bucket": {
      "name": "amzn-s3-demo-bucket1"
    },
    "object": {
      "key": "example-key",
      "size": 5,
      "etag": "b1946ac92492d2347c6235b4d2611184",
      "version-id": "KKsjUC1.6gIjqtvhfg5AdMI0eCePIiT3"
    },
    "request-id": "189F19CB7FB1B6A4",
    "requester": "s3.amazonaws.com",
    "restore-expiry-time": "2021-11-13T00:00:00Z",
    "source-storage-class": "GLACIER"
  }
}
```

## Event message detail field
<a name="ev-events-detail"></a>

The detail field contains a JSON object with information about the event. The following fields may be present in the detail field.
+ `version` – Currently 0 (zero) for all events.
+ `bucket` – Information about the Amazon S3 bucket involved in the event.
+ `object` – Information about the Amazon S3 object involved in the event.
+ `request-id` – Request ID in S3 response.
+ `requester` – AWS account ID or AWS service principal of requester.
+ `source-ip-address` – Source IP address of S3 request. Only present for events triggered by an S3 request.
+ `reason` – For **Object Created** events, the S3 API used to create the object: [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html), [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html), [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html), or [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html). For **Object Deleted** events, this is set to **DeleteObject** when an object is deleted by an S3 API call, or **Lifecycle Expiration** when an object is deleted by an S3 Lifecycle expiration rule. For more information, see [Expiring objects](lifecycle-expire-general-considerations.md).
+ `deletion-type` – For **Object Deleted** events, when an unversioned object is deleted, or a versioned object is permanently deleted, this is set to **Permanently Deleted**. When a delete marker is created for a versioned object, this is set to **Delete Marker Created**. For more information, see [Deleting object versions from a versioning-enabled bucket](DeletingObjectVersions.md).
**Note**  
Some object attributes (such as `etag` and `size`) are present only when a delete marker is created.
+ `restore-expiry-time` – For **Object Restore Completed** events, the time when the temporary copy of the object will be deleted from S3. For more information, see [Working with archived objects](archived-objects.md).
+ `source-storage-class` – For **Object Restore Initiated** and **Object Restore Completed** events, the storage class of the object being restored. For more information, see [Working with archived objects](archived-objects.md).
+ `destination-storage-class` – For **Object Storage Class Changed** events, the new storage class of the object. For more information, see [Transitioning objects using Amazon S3 Lifecycle](lifecycle-transition-general-considerations.md).
+ `destination-access-tier` – For **Object Access Tier Changed** events, the new access tier of the object. For more information, see [Managing storage costs with Amazon S3 Intelligent-Tiering](intelligent-tiering.md).

# Amazon EventBridge mapping and troubleshooting
<a name="ev-mapping-troubleshooting"></a>

The following table describes how Amazon S3 event types are mapped to Amazon EventBridge event types.


|  S3 event type |  Amazon EventBridge detail type  | 
| --- | --- | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html) [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html) [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html)  |  Object Created  | 
|  ObjectRemoved:Delete ObjectRemoved:DeleteMarkerCreated LifecycleExpiration:Delete LifecycleExpiration:DeleteMarkerCreated  |  Object Deleted  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html)  |  Object Restore Initiated  | 
|  ObjectRestore:Completed  |  Object Restore Completed  | 
|  ObjectRestore:Delete  |  Object Restore Expired  | 
|  LifecycleTransition  |  Object Storage Class Changed  | 
|  IntelligentTiering  |  Object Access Tier Changed  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html)  |  Object Tags Added  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjectTagging.html)  |  Object Tags Deleted  | 
|  [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html)  |  Object ACL Updated  | 

## Amazon EventBridge troubleshooting
<a name="ev-troubleshooting"></a>

For information about how to troubleshoot EventBridge, see [Troubleshooting Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-troubleshooting.html) in the *Amazon EventBridge User Guide*.

# Monitoring your storage activity and usage with Amazon S3 Storage Lens
<a name="storage_lens"></a>

Amazon S3 Storage Lens is a cloud-storage analytics feature that you can use to gain organization-wide visibility into object storage and activity. S3 Storage Lens also analyzes metrics to deliver contextual recommendations that you can use to optimize storage costs and apply best practices for protecting your data. 

You can use S3 Storage Lens metrics to generate summary insights. For example, you can find out how much storage you have across your entire organization or which are the fastest-growing buckets and prefixes. You can also use S3 Storage Lens metrics to identify cost optimization opportunities, implement data protection and access management best practices, and improve the performance of application workloads. For example, you can identify buckets that don't have S3 Lifecycle rules set up to expire incomplete multipart uploads that are more than 7 days old. You can also identify buckets that aren't following data protection best practices, such as using S3 Replication or S3 Versioning. 

S3 Storage Lens aggregates your metrics and displays the information in the **Account snapshot** section on the Amazon S3 console **Buckets** page. S3 Storage Lens also provides an interactive dashboard that you can use to visualize insights and trends, flag outliers, and receive recommendations for optimizing storage costs and applying data protection best practices. Your dashboard has drill-down options to generate and visualize insights at the organization, account, AWS Region, storage class, bucket, prefix, or Storage Lens group level. You can also send a daily metrics report in CSV or Parquet format to a general purpose S3 bucket or export the metrics directly to an AWS-managed S3 table bucket. 

![\[The Snapshot for date section in the S3 Storage Lens dashboard.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/storage-lens-dashboard.png)


## S3 Storage Lens metrics and features
<a name="storage-lens-dashboards-intro"></a>

S3 Storage Lens provides an interactive *default dashboard* that is updated daily. S3 Storage Lens preconfigures this dashboard to visualize the summarized insights and trends for your entire account and updates them daily in the S3 console. Metrics from this dashboard are also summarized in your account snapshot on the **Buckets** page. For more information, see [Default dashboard](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_default_dashboard).

To create other dashboards and scope them by AWS Regions, S3 buckets, or accounts (for AWS Organizations), you create an S3 Storage Lens dashboard configuration. You can create and manage S3 Storage Lens dashboard configurations by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), AWS SDKs, or Amazon S3 REST API. When you create or edit an S3 Storage Lens dashboard, you define your dashboard scope and metrics selection. 

S3 Storage Lens offers free tier metrics and advanced tier metrics, which you can upgrade to for an additional charge. With the advanced tier, you can access additional metrics and features for gaining insight into your storage. These features include advanced metric categories, prefix aggregation, contextual recommendations, expanded prefixes metrics reports, and Amazon CloudWatch publishing. Prefix aggregation and contextual recommendations are available only in the Amazon S3 console. For information about S3 Storage Lens pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing).

**Metrics categories**  
Within the free and advanced tiers, metrics are organized into categories that align with key use cases, such as cost optimization and data protection. Free metrics include summary, cost optimization, data protection, access management, performance, and event metrics. When you upgrade to the advanced tier, you can enable advanced cost optimization and data protection metrics. You can use these advanced metrics to further reduce your S3 storage costs and improve your data protection stance. You can also enable activity metrics and detailed status-code metrics to improve the performance of application workloads that are accessing your S3 buckets. For more information about the free and advanced metrics categories, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).

You can assess your storage based on S3 best practices, such as analyzing the percentage of your buckets that have encryption or S3 Object Lock or S3 Versioning enabled. You can also identify potential cost-savings opportunities. For example, you can use S3 Lifecycle rule count metrics to identify buckets that are missing lifecycle expiration or transition rules. You can also analyze your request activity per bucket to find buckets where objects could be transitioned to a lower-cost storage class. For more information, see [Amazon S3 Storage Lens metrics use cases](storage-lens-use-cases.md).

**Metrics export**

**Default metrics report**  
The default metrics report in S3 Storage Lens includes free metrics and advanced tier metrics covering object storage usage and activity trends across your AWS accounts. The report includes prefix aggregation for prefixes whose objects comprise at least 1% of the total data stored in the bucket, and supports up to 10 levels of prefix depth. The report can be exported daily in CSV or Parquet format to an S3 general purpose bucket. The report can also be sent to an AWS-managed S3 table bucket (with name `aws-s3`) making it easy to query using AWS analytics services or third-party tools.

With the default metrics report, you can identify cost optimization opportunities like buckets without S3 Lifecycle rules for incomplete multipart uploads and buckets not following data protection best practices such as S3 Replication or S3 Versioning. The default metrics report also provides contextual recommendations for optimizing storage costs and applying data protection best practices, at no additional charge beyond standard S3 storage costs.

**Expanded prefixes metrics report**  
The Storage Lens expanded prefixes metrics report provides comprehensive prefix-level analytics across your entire S3 storage data, expanding coverage to support billions of prefixes in your bucket. This report delivers metrics for all prefixes in your buckets, including storage usage, bytes transferred, request counts by status code, and data protection compliance metrics, which you can export daily in CSV or Parquet format to S3 general purpose bucket. You can also export the metrics directly to the `aws-s3` AWS-managed S3 table bucket.

**Note**  
The report processes metrics for prefixes up to 50 levels deep and excludes prefix-level metrics for any bucket where the prefix and storage class combinations exceed twice the object count.

With the expanded prefixes metrics report, you can identify performance optimization opportunities, such as high error rates, small objects, or sub-optimal request patterns, across billions of prefixes in your bucket. Unlike the default metrics report, the expanded prefixes metrics report delivers metrics for granular prefixes in your bucket. For example, you can identify prefixes with large numbers of objects of size less than 128KB to quickly isolate such datasets for compaction that will improve application performance. This report is available in all AWS Regions as an opt-in feature in the Storage Lens advanced tier dashboard configuration.

**Metrics publishing**

**Amazon CloudWatch publishing**  
You can publish S3 Storage Lens usage and activity metrics to Amazon CloudWatch to create a unified view of your operational health in CloudWatch [dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html). You can also use CloudWatch features, such as alarms and triggered actions, metric math, and anomaly detection, to monitor and take action on S3 Storage Lens metrics. In addition, CloudWatch API operations enable applications, including third-party providers, to access your S3 Storage Lens metrics. The CloudWatch publishing option is available for dashboards that are upgraded to the S3 Storage Lens advanced tier. For more information about support for S3 Storage Lens metrics in CloudWatch, see [Monitor S3 Storage Lens metrics in CloudWatch](storage_lens_view_metrics_cloudwatch.md).

For more information about using S3 Storage Lens, see the following topics.

**Topics**
+ [S3 Storage Lens metrics and features](#storage-lens-dashboards-intro)
+ [Understanding Amazon S3 Storage Lens](storage_lens_basics_metrics_recommendations.md)
+ [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md)
+ [Setting Amazon S3 Storage Lens permissions](storage_lens_iam_permissions.md)
+ [Working with Amazon S3 Storage Lens by using the console and API](S3LensExamples.md)
+ [Viewing metrics with Amazon S3 Storage Lens](storage_lens_view_metrics.md)
+ [Working with S3 Storage Lens data in S3 Tables](storage-lens-s3-tables.md)
+ [Using Amazon S3 Storage Lens with AWS Organizations](storage_lens_with_organizations.md)
+ [Working with S3 Storage Lens groups to filter and aggregate metrics](storage-lens-groups-overview.md)

# Understanding Amazon S3 Storage Lens
<a name="storage_lens_basics_metrics_recommendations"></a>

**Important**  
Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3. Starting January 5, 2023, all new object uploads to Amazon S3 are automatically encrypted at no additional cost and with no impact on performance. The automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in CloudTrail logs, S3 Inventory, S3 Storage Lens, the Amazon S3 console, and as an additional Amazon S3 API response header in the AWS CLI and AWS SDKs. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html).

Amazon S3 Storage Lens is a cloud-storage analytics feature that you can use to gain organization-wide visibility into object-storage usage and activity. You can use S3 Storage Lens metrics to generate summary insights, such as finding out how much storage you have across your entire organization or which are the fastest-growing buckets and prefixes. You can also use S3 Storage Lens metrics to identify cost-optimization opportunities, implement data-protection and security best practices, and improve the performance of application workloads. For example, you can identify buckets that don't have S3 Lifecycle rules to expire incomplete multipart uploads that are more than 7 days old. You can also identify buckets that aren't following data-protection best practices, such as using S3 Replication or S3 Versioning. S3 Storage Lens also analyzes metrics to deliver contextual recommendations that you can use to optimize storage costs and apply best practices for protecting your data. 

S3 Storage Lens aggregates your metrics and displays the information in the **Account snapshot** section on the Amazon S3 console **Buckets** page. S3 Storage Lens also provides an interactive dashboard that you can use to visualize insights and trends, flag outliers, and receive recommendations for optimizing storage costs and applying data protection best practices. Your dashboard has drill-down options to generate and visualize insights at the organization, account, AWS Region, storage class, bucket, prefix, or Storage Lens group level. You can also send a daily metrics report in CSV or Parquet format to a general purpose S3 bucket or export the metrics directly to an AWS-managed S3 table bucket. You can create and manage S3 Storage Lens dashboards by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), AWS SDKs, or Amazon S3 REST API. 

## S3 Storage Lens concepts and terminology
<a name="storage_lens_basics"></a>

This section contains the terminology and concepts that are essential for successfully understanding and using Amazon S3 Storage Lens.

**Topics**
+ [Dashboard configuration](#storage_lens_basics_configuration)
+ [Default dashboard](#storage_lens_basics_default_dashboard)
+ [Dashboards](#storage_lens_basics_dashboards)
+ [Account snapshot](#storage_lens_basics_account_snapshot)
+ [Metrics export](#storage_lens_basics_metrics_export)
+ [Metrics export destinations](#storage_lens_basics_metrics_export_destinations)
+ [Home Region](#storage_lens_basics_home_region)
+ [Retention period](#storage_lens_basics_data_queries)
+ [Metrics categories](#storage_lens_basics_metrics_types)
+ [Recommendations](#storage_lens_basics_recommendations)
+ [Metrics selection](#storage_lens_basics_metrics_selection)
+ [Prefix delimiter](#storage_lens_basics_prefix_delimiter)
+ [S3 Storage Lens and AWS Organizations](#storage_lens_basics_organizations)

### Dashboard configuration
<a name="storage_lens_basics_configuration"></a>

S3 Storage Lens requires a dashboard configuration that contains the properties required to aggregate metrics on your behalf for a single dashboard or export. When you create a configuration, you choose the dashboard name and the home Region, which you can't change after you create the dashboard. You can optionally add tags and configure a metrics export in CSV or Parquet format. 

In the dashboard configuration, you also define the dashboard scope and the metrics selection. The scope can include all the storage for your organization account or sections that are filtered by Region, bucket, and account. When you configure the metrics selection, you choose between free tier metrics and advanced tier metrics, which you can upgrade to for an additional charge. With the advanced tier, you can access additional metrics and features. These features include advanced metric categories, prefix-level aggregation, contextual recommendations, and Amazon CloudWatch publishing. For information about S3 Storage Lens pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing).

### Default dashboard
<a name="storage_lens_basics_default_dashboard"></a>

The S3 Storage Lens default dashboard on the console is named **default-account-dashboard**. S3 preconfigures this dashboard to visualize the summarized insights and trends for your entire account and updates them daily in the S3 console. You can't modify the configuration scope of the default dashboard, but you can upgrade the metrics selection from free tier metrics to advanced tier metrics. You can configure the optional metrics export or even disable the dashboard. However, you can't delete the default dashboard.

**Note**  
If you disable your default dashboard, it's no longer updated. You'll no longer receive any new daily metrics in your S3 Storage Lens dashboard, your metrics export, or the account snapshot on the S3 **Buckets** page. If your dashboard uses advanced metrics, you'll no longer be charged. You can still see historic data in the dashboard until the 14-day period for data queries expires. This period is 15 months if you've enabled advanced metrics. To access historic data, you can re-enable the dashboard within the expiration period.

### Dashboards
<a name="storage_lens_basics_dashboards"></a>

You can create additional S3 Storage Lens dashboards and scope them by AWS Regions, S3 buckets, or accounts (for AWS Organizations). When you create or edit a S3 Storage Lens dashboard, you define your dashboard scope and metrics selection. S3 Storage Lens offers free tier metrics and advanced tier metrics, which you can upgrade to for an additional charge. With advanced metrics, you can access additional metrics and features for gaining insight into your storage. These include advanced metric categories, prefix-level aggregation, contextual recommendations, and Amazon CloudWatch publishing. For information about S3 Storage Lens pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing).

You can also disable or delete dashboards. If you disable a dashboard, it's no longer updated, and you will no longer receive any new daily metrics. You can still see historic data until the 14-day expiration period. If you enabled advanced metrics for that dashboard, this period is 15 months. To access historic data, you can re-enable the dashboard within the expiration period. 

If you delete your dashboard, you lose all your dashboard configuration settings. You will no longer receive any new daily metrics, and you also lose access to the historical data associated with that dashboard. If you want to access the historic data for a deleted dashboard, you must create another dashboard with the same name in the same home Region.

**Note**  
You can use S3 Storage Lens to create up to 50 dashboards per home Region.
Organization-level dashboards can be limited only to a Regional scope.

### Account snapshot
<a name="storage_lens_basics_account_snapshot"></a>

The S3 Storage Lens **Account snapshot** summarizes metrics from your default dashboard and displays your total storage, object count, and average object size on the S3 console **Buckets** page. This account snapshot gives you quick access to insights about your storage without having to leave the **Buckets** page. The account snapshot also provides one-click access to your interactive S3 Storage Lens dashboard. 

You can use your dashboard to visualize insights and trends, flag outliers, and receive recommendations for optimizing storage costs and applying data protection best practices. Your dashboard has drill-down options to generate insights at the organization, account, bucket, object, or prefix level. You can also send a once-daily metrics export to an S3 bucket in CSV or Parquet format.

You can't modify the dashboard scope of the **default-account dashboard** because it's linked to the **Account snapshot**. However, you can upgrade the metrics selection in your **default-account-dashboard** from free metrics to paid advanced metrics. After upgrading, you can then display all requests, bytes uploaded, and bytes downloaded in the S3 Storage Lens **Account snapshot**. 

**Note**  
If you disable your default dashboard, your **Account snapshot** is no longer updated. To continue displaying metrics in the **Account snapshot**, you can re-enable the **default-account-dashboard**.

### Metrics export
<a name="storage_lens_basics_metrics_export"></a>

An S3 Storage Lens metrics export is a file that contains all the metrics identified in your S3 Storage Lens configuration. This information is generated daily in CSV or Parquet format and is sent to a general purpose S3 bucket. You can also export the metrics directly to the `aws-s3` AWS-managed S3 table bucket making it easy to query using AWS analytics services or third-party tools. You can use the metrics export for further analysis by using the metrics tool of your choice. The bucket specified for your metrics export must be in the same Region as your S3 Storage Lens configuration. You can generate an S3 Storage Lens metrics export from the S3 console by editing your dashboard configuration. You can also configure a metrics export by using the AWS CLI and AWS SDKs.

There are two types of metric exports available in Storage Lens:
+ **Default metrics report** – The default metrics report in S3 Storage Lens includes free metrics and activity trends across your AWS account and aggregates usage metrics for top prefixes.
+ **Expanded prefixes metrics report** – The Storage Lens expanded prefixes metrics report provides granular storage and activity metrics (such as storage usage, bytes transferred, and request counts by status code) at the prefix level for every prefix in your bucket. This report is available as an opt-in feature in all AWS Regions, through the advanced pricing tier in your Storage Lens dashboard configuration. For information about S3 Storage Lens feature pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing).

**Note**  
Storage Lens only generates metrics for [S3 general purpose buckets](UsingBucket.md).

### Metrics export destinations
<a name="storage_lens_basics_metrics_export_destinations"></a>

When exporting Storage Lens metrics data, you can choose both an S3 general purpose bucket or an S3 table bucket as your destination. General purpose buckets provide broad compatibility with existing tools and applications, offering flexibility to process data within your account, using your preferred analytics services. This option supports standard S3 access patterns and integrations for data analysis within individual buckets in your Region. In contrast, S3 table bucket lets you run immediate queries across multiple accounts and regions, create custom dashboards with Amazon Quick, and join data with other AWS services or third-party tools, without the need for additional processing infrastructure. For example, you can combine Storage Lens metrics with S3 Metadata to analyze object activity patterns across your organization.

#### S3 general purpose bucket
<a name="storage_lens_basics_s3_general_purpose_bucket"></a>

Exporting Storage Lens metrics to an S3 general purpose bucket offers flexibility and continuity for storing your Storage Lens data. You can maintain existing workflows and operational consistency by continuing to use your current infrastructure and existing extract, transform, and load (ETL) processes, analytics tools, or automated workflows. General purpose buckets also work with the full range of AWS services and third-party tools that support standard S3 APIs. This gives you maximum flexibility in how you process, analyze, or visualize your Storage Lens insights. Additionally, you can implement S3 lifecycle policies to automatically manage data retention, transitioning older metrics to lower-cost storage classes or deleting them after specified periods to optimize costs. Therefore, if operational continuity and workflow flexibility are your priorities for Storage Lens implementation, then consider choosing an S3 general purpose bucket for exporting your Storage Lens data. For more information about S3 general purpose buckets pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing).

#### S3 table bucket
<a name="storage_lens_basics_s3_table_bucket"></a>

When exporting Storage Lens metrics to S3 table bucket, you can easily analyze your storage usage and activity metrics without building data pipelines. Your metrics are organized in S3 Tables that are created in an AWS-managed S3 table bucket called `aws-s3` for optimal query performance, with customizable retention periods and encryption settings to meet your data management needs. With your metrics in S3 Tables, you can run queries across multiple accounts and Regions using SQL tools and AWS analytics services (like Amazon Athena, Amazon Quick, Amazon EMR, and Amazon Redshift) to create custom dashboards and generate deeper insights. For example, you can join S3 Storage Lens metrics with S3 Metadata to identify objects in prefixes that aren't showing any recent activity. Any data stored in an S3 table bucket incurs S3 Tables costs. For more information about S3 Tables pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing).

### Home Region
<a name="storage_lens_basics_home_region"></a>

The home Region is the AWS Region where all S3 Storage Lens metrics for a given dashboard configuration are stored. You must choose a home Region when you create your S3 Storage Lens dashboard configuration. After you choose a home Region, you can't change it. Also, if you're creating a Storage Lens group, we recommend that you choose the same home Region as your Storage Lens dashboard.

**Note**  
You can choose one of the following Regions as your home Region:  
US East (N. Virginia) – `us-east-1`
US East (Ohio) – `us-east-2`
US West (N. California) – `us-west-1`
US West (Oregon) – `us-west-2`
Asia Pacific (Mumbai) – `ap-south-1`
Asia Pacific (Seoul) – `ap-northeast-2`
Asia Pacific (Singapore) – `ap-southeast-1`
Asia Pacific (Sydney) – `ap-southeast-2`
Asia Pacific (Tokyo) – `ap-northeast-1`
Canada (Central) – `ca-central-1`
China (Beijing) – `cn-north-1`
China (Ningxia) – `cn-northwest-1`
Europe (Frankfurt) – `eu-central-1`
Europe (Ireland) – `eu-west-1`
Europe (London) – `eu-west-2`
Europe (Paris) – `eu-west-3`
Europe (Stockholm) – `eu-north-1`
South America (São Paulo) – `sa-east-1`

### Retention period
<a name="storage_lens_basics_data_queries"></a>

S3 Storage Lens metrics are retained so that you can see historical trends and compare differences in your storage and activity over time. You can use Amazon S3 Storage Lens metrics for queries so that you can see historical trends and compare differences in your storage usage and activity over time. 

All S3 Storage Lens metrics are retained for a period of 15 months. However, metrics are only available for queries for a specific duration, which depends on your [metrics selection](#storage_lens_basics_metrics_selection). This duration can't be modified. Free metrics are available for queries for a 14-day period, and advanced metrics are available for queries for a 15-month period.

### Metrics categories
<a name="storage_lens_basics_metrics_types"></a>

Within the free and advanced tiers, S3 Storage Lens metrics are organized into categories that align with key use cases, such as cost optimization and data protection. Free metrics include summary, cost optimization, data protection, access management, performance, and event metrics. When you upgrade to advanced metrics, you can enable additional cost optimization and data protection metrics that you can use to further reduce your S3 storage costs and ensure your data is protected. You can also enable activity metrics and detailed status-code metrics that you can use to improve the performance of application workflows.

The following list shows all of the free and advanced metric categories. For a complete list of the individual metrics included in each category, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).

**Summary metrics**  
Summary metrics provide general insights about your S3 storage, including your total storage bytes and object count. 

**Cost optimization metrics**  
Cost optimization metrics provide insights that you can use to manage and optimize your storage costs. For example, you can identify buckets that have incomplete multipart uploads that are more than 7-days old.

With advanced metrics, you can enable advanced cost optimization metrics. These metrics include S3 Lifecycle rule count metrics that you can use to get per-bucket expiration and transition S3 Lifecycle rule counts. 

**Data-protection metrics**  
Data-protection metrics provide insights for data protection features, such as encryption and S3 Versioning. You can use these metrics to identify buckets that are not following data protection best practices. For example, you can identify buckets that are not using default encryption with AWS Key Management Service keys (SSE-KMS) or S3 Versioning.

With advanced metrics, you can enable advanced data protection metrics. These metrics include per-bucket replication rule count metrics.

**Access management metrics**  
Access management metrics provide insights for S3 Object Ownership. You can use these metrics to see which Object Ownership settings your buckets use.

**Event metrics**  
Event metrics provide insights for S3 Event Notifications. With event metrics, you can see which buckets have S3 Event Notifications configured.

**Performance metrics**  
Performance metrics provide insights for S3 Transfer Acceleration. With performance metrics, you can see which buckets have Transfer Acceleration enabled.

**Activity metrics (advanced)**  
If you upgrade your dashboard to the **Advanced tier**, you can enable activity metrics. Activity metrics provide details about how your storage is requested (for example, all requests, Get requests, Put requests), bytes uploaded or downloaded, and errors.

Prefix-level activity metrics can be used to help you determine which prefixes are being used infrequently, so that you can [transition to a more optimal storage class using S3 Lifecycle](lifecycle-transition-general-considerations.md).

**Detailed status code metrics (advanced)**  
If you upgrade your dashboard to the **Advanced tier**, you can enable detailed status code metrics. Detailed status code metrics provide insights for HTTP status codes, such as 403 Forbidden and 503 Service Unavailable, that you can use to troubleshoot access or performance issues. For example, you can look at the **403 Forbidden error count** metric to identify workloads that are accessing buckets without the correct permissions applied.

Prefix-level detailed status code metrics can be used to gain a better understanding of the HTTP status code occurrences by prefix. For example, 503 error count metrics enable you to identify prefixes receiving throttling requests during data ingestion.

**Advanced cost optimization metrics**  
Advanced cost optimization metrics provide detailed insights into your S3 lifecycle management configurations to help you optimize storage costs through automated data transitions and deletions. These metrics track the number of lifecycle rules configured across different lifecycle rule types. You can use these metrics to ensure comprehensive lifecycle rule coverage across your buckets and identify opportunities to implement additional cost optimization strategies through automated data management.

**Advanced data protection metrics**  
Advanced data protection metrics help you protect your data by providing insights into replication rule counts, SSE-KMS encryption usage, and security vulnerabilities such as unsupported signature and TLS requests. (**Note:** Replication rule count metrics aren't available for prefixes.)

This visibility enables you to ensure proper data redundancy, validate encryption compliance, identify security risks from outdated protocols, troubleshoot replication misconfigurations, and maintain robust data protection strategies at the organization, account, and bucket levels.

**Advanced performance metrics**  
Advanced performance metrics reveal how your applications interact with data in S3 and can help identify opportunities to optimize application performance such as inefficient I/O patterns, cross-region access, and unique object access count. Storage Lens advanced performance metrics eliminates the need for expensive custom monitoring tools and enables customers to implement S3 best practices more effectively, particularly benefiting performance sensitive applications such as machine learning training, data analytics, and other high-performance compute workloads.

### Recommendations
<a name="storage_lens_basics_recommendations"></a>

S3 Storage Lens provides automated recommendations to help you optimize your storage. Recommendations are placed contextually alongside relevant metrics in the S3 Storage Lens dashboard. Historical data is not eligible for recommendations because recommendations are relevant to what is happening in the most recent period. Recommendations appear only when they are relevant.

S3 Storage Lens recommendations come in the following forms:
+ **Suggestions**

  Suggestions alert you to trends within your storage and activity that might indicate a storage-cost optimization opportunity or a data protection best practice. You can use the suggested topics in the *Amazon S3 User Guide* and the S3 Storage Lens dashboard to drill down for more details about the specific Regions, buckets, or prefixes.
+ **Call-outs**

  Call-outs are recommendations that alert you to interesting anomalies within your storage and activity over a period that might need further attention or monitoring.
  + **Outlier call-outs**

    S3 Storage Lens provides call-outs for metrics that are outliers, based on your recent 30-day trend. The outlier is calculated by using a standard score, also known as a *z-score*. In this score, the current day's metric is subtracted from the average of the last 30 days for that metric. The current day's metric is then divided by the standard deviation for that metric over the last 30 days. The resulting score is usually between -3 and \$13. This number represents the number of standard deviations that the current day's metric is from the mean. 

    S3 Storage Lens considers metrics with a score >2 or <-2 to be outliers because they are higher or lower than 95 percent of normally distributed data. 
  + **Significant change call-outs**

    The significant change call-out applies to metrics that are expected to change less frequently. Therefore, it's set to a higher sensitivity than the outlier calculation, which is typically in the range of \$1/- 20 percent versus the prior day, week, or month.

    **Addressing call-outs in your storage and activity** – If you receive a significant change call-out, it’s not necessarily a problem. The call-out could be the result of an anticipated change in your storage. For example, you might have recently added a large number of new objects, deleted a large number of objects, or made similar planned changes. 

    If you see a significant change call-out on your dashboard, take note of it and determine whether it can be explained by recent circumstances. If not, use the S3 Storage Lens dashboard to drill down for more details to understand the specific Regions, buckets, or prefixes that are driving the fluctuation.
+ **Reminders**

  Reminders provide insights into how Amazon S3 works. They can help you learn more about ways to use S3 features to reduce storage costs or apply data protection best practices.

### Metrics selection
<a name="storage_lens_basics_metrics_selection"></a>

S3 Storage Lens offers two metrics selections that you can choose for your dashboard and export: *free tier* and *advanced tier*.
+ **Free tier**

  S3 Storage Lens offers free metrics for all dashboards and configurations. Free metrics contain metrics that are relevant to your storage, such as the number of buckets and the objects in your account. Free metrics also include use-case based metrics (for example, cost optimization and data protection metrics) that you can use to investigate whether your storage is configured according to S3 best practices. All free tier metrics are collected daily and can be exported to either an S3 general purpose bucket (CSV or Parquet format) or S3 table bucket (Parquet format only). Data is available for queries for 14 days in the Amazon S3 console. For more information about which metrics are available with free metrics, see the [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).
+ **Advanced tier**

  S3 Storage Lens offers free metrics for all dashboards and configurations with the option to upgrade to advanced metrics. Additional charges apply. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing).

  Advanced tier metrics include all the metrics in free metrics along with additional metrics, such as advanced data protection and cost optimization metrics, activity metrics, and detailed status-code metrics. Advanced tier metrics also provide recommendations to help you optimize your storage. Recommendations are placed contextually alongside relevant metrics in the dashboard.

  Advanced tier includes the following features:
  + **Advanced metrics categories** – Generate additional metrics. For a complete list of advanced metric categories, see [Metrics categories](#storage_lens_basics_metrics_types). For a complete list of metrics, see the [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).
  + **Amazon CloudWatch publishing** – Publishes S3 Storage Lens metrics to CloudWatch to create a unified view of your operational health in CloudWatch [dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html). You can also use CloudWatch API operations and features, such as alarms and triggered actions, metric math, and anomaly detection, to monitor and take action on S3 Storage Lens metrics. For more information, see [Monitor S3 Storage Lens metrics in CloudWatch](storage_lens_view_metrics_cloudwatch.md).
  + **Default metrics report** – The default metrics report in S3 Storage Lens includes free metrics and prefix aggregation capabilities for top prefixes for object storage usage and activity trends across your AWS accounts. With the default metrics report, you can identify cost optimization opportunities at no additional charge beyond standard S3 storage costs.
  + **Expanded prefixes metrics report** – The Storage Lens expanded prefixes metrics report provides comprehensive prefix-level analytics across your entire S3 storage data, expanding coverage to support up to billions of prefixes per bucket.
  + **Additional metrics aggregation**
    + **Prefix aggregation** – Collects metrics at the [prefix](using-prefixes.md) level. This setting specifies the prefixes aggregated as part of the default metrics report, which is displayed in the Storage Lens dashboard. Note that metrics that are applicable at the prefix level are available with **Prefix aggregation**, except for bucket-level settings and rule count metrics. Prefix-level metrics don't apply to the expanded prefixes metrics export and aren't published to CloudWatch.
    + **Storage Lens group aggregation** – Collects metrics at the Storage Lens group level. After you enable the advanced tier metrics and Storage Lens group aggregation, you can specify which Storage Lens groups to include or exclude from your Storage Lens dashboard. At least one Storage Lens group must be specified. Storage Lens groups that are specified must also reside within the designated home Region in the dashboard account. Storage Lens group-level metrics are not published to CloudWatch.

  All advanced metrics are collected daily. Data is available for querying for up to 15 months in the Amazon S3 console. For more information about the storage metrics that are aggregated by S3 Storage Lens, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).

### Prefix delimiter
<a name="storage_lens_basics_prefix_delimiter"></a>

Prefix delimiters determine how Storage Lens counts prefix depth, by separating the hierarchical levels within object keys. You can only specify a single character to indicate each level within your prefixes. If the prefix delimiter is undefined, Amazon S3 uses "`/`" as the default delimiter.

**Note**  
When you're updating your Storage Lens dashboard configuration via API, the *delimiter* and the updated *prefix delimiter* must be defined in the same way, or you'll receive an error. The delimiter only applies to prefix-level metrics that are exported to the default metrics report. The prefix delimiter applies to all prefixes that are exported to the expanded prefixes metrics report.

### S3 Storage Lens and AWS Organizations
<a name="storage_lens_basics_organizations"></a>

AWS Organizations is an AWS service that helps you aggregate all of your AWS accounts under one organization hierarchy. Amazon S3 Storage Lens works with AWS Organizations to provide a single view of object storage and activity across your Amazon S3 storage.

For more information, see [Using Amazon S3 Storage Lens with AWS OrganizationsEnabling trusted access for S3 Storage Lens](storage_lens_with_organizations.md).
+ **Trusted access**

  Using your organization's management account, you must enable trusted access for S3 Storage Lens to aggregate storage metrics and usage data for all member accounts in your organization. You can then create dashboards or exports for your organization by using your management account or by giving delegated administrator access to other accounts in your organization. 

  You can disable trusted access for S3 Storage Lens at any time, which stops S3 Storage Lens from aggregating metrics for your organization.
+ **Delegated administrator**

  You can create dashboards and metrics for S3 Storage Lens for your organization by using your AWS Organizations management account, or by giving *delegated administrator* access to other accounts in your organization. You can deregister delegated administrators at any time. Deregistering a delegated administrator also automatically stops all organization-level dashboards created by that delegated administrator from aggregating new storage metrics.

For more information, see [Amazon S3 Storage Lens and AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/services-that-can-integrate-s3lens.html) in the *AWS Organizations User Guide*.

#### Amazon S3 Storage Lens service-linked roles
<a name="storage_lens_basics_service_linked_role"></a>

Along with AWS Organizations trusted access, Amazon S3 Storage Lens uses AWS Identity and Access Management (IAM) service-linked roles. A service-linked role is a unique type of IAM role that's linked directly to S3 Storage Lens. Service-linked roles are predefined by S3 Storage Lens and include all the permissions that it requires to collect daily storage and activity metrics from member accounts in your organization. 

For more information, see [Using service-linked roles for Amazon S3 Storage Lens](using-service-linked-roles.md).

# Amazon S3 Storage Lens metrics glossary
<a name="storage_lens_metrics_glossary"></a>

The Amazon S3 Storage Lens metrics glossary provides a complete list of free and advanced metrics for S3 Storage Lens.

S3 Storage Lens offers free metrics for all dashboards and configurations, with the option to upgrade to advanced metrics. 
+ **Free metrics** contain metrics that are relevant to your storage usage, such as the number of buckets and the objects in your account. Free metrics also include use-case based metrics, such as cost-optimization and data-protection metrics. All free metrics are collected daily, and data is available for queries for up to 14 days. 
+ **Advanced metrics** include all the metrics in free metrics along with additional metrics, such as advanced performance, advanced data protection, and advanced cost optimization metrics. Advanced metrics also include additional metric categories, such as activity metrics and detailed status-code metrics. Advanced metrics data is available for queries for 15 months. 

  There are additional charges when you use S3 Storage Lens with advanced metrics and recommendations. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). For more information about advanced metrics and recommendations features, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).
**Note**  
For Storage Lens groups, only free tier storage metrics are available. Advanced tier metrics are not available at the Storage Lens group level.

**Metric names**  
The **Metric name** column in the following table provides the name of each S3 Storage Lens in the S3 console. The **CloudWatch and export** column provides the name of each metric in Amazon CloudWatch and the metrics export file that you can configure in your S3 Storage Lens dashboard. 

**Derived metric formulas**  
Derived metrics are not available for the metrics export and the CloudWatch publishing option. However, you can use the metrics formulas shown in the **Derived metrics formula** column to compute them.

**Interpreting the Amazon S3 Storage Lens prefix symbols for metrics unit multiples (K, M, G, and so on)**  
S3 Storage Lens metrics unit multiples are written with prefix symbols. These prefix symbols match the International System of Units (SI) symbols that are standardized by the International Bureau of Weights and Measures (BIPM). These symbols are also used in the Unified Code for Units of Measure (UCUM). For more information, see [List of SI prefix symbols](https://www.bipm.org/en/measurement-units/si-prefixes). 

**Note**  
The unit of measurement for S3 storage bytes is in binary gigabytes (GB), where 1 GB is 230 bytes, 1 TB is 240 bytes, and 1 PB is 250 bytes. This unit of measurement is also known as a gibibyte (GiB), as defined by the International Electrotechnical Commission (IEC).
When an object reaches the end of its lifetime based on its lifecycle configuration, Amazon S3 queues the object for removal and removes it asynchronously. Therefore, there might be a delay between the expiration date and the date when Amazon S3 removes an object. S3 Storage Lens doesn't include metrics for objects that have expired but haven't been removed. For more information about expiration actions in S3 Lifecycle, see [Expiring objects](lifecycle-expire-general-considerations.md).
Amazon S3 stores metadata (object key, timestamps, etc.) for every object, which requires minimum storage even for 0KB data files. This is why 0KB objects appear in the (0KB-128KB] size range in S3 Storage Lens.
S3 Storage Lens provides best-effort tracking of cross-region data transfers, primarily focusing on requests from customer-managed resources like EC2 instances. Requests made through AWS PrivateLink or certain in-Region requests are unclassified.

 The following table shows the S3 Storage Lens metrics glossary. 


| Metric name | CloudWatch and export | Description | Tier1 | Category2 | Derived | Derived metric formula | Storage Lens groups | 
| --- | --- | --- | --- | --- | --- | --- | --- | 
| Total storage | StorageBytes | Total storage, inclusive of incomplete multipart uploads, object metadata, and delete markers | Free | Summary | N | - | Y | 
| Object count | ObjectCount | Total object count | Free | Summary | N | - | Y | 
| Average object size | - | Average object size | Free | Summary | Y | sum(StorageBytes)/sum(ObjectCount) | Y | 
| Active buckets | - | Number of buckets with storage > 0 bytes | Free | Summary | Y | - | Y | 
| Buckets | - | Number of buckets | Free | Summary | Y | - | Y | 
| Accounts | - | Number of accounts whose storage is in scope | Free | Summary | Y | - | Y | 
| Current version bytes | CurrentVersionStorageBytes | Number of bytes that are a current version of an object | Free | Cost optimization | N | - | Y | 
| % current version bytes | - | Percentage of bytes in scope that are current versions of objects | Free | Cost optimization | Y | sum(CurrentVersionStorageBytes)/sum(StorageBytes) | Y | 
| Current version object count | CurrentVersionObjectCount | Number of current version objects | Free | Data protection | N | - | Y | 
| % current version objects | - | Percentage of objects in scope that are a current version | Free | Cost optimization | Y | sum(CurrentVersionObjectCount)/sum(ObjectCount) | Y | 
| Noncurrent version bytes | NonCurrentVersionStorageBytes | Number of noncurrent version bytes | Free | Cost optimization | N | - | Y | 
| % noncurrent version bytes | - | Percentage of bytes in scope that are noncurrent versions | Free | Cost optimization | Y | sum(NonCurrentVersionStorageBytes)/sum(StorageBytes) | Y | 
| Noncurrent version object count | NonCurrentVersionObjectCount | Number of the noncurrent object versions | Free | Cost optimization | N | - | Y | 
| % noncurrent version objects | - | Percentage of objects in scope that are a noncurrent version | Free | Cost optimization | Y | sum(NonCurrentVersionObjectCount)/sum(ObjectCount) | Y | 
| Delete marker bytes | DeleteMarkerStorageBytes | Number of bytes in scope that are delete markers | Free | Cost optimization | N | - | Y | 
| % delete marker bytes | - | Percentage of bytes in scope that are delete markers | Free | Cost optimization | Y | sum(DeleteMarkerStorageBytes)/sum(StorageBytes) | Y | 
| Delete marker object count | DeleteMarkerObjectCount | Number of objects with a delete marker | Free | Cost optimization | N | - | Y | 
| % delete marker objects | - | Percentage of objects in scope with a delete marker | Free | Cost optimization | Y | sum(DeleteMarkerObjectCount)/sum(ObjectCount) | Y | 
| Incomplete multipart upload bytes | IncompleteMultipartUploadStorageBytes | Total bytes in scope for incomplete multipart uploads | Free | Cost optimization | N | - | Y | 
| % incomplete multipart upload bytes | - | Percentage of bytes in scope that are the result of incomplete multipart uploads | Free | Cost optimization | Y | sum(IncompleteMultipartUploadStorageBytes)/sum(StorageBytes) | Y | 
| Incomplete multipart upload object count | IncompleteMultipartUploadObjectCount | Number of objects in scope that are incomplete multipart uploads | Free | Cost optimization | N | - | Y | 
| % incomplete multipart upload objects | - | Percentage of objects in scope that are incomplete multipart uploads | Free | Cost optimization | Y | sum(IncompleteMultipartUploadObjectCount)/sum(ObjectCount) | Y | 
| Incomplete multipart upload storage bytes greater than 7 days old | IncompleteMPUStorageBytesOlderThan7Days | Total bytes in scope for incomplete multipart uploads that are more than 7 days old | Free | Cost optimization | N | - | Y | 
| % incomplete multipart upload storage bytes greater than 7 days old | - | Percentage of bytes for incomplete multipart uploads that are more than 7 days old | Free | Cost optimization | Y | sum(IncompleteMPUStorageBytesOlderThan7Days)/sum(StorageBytes) | Y | 
| Incomplete multipart upload object count greater than 7 days old | IncompleteMPUObjectCountOlderThan7Days | Number of objects that are incomplete multipart uploads more than 7 days old | Free | Cost optimization | N | - | Y | 
| % incomplete multipart upload object count greater than 7 days old | - | Percentage of objects that are incomplete multipart uploads more than 7 days old | Free | Cost optimization | Y | sum(IncompleteMPUObjectCountOlderThan7Days)/sum(ObjectCount) | Y | 
| Transition lifecycle rule count | TransitionLifecycleRuleCount | Number of lifecycle rules to transition objects to another storage class | Advanced | Cost optimization | N | - | N | 
| Average transition lifecycle rules per bucket | - | Average number of lifecycle rules to transition objects to another storage class | Advanced | Cost optimization | Y | sum(TransitionLifecycleRuleCount)/sum(DistinctNumberOfBuckets) | N | 
| Expiration lifecycle rule count | ExpirationLifecycleRuleCount | Number of lifecycle rules to expire objects | Advanced | Cost optimization | N | - | N | 
| Average expiration lifecycle rules per bucket | - | Average number of lifecycle rules to expire objects | Advanced | Cost optimization | Y | sum(ExpirationLifecycleRuleCount)/sum(DistinctNumberOfBuckets) | N | 
| Noncurrent version transition lifecycle rule count | NoncurrentVersionTransitionLifecycleRuleCount | Number of lifecycle rules to transition noncurrent object versions to another storage class | Advanced | Cost optimization | N |  | N | 
| Average noncurrent version transition lifecycle rules per bucket | - | Average number of lifecycle rules to transition noncurrent object versions to another storage class | Advanced | Cost optimization | Y | sum(NoncurrentVersionTransitionLifecycleRuleCount)/sum(DistinctNumberOfBuckets)  | N | 
| Noncurrent version expiration lifecycle rule count | NoncurrentVersionExpirationLifecycleRuleCount | Number of lifecycle rules to expire noncurrent object versions | Advanced | Cost optimization | N | - | N | 
| Average noncurrent version expiration lifecycle rules per bucket | - | Average number of lifecycle rules to expire noncurrent object versions | Advanced | Cost optimization | Y | sum(NoncurrentVersionExpirationLifecycleRuleCount)/sum(DistinctNumberOfBuckets)  | N | 
| Abort incomplete multipart upload lifecycle rule count | AbortIncompleteMPULifecycleRuleCount | Number of lifecycle rules to delete incomplete multipart uploads | Advanced | Cost optimization | N | - | N | 
| Average abort incomplete multipart upload lifecycle rules per bucket | - | Average number of lifecycle rules to delete incomplete multipart uploads | Advanced | Cost optimization | Y | sum(AbortIncompleteMPULifecycleRuleCount)/sum(DistinctNumberOfBuckets) | N | 
| Expired object delete marker lifecycle rule count | ExpiredObjectDeleteMarkerLifecycleRuleCount | Number of lifecycle rules to remove expired object delete markers | Advanced | Cost optimization | N | - | N | 
| Average expired object delete marker lifecycle rules per bucket | - | Average number of lifecycle rules to remove expired object delete markers | Advanced | Cost optimization | Y | sum(ExpiredObjectDeleteMarkerLifecycleRuleCount)/sum(DistinctNumberOfBuckets)  | N | 
| Total lifecycle rule count | TotalLifecycleRuleCount | Number of lifecycle rules | Advanced | Cost optimization | N | - | N | 
| Average lifecycle rule count per bucket | - | Average number of lifecycle rules | Advanced | Cost optimization | Y | sum(TotalLifecycleRuleCount)/sum(DistinctNumberOfBuckets) | N | 
| Encrypted bytes | EncryptedStorageBytes | Number of encrypted bytes | Free | Data protection | N | - | Y | 
| % encrypted bytes | - | Percentage of total bytes that are encrypted | Free | Data protection | Y | sum(EncryptedObjectCount)/sum(StorageBytes) | Y | 
| Encrypted object count | EncryptedObjectCount | Number of objects that are encrypted | Free | Data protection | N | - | Y | 
| % encrypted objects | - | Percentage of objects that are encrypted | Free | Data protection | Y | sum(EncryptedStorageBytes)/sum(ObjectCount) | Y | 
| Unencrypted bytes | UnencryptedStorageBytes | Number of bytes that are unencrypted | Free | Data protection | Y | sum(StorageBytes) - sum(EncryptedStorageBytes) | Y | 
| % unencrypted bytes | - | Percentage of bytes that are unencrypted | Free | Data protection | Y | sum(UnencryptedStorageBytes)/sum(StorageBytes) | Y | 
| Unencrypted object count | UnencryptedObjectCount | Number of objects that are unencrypted | Free | Data protection | Y | sum(ObjectCount) - sum(EncryptedObjectCount) | Y | 
| % unencrypted objects | - | Percentage of unencrypted objects | Free | Data protection | Y | sum(UnencryptedObjectCount)/sum(ObjectCount) | Y | 
| Replicated storage bytes source | ReplicatedStorageBytesSource | Number of bytes that are replicated from the source bucket | Free | Data protection | N | - | Y | 
| % replicated bytes source | - | Percentage of total bytes that are replicated from the source bucket | Free | Data protection | Y | sum(ReplicatedStorageBytesSource)/sum(StorageBytes) | Y | 
| Replicated object count source | ReplicatedObjectCountSource | Number of replicated objects from the source bucket | Free | Data protection | N | - | Y | 
| % replicated objects source | - | Percentage of total objects that are replicated from the source bucket | Free | Data protection | Y | sum(ReplicatedStorageObjectCount)/sum(ObjectCount) | Y | 
| Replicated storage bytes destination | ReplicatedStorageBytes | Number of bytes that are replicated to the destination bucket | Free | Data protection | N | - | N | 
| % replicated bytes destination | - | Percentage of total bytes that are replicated to the destination bucket | Free | Data protection | Y | sum(ReplicatedStorageBytes)/sum(StorageBytes) | Y | 
| Replicated object count destination | ReplicatedObjectCount | Number of objects that are replicated to the destination bucket | Free | Data protection | N | - | Y | 
| % replicated objects destination | - | Percentage of total objects that are replicated to the destination bucket | Free | Data protection | Y | sum(ReplicatedObjectCount)/sum(ObjectCount) | Y | 
| Object Lock bytes | ObjectLockEnabledStorageBytes | Number of Object Lock enabled storage bytes | Free | Data protection | N | sum(UnencryptedStorageBytes)/sum(ObjectLockEnabledStorageCount)-sum(ObjectLockEnabledStorageBytes) | Y | 
| % Object Lock bytes | - | Percentage of Object Lock enabled storage bytes | Free | Data protection | Y | sum(ObjectLockEnabledStorageBytes)/sum(StorageBytes) | Y | 
| Object Lock object count | ObjectLockEnabledObjectCount | Number of Object Lock objects | Free | Data protection | N | - | Y | 
| % Object Lock objects | - | Percentage of total objects that have Object Lock enabled | Free | Data protection | Y |  sum(ObjectLockEnabledObjectCount)/sum(ObjectCount) | Y | 
| Versioning-enabled bucket count | VersioningEnabledBucketCount | Number of buckets that have S3 Versioning enabled | Free | Data protection | N | - | N | 
| % versioning-enabled buckets | - | Percentage of buckets that have S3 Versioning enabled | Free | Data protection | Y | sum(VersioningEnabledBucketCount)/sum(DistinctNumberOfBuckets) | N | 
| MFA delete-enabled bucket count | MFADeleteEnabledBucketCount | Number of buckets that have MFA (multi-factor authentication) delete enabled | Free | Data protection | N | - | N | 
| % MFA delete-enabled buckets | - | Percentage of buckets that have MFA (multi-factor authentication) delete enabled | Free | Data protection | Y | sum(MFADeleteEnabledBucketCount)/sum(DistinctNumberOfBuckets) | N | 
| SSE-KMS enabled bucket count | SSEKMSEnabledBucketCount | Number of buckets that use server-side encryption with AWS Key Management Service keys (SSE-KMS) for default bucket encryption | Free | Data protection | N | - | N | 
| % SSE-KMS enabled buckets | - | Percentage of buckets that SSE-KMS for default bucket encryption | Free | Data protection | Y | sum(SSEKMSEnabledBucketCount)/sum(DistinctNumberOfBuckets) | N | 
| All unsupported signature requests | AllUnsupportedSignatureRequests | Total number of requests that use unsupported AWS signature versions | Advanced | Data protection | N | - | N | 
| % all unsupported signature requests | - | Percentage of requests that use unsupported AWS signature versions | Advanced | Data protection | Y | sum(AllUnsupportedSignatureRequests)/sum(AllRequests) | N | 
| All unsupported TLS requests | AllUnsupportedTLSRequests | Total number of requests that use unsupported Transport Layer Security (TLS) versions | Advanced | Data protection | N | - | N | 
| % all unsupported TLS requests | - | Percentage of requests that use unsupported TLS versions | Advanced | Data protection | Y | sum(AllUnsupportedTLSRequests)/sum(AllRequests) | N | 
| All SSE-KMS requests | AllSSEKMSRequests | Total number of requests that specify SSE-KMS | Advanced | Data protection | N | - | N | 
| % all SSE-KMS requests | - | Percentage of requests that specify SSE-KMS | Advanced | Data protection | Y | sum(AllSSEKMSRequests)/sum(AllRequests) | N | 
| Same-Region Replication rule count | SameRegionReplicationRuleCount | Number of replication rules for Same-Region Replication (SRR) | Advanced | Data protection | N | - | N | 
| Average Same-Region Replication rules per bucket | - | Average number of replication rules for SRR | Advanced | Data protection | Y | sum(SameRegionReplicationRuleCount)/sum(DistinctNumberOfBuckets) | N | 
| Cross-Region Replication rule count | CrossRegionReplicationRuleCount | Number of replication rules for Cross-Region Replication (CRR) | Advanced | Data protection | N | - | N | 
| Average Cross-Region Replication rules per bucket | - | Average number of replication rules for CRR | Advanced | Data protection | Y | sum(CrossRegionReplicationRuleCount)/sum(DistinctNumberOfBuckets) | N | 
| Same-account replication rule count | SameAccountReplicationRuleCount | Number of replication rules for replication within the same account | Advanced | Data protection | N | - | N | 
| Average same-account replication rules per bucket | - | Average number of replication rules for replication within the same account | Advanced | Data protection | Y | sum(SameAccountReplicationRuleCount)/sum(DistinctNumberOfBuckets) | N | 
| Cross-account replication rule count | CrossAccountReplicationRuleCount | Number of replication rules for cross-account replication | Advanced | Data protection | N | - | N | 
| Average cross-account replication rules per bucket | - | Average number of replication rules for cross-account replication | Advanced | Data protection | Y | sum(CrossAccountReplicationRuleCount)/sum(DistinctNumberOfBuckets) | N | 
| Invalid destination replication rule count | InvalidDestinationReplicationRuleCount | Number of replication rules with a replication destination that's not valid | Advanced | Data protection | N | - | N | 
| Average invalid destination replication rules per bucket | - | Average number of replication rules with a replication destination that's not valid | Advanced | Data protection | Y | sum(InvalidReplicationRuleCount)/sum(DistinctNumberOfBuckets) | N | 
| Total replication rule count | - | Total replication rule count | Advanced | Data protection | Y | - | N | 
| Average replication rule count per bucket | - | Average total replication rule count | Advanced | Data protection | Y | sum(all replication rule count metrics)/sum(DistinctNumberOfBuckets) | N | 
| Object Ownership bucket owner enforced bucket count | ObjectOwnershipBucketOwnerEnforcedBucketCount | Number of buckets that have access control lists (ACLs) disabled by using the bucket owner enforced setting for Object Ownership | Free | Access management | N | - | N | 
| % Object Ownership bucket owner enforced buckets | - | Percentage of buckets that have ACLs disabled by using the bucket owner enforced setting for Object Ownership | Free | Access management | Y | sum(ObjectOwnershipBucketOwnerEnforcedBucketCount)/sum(DistinctNumberOfBuckets)  | N | 
| Object Ownership bucket owner preferred bucket count | ObjectOwnershipBucketOwnerPreferredBucketCount | Number of buckets that use the bucket owner preferred setting for Object Ownership | Free | Access management | N | - | N | 
| % Object Ownership bucket owner preferred buckets | - | Percentage of buckets that use the bucket owner preferred setting for Object Ownership | Free | Access management | Y | sum(ObjectOwnershipBucketOwnerPreferredBucketCount)/sum(DistinctNumberOfBuckets)  | N | 
| Object Ownership object writer bucket count | ObjectOwnershipObjectWriterBucketCount | Number of buckets that use the object writer setting for Object Ownership | Free | Access management | N | - | N | 
| % Object Ownership object writer buckets | - | Percentage of buckets that use the object writer setting for Object Ownership | Free | Access management | Y | sum(ObjectOwnershipObjectWriterBucketCount)/sum(DistinctNumberOfBuckets) | N | 
| Transfer Acceleration enabled bucket count | TransferAccelerationEnabledBucketCount | Number of buckets that have Transfer Acceleration enabled | Free | Performance | N | - | N | 
| % Transfer Acceleration enabled buckets | - | Percentage of buckets that have Transfer Acceleration enabled | Free | Performance | Y | sum(TransferAccelerationEnabledBucketCount)/sum(DistinctNumberOfBuckets) | N | 
| Event Notification enabled bucket count | EventNotificationEnabledBucketCount | Number of buckets that have Event Notifications enabled | Free | Events | N |  | N | 
| % Event Notification enabled buckets | - | Percentage of buckets that have Event Notifications enabled | Free | Events | Y | sum(EventNotificationEnabledBucketCount)/sum(DistinctNumberOfBuckets) | N | 
| All requests | AllRequests |  Total number of requests made   | Advanced | Activity | N | - | N | 
| Get requests | GetRequests |  Total number of `GET` requests made  | Advanced | Activity | N | - | N | 
| Put requests | PutRequests |  Total number of `PUT` requests made  | Advanced | Activity | N | - | N | 
| Head requests | HeadRequests | Number of HEAD requests made | Advanced | Activity | N | - | N | 
| Delete requests | DeleteRequests | Number of DELETE requests made | Advanced | Activity | N | - | N | 
| List requests | ListRequests | Number of LIST requests made | Advanced | Activity | N | - | N | 
| Post requests | PostRequests | Number of POST requests made | Advanced | Activity | N | - | N | 
| Select requests | SelectRequests | Number of S3 Select requests | Advanced | Activity | N | - | N | 
| Select scanned bytes | SelectScannedBytes | Number of S3 Select bytes scanned | Advanced | Activity | N | - | N | 
| Select returned bytes | SelectReturnedBytes | Number of S3 Select bytes returned | Advanced | Activity | N | - | N | 
| Bytes downloaded | BytesDownloaded | Number of bytes downloaded | Advanced | Activity | N | - | N | 
| % retrieval rate | - | Percentage of bytes downloaded | Advanced | Activity | Y | sum(BytesDownloaded)/sum(StorageBytes) | N | 
| Bytes uploaded | BytesUploaded | Number of bytes uploaded | Advanced | Activity | N | - | N | 
| % ingest ratio | - | Percentage of bytes uploaded | Advanced | Activity | Y | sum(BytesUploaded)/sum(StorageBytes) | N | 
| 4xx errors | 4xxErrors | Number of HTTP 4xx status codes | Advanced | Activity | N | - | N | 
| 5xx errors | 5xxErrors | Number of HTTP 5xx status codes | Advanced | Activity | N | - | N | 
| Total errors | - | The sum of all 4xx and 5xx errors | Advanced | Activity | Y | sum(4xxErrors) \$1 sum(5xxErrors) | N | 
| % error rate | - |  Total number of 4xx and 5xx errors as a percentage of total requests  | Advanced | Activity | Y | sum(TotalErrors)/sum(TotalRequests) | N | 
| 200 OK status count | 200OKStatusCount | Number of 200 OK status codes | Advanced | Detailed status code | N | - | N | 
| % 200 OK status | - |  Total number of 200 OK status codes as a percentage of total requests  | Advanced | Detailed status code | Y | sum(200OKStatusCount)/sum(AllRequests) | N | 
| 206 Partial Content status count | 206PartialContentStatusCount | Number of 206 Partial Content status codes | Advanced | Detailed status code | N | - | N | 
| % 206 Partial Content status | - | Number of 206 Partial Content status codes as a percentage of total requests | Advanced | Detailed status code | Y | sum(206PartialContentStatusCount)/sum(AllRequests) | N | 
| 400 Bad Request error count |  400BadRequestErrorCount  | Number of 400 Bad Request status codes | Advanced | Detailed status code | N | - | N | 
| % 400 Bad Request errors | - | Number of 400 Bad Request status codes as a percentage of total requests | Advanced | Detailed status code | Y | sum(400BadRequestErrorCount)/sum(AllRequests) | N | 
| 403 Forbidden error count |  403ForbiddenErrorCount  | Number of 403 Forbidden status codes | Advanced | Detailed status code | N | - | N | 
| % 403 Forbidden errors | - | Number of 403 Forbidden status codes as a percentage of total requests | Advanced | Detailed status code | Y | sum(403ForbiddenErrorCount)/sum(AllRequests) | N | 
| 404 Not Found error count | 404NotFoundErrorCount | Number of 404 Not Found status codes | Advanced | Detailed status code | N | - | N | 
| % 404 Not Found errors | - | Number of 404 Not Found status codes as a percentage of total requests | Advanced | Detailed status code | Y | sum(404NotFoundErrorCount)/sum(AllRequests) | N | 
| 500 Internal Server Error count | 500InternalServerErrorCount | Number of 500 Internal Server Error status codes | Advanced | Detailed status code | N | - | N | 
| % 500 Internal Server Errors | - | Number of 500 Internal Server Error status codes as a percentage of total requests | Advanced | Detailed status code | Y | sum(500InternalServerErrorCount)/sum(AllRequests) | N | 
| 503 Service Unavailable error count | 503ServiceUnavailableErrorCount | Number of 503 Service Unavailable status codes | Advanced | Detailed status code | N | - | N | 
| % 503 Service Unavailable errors | - | Number of 503 Service Unavailable status codes as a percentage of total requests | Advanced | Detailed status code | Y | sum(503ServiceUnavailableErrorCount)/sum(AllRequests) | N | 

1 All free tier storage metrics are available at the Storage Lens group level. Advanced tier metrics are not available at the Storage Lens group level.

2 Rule count metrics and bucket settings metrics aren't available at the prefix level.

The following table shows the performance metrics available in S3 Storage Lens and their availability in CloudWatch:


| **Metric name** | **CloudWatch and export** | **Description** | **Tier** | **Category** | **Derived** | **Derived metric formula** | **Storage Lens groups** | 
| --- | --- | --- | --- | --- | --- | --- | --- | 
| Average First Byte Latency | AverageFirstByteLatency | Average per-request time between when an Amazon S3 bucket receives a complete request and when it starts returning the response, measured over the past 24 hours | Advanced | Performance | N | - | N | 
| Average Total Request Latency | AverageTotalRequestLatency | Average elapsed per-request time between the first byte received and the last byte sent to an Amazon S3 bucket, measured over the past 24 hours | Advanced | Performance | N | - | N | 
| Read 0KB request count | Read0KBRequestCount\$1 | Number of GetObject requests with data sizes of 0KB, including both range-based requests and whole object requests | Advanced | Performance | N | - | N | 
| Read 0KB to 128KB request count | Read0KBTo128KBRequestCount\$1 | Number of GetObject requests with data sizes greater than 0KB and up to 128KB, including both range-based requests and whole object requests | Advanced | Performance | N | - | N | 
| Read 128KB to 256KB request count | Read128KBTo256KBRequestCount\$1 | Number of GetObject requests with data sizes greater than 128KB and up to 256KB, including both range-based requests and whole object requests | Advanced | Performance | N | - | N | 
| Read 256KB to 512KB request count | Read256KBTo512KBRequestCount\$1 | Number of GetObject requests with data sizes greater than 256KB and up to 512KB, including both range-based requests and whole object requests | Advanced | Performance | N | - | N | 
| Read 512KB to 1MB request count | Read512KBTo1MBRequestCount\$1 | Number of GetObject requests with data sizes greater than 512KB and up to 1MB, including both range-based requests and whole object requests | Advanced | Performance | N | - | N | 
| Read 1MB to 2MB request count | Read1MBTo2MBRequestCount\$1 | Number of GetObject requests with data sizes greater than 1MB and up to 2MB, including both range-based requests and whole object requests | Advanced | Performance | N | - | N | 
| Read 2MB to 4MB request count | Read2MBTo4MBRequestCount\$1 | Number of GetObject requests with data sizes greater than 2MB and up to 4MB, including both range-based requests and whole object requests | Advanced | Performance | N | - | N | 
| Read 4MB to 8MB request count | Read4MBTo8MBRequestCount\$1 | Number of GetObject requests with data sizes greater than 4MB and up to 8MB, including both range-based requests and whole object requests | Advanced | Performance | N | - | N | 
| Read 8MB to 16MB request count | Read8MBTo16MBRequestCount\$1 | Number of GetObject requests with data sizes greater than 8MB and up to 16MB, including both range-based requests and whole object requests | Advanced | Performance | N | - | N | 
| Read 16MB to 32MB request count | Read16MBTo32MBRequestCount\$1 | Number of GetObject requests with data sizes greater than 16MB and up to 32MB, including both range-based requests and whole object requests | Advanced | Performance | N | - | N | 
| Read 32MB to 64MB request count | Read32MBTo64MBRequestCount\$1 | Number of GetObject requests with data sizes greater than 32MB and up to 64MB, including both range-based requests and whole object requests | Advanced | Performance | N | - | N | 
| Read 64MB to 128MB request count | Read64MBTo128MBRequestCount\$1 | Number of GetObject requests with data sizes greater than 64MB and up to 128MB, including both range-based requests and whole object requests | Advanced | Performance | N | - | N | 
| Read 128MB to 256MB request count | Read128MBTo256MBRequestCount\$1 | Number of GetObject requests with data sizes greater than 128MB and up to 256MB, including both range-based requests and whole object requests | Advanced | Performance | N | - | N | 
| Read 256MB to 512MB request count | Read256MBTo512MBRequestCount\$1 | Number of GetObject requests with data sizes greater than 256MB and up to 512MB, including both range-based requests and whole object requests | Advanced | Performance | N | - | N | 
| Read 512MB to 1GB request count | Read512MBTo1GBRequestCount\$1 | Number of GetObject requests with data sizes greater than 512MB and up to 1GB, including both range-based requests and whole object requests | Advanced | Performance | N | - | N | 
| Read 1GB to 2GB request count | Read1GBTo2GBRequestCount\$1 | Number of GetObject requests with data sizes greater than 1GB and up to 2GB, including both range-based requests and whole object requests | Advanced | Performance | N | - | N | 
| Read 2GB to 4GB request count | Read2GBTo4GBRequestCount\$1 | Number of GetObject requests with data sizes greater than 2GB and up to 4GB, including both range-based requests and whole object requests | Advanced | Performance | N | - | N | 
| Read 4GB\$1 request count | ReadLargerThan4GBRequestCount\$1 | Number of GetObject requests with data sizes greater than 4GB, including both range-based requests and whole object requests | Advanced | Performance | N | - | N | 
| Write 0KB request count | Write0KBRequestCount\$1 | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes of 0KB | Advanced | Performance | N | - | N | 
| Write 0KB to 128KB request count | Write0KBTo128KBRequestCount\$1 | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 0KB and up to 128KB | Advanced | Performance | N | - | N | 
| Write 128KB to 256KB request count | Write128KBTo256KBRequestCount\$1 | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 128KB and up to 256KB | Advanced | Performance | N | - | N | 
| Write 256KB to 512KB request count | Write256KBTo512KBRequestCount\$1 | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 256KB and up to 512KB | Advanced | Performance | N | - | N | 
| Write 512KB to 1MB request count | Write512KBTo1MBRequestCount\$1 | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 512KB and up to 1MB | Advanced | Performance | N | - | N | 
| Write 1MB to 2MB request count | Write1MBTo2MBRequestCount\$1 | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 1MB and up to 2MB | Advanced | Performance | N | - | N | 
| Write 2MB to 4MB request count | Write2MBTo4MBRequestCount\$1 | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 2MB and up to 4MB | Advanced | Performance | N | - | N | 
| Write 4MB to 8MB request count | Write4MBTo8MBRequestCount\$1 | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 4MB and up to 8MB | Advanced | Performance | N | - | N | 
| Write 8MB to 16MB request count | Write8MBTo16MBRequestCount\$1 | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 8MB and up to 16MB | Advanced | Performance | N | - | N | 
| Write 16MB to 32MB request count | Write16MBTo32MBRequestCount\$1 | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 16MB and up to 32MB | Advanced | Performance | N | - | N | 
| Write 32MB to 64MB request count | Write32MBTo64MBRequestCount\$1 | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 32MB and up to 64MB | Advanced | Performance | N | - | N | 
| Write 64MB to 128MB request count | Write64MBTo128MBRequestCount\$1 | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 64MB and up to 128MB | Advanced | Performance | N | - | N | 
| Write 128MB to 256MB request count | Write128MBTo256MBRequestCount\$1 | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 128MB and up to 256MB | Advanced | Performance | N | - | N | 
| Write 256MB to 512MB request count | Write256MBTo512MBRequestCount\$1 | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 256MB and up to 512MB | Advanced | Performance | N | - | N | 
| Write 512MB to 1GB request count | Write512MBTo1GBRequestCount\$1 | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 512MB and up to 1GB | Advanced | Performance | N | - | N | 
| Write 1GB to 2GB request count | Write1GBTo2GBRequestCount\$1 | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 1GB and up to 2GB | Advanced | Performance | N | - | N | 
| Write 2GB to 4GB request count | Write2GBTo4GBRequestCount\$1 | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 2GB and up to 4GB | Advanced | Performance | N | - | N | 
| Write 4GB\$1 request count | WriteLargerThan4GBRequestCount\$1 | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 4GB | Advanced | Performance | N | - | N | 
| Object 0KB count | Object0KBCount | Number of objects with sizes equal to 0KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | Advanced | Performance | N | - | N | 
| Object 0KB to 128KB count | Object0KBTo128KBCount | Number of objects with sizes greater than 0KB and less than equal to 128KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | Advanced | Performance | N | - | N | 
| Object 128KB to 256KB count | Object128KBTo256KBCount | Number of objects with sizes greater than 128KB and less than equal to 256KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | Advanced | Performance | N | - | N | 
| Object 256KB to 512KB count | Object256KBTo512KBCount | Number of objects with sizes greater than 256KB and less than equal to 512KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | Advanced | Performance | N | - | N | 
| Object 512KB to 1MB count | Object512KBTo1MBCount | Number of objects with sizes greater than 512KB and less than equal to 1MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | Advanced | Performance | N | - | N | 
| Object 1MB to 2MB count | Object1MBTo2MBCount | Number of objects with sizes greater than 1MB and less than equal to 2MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | Advanced | Performance | N | - | N | 
| Object 2MB to 4MB count | Object2MBTo4MBCount | Number of objects with sizes greater than 2MB and less than equal to 4MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | Advanced | Performance | N | - | N | 
| Object 4MB to 8MB count | Object4MBTo8MBCount | Number of objects with sizes greater than 4MB and less than equal to 8MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | Advanced | Performance | N | - | N | 
| Object 8MB to 16MB count | Object8MBTo16MBCount | Number of objects with sizes greater than 8MB and less than equal to 16MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | Advanced | Performance | N | - | N | 
| Object 16MB to 32MB count | Object16MBTo32MBCount | Number of objects with sizes greater than 16MB and less than equal to 32MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | Advanced | Performance | N | - | N | 
| Object 32MB to 64MB count | Object32MBTo64MBCount | Number of objects with sizes greater than 32MB and less than equal to 64MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | Advanced | Performance | N | - | N | 
| Object 64MB to 128MB count | Object64MBTo128MBCount | Number of objects with sizes greater than 64MB and less than equal to 128MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | Advanced | Performance | N | - | N | 
| Object 128MB to 256MB count | Object128MBTo256MBCount | Number of objects with sizes greater than 128MB and less than equal to 256MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | Advanced | Performance | N | - | N | 
| Object 256MB to 512MB count | Object256MBTo512MBCount | Number of objects with sizes greater than 256MB and less than equal to 512MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | Advanced | Performance | N | - | N | 
| Object 512MB to 1GB count | Object512MBTo1GBCount | Number of objects with sizes greater than 512MB and less than equal to 1GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | Advanced | Performance | N | - | N | 
| Object 1GB to 2GB count | Object1GBTo2GBCount | Number of objects with sizes greater than 1GB and less than equal to 2GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | Advanced | Performance | N | - | N | 
| Object 2GB to 4GB count | Object2GBTo4GBCount | Number of objects with sizes greater than 2GB and less than equal to 4GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | Advanced | Performance | N | - | N | 
| Object 4GB\$1 count | ObjectLargerThan4GBCount | Number of objects with sizes greater than 4GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | Advanced | Performance | N | - | N | 
| Concurrent Put 503 error count | ConcurrentPut503ErrorCount | Number of 503 errors that are generated due to concurrent writes to the same object | Advanced | Performance | N | - | N | 
| % Concurrent Put 503 errors | - | Percentage of 503 errors that are generated due to concurrent writes to the same object | Advanced | Performance | Y | 100 \$1 ConcurrentPut503Errors / AllRequests | N | 
| Cross-Region request count | CrossRegionRequestCount | Number of requests that originate from a client in different Region than bucket's home Region | Advanced | Performance | N | - | N | 
| % Cross-Region requests | - | Percentage of requests that originate from a client in different Region than bucket's home Region | Advanced | Performance | Y | 100 \$1 CrossRegionRequestCount / AllRequests | N | 
| Cross-Region transferred bytes | CrossRegionTransferredBytes | Number of bytes that are transferred from calls in different Region than bucket's home Region | Advanced | Performance | N | - | N | 
| % Cross-Region transferred bytes | - | Percentage of bytes transferred that originate from calls in different Region that bucket's home Region | Advanced | Performance | Y | 100 \$1 CrossRegionBytes / (BytesDownloaded \$1 BytesUploaded) | N | 
| Cross-Region without replication request count | CrossRegionWithoutReplicationRequestCount | Number of requests that originate from a client in different Region than bucket's home Region, excluding cross-region replication requests | Advanced | Performance | N | - | N | 
| % Cross-Region without replication requests | - | Percentage of requests that originate from a client in different Region that bucket's home Region, excluding cross-region replication requests | Advanced | Performance | Y | 100 \$1 CrossRegionRequestWithoutReplicationCount / AllRequests | N | 
| Cross-Region without replication transferred bytes | CrossRegionWithoutReplicationTransferredBytes | Number of bytes that are transferred from calls in different Region than bucket's home Region, excluding cross-region replication bytes | Advanced | Performance | N | - | N | 
| % Cross-Region without replication transferred bytes | - | Number of requests that originate from a Region other than the bucket's home Region, excluding cross-region replication requests | Advanced | Performance | Y | 100 \$1 CrossRegionBytesWithoutReplication / (BytesDownloaded \$1 BytesUploaded) | N | 
| In-Region request count | InRegionRequestCount | Number of requests that originate from a client in same Region as bucket's home Region | Advanced | Performance | N | - | N | 
| % In-Region requests | - | Percentage of requests that originate from a client in same Region as bucket's home Region | Advanced | Performance | Y | 100 \$1 InRegionRequestCount / AllRequests | N | 
| In-Region transferred bytes | InRegionTransferredBytes | Number of bytes that are transferred from calls from same Region as bucket's home Region | Advanced | Performance | N | - | N | 
| % In-Region transferred bytes | - | Percentage of bytes transferred that originate from calls from same Region as bucket's home Region | Advanced | Performance | Y | 100 \$1 InRegionBytes / (BytesDownloaded \$1 BytesUploaded) | N | 
| Unique objects accessed count daily | UniqueObjectsAccessedDailyCount | Number of objects that were accessed at least once in last 24 hrs | Advanced | Performance | N | - | N | 
| % Unique objects accessed count daily | - | Percentage of objects that were accessed at least once in last 24 hrs | Advanced | Performance | Y | 100 \$1 UniqueObjectsAccessedDailyCount / ObjectCount | N | 

# Setting Amazon S3 Storage Lens permissions
<a name="storage_lens_iam_permissions"></a>

Amazon S3 Storage Lens requires new permissions in AWS Identity and Access Management (IAM) to authorize access to S3 Storage Lens actions. To grant these permissions, you can use an identity-based IAM policy. You can attach this policy to IAM users, groups, or roles to grant them permissions. Such permissions can include the ability to enable or disable S3 Storage Lens, or to access any S3 Storage Lens dashboard or configuration. 

The IAM user or role must belong to the account that created or owns the dashboard or configuration, unless both of the following conditions are true: 
+ Your account is a member of AWS Organizations.
+ You were given access to create organization-level dashboards by your management account as a delegated administrator.



**Note**  
You can't use your account's root user credentials to view Amazon S3 Storage Lens dashboards. To access S3 Storage Lens dashboards, you must grant the required IAM permissions to a new or existing IAM user. Then, sign in with those user credentials to access S3 Storage Lens dashboards. For more information, see [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*. 
Using S3 Storage Lens on the Amazon S3 console can require multiple permissions. For example, to edit a dashboard on the console, you need the following permissions:  
`s3:ListStorageLensConfigurations`
`s3:GetStorageLensConfiguration`
`s3:PutStorageLensConfiguration`

**Topics**
+ [Setting account permissions to use S3 Storage Lens](#storage_lens_iam_permissions_account)
+ [Setting account permissions to use S3 Storage Lens groups](#storage_lens_groups_permissions)
+ [Setting permissions to use S3 Storage Lens with AWS Organizations](#storage_lens_iam_permissions_organizations)

## Setting account permissions to use S3 Storage Lens
<a name="storage_lens_iam_permissions_account"></a>

To create and manage S3 Storage Lens dashboards and Storage Lens dashboard configurations, you must have the following permissions, depending on which actions you want to perform: 

 The following table shows Amazon S3 Storage Lens related IAM permissions. 


| Action | IAM permissions | 
| --- | --- | 
| Create or update an S3 Storage Lens dashboard in the Amazon S3 console. |  `s3:ListStorageLensConfigurations` `s3:GetStorageLensConfiguration` `s3:GetStorageLensConfigurationTagging` `s3:PutStorageLensConfiguration` `s3:PutStorageLensConfigurationTagging`  | 
| Get the tags of an S3 Storage Lens dashboard on the Amazon S3 console. |  `s3:ListStorageLensConfigurations` `s3:GetStorageLensConfigurationTagging`  | 
| View an S3 Storage Lens dashboard on the Amazon S3 console. |  `s3:ListStorageLensConfigurations` `s3:GetStorageLensConfiguration` `s3:GetStorageLensDashboard`  | 
| Delete an S3 Storage Lens dashboard on Amazon S3 console. |  `s3:ListStorageLensConfigurations` `s3:GetStorageLensConfiguration` `s3:DeleteStorageLensConfiguration`  | 
| Create or update an S3 Storage Lens configuration by using the AWS CLI or an AWS SDK. |  `s3:PutStorageLensConfiguration` `s3:PutStorageLensConfigurationTagging`  | 
| Get the tags of an S3 Storage Lens configuration by using the AWS CLI or an AWS SDK. |  `s3:GetStorageLensConfigurationTagging`  | 
| View an S3 Storage Lens configuration by using the AWS CLI or an AWS SDK. |  `s3:GetStorageLensConfiguration`  | 
| Delete an S3 Storage Lens configuration by using the AWS CLI or AWS SDK. |  `s3:DeleteStorageLensConfiguration`  | 

**Note**  
You can use resource tags in an IAM policy to manage permissions.
An IAM user or role with these permissions can see metrics from buckets and prefixes that they might not have direct permission to read or list objects from.
For S3 Storage Lens dashboards with prefix-level metrics enabled, if a selected prefix path matches with an object key, the dashboard might display the object key as another prefix.
For metrics exports, which are stored in a bucket in your account, permissions are granted by using the existing `s3:GetObject` permission in the IAM policy. Similarly, for an AWS Organizations entity, the organization's management account or delegated administrator accounts can use IAM policies to manage access permissions for organization-level dashboard and configurations.

## Setting account permissions to use S3 Storage Lens groups
<a name="storage_lens_groups_permissions"></a>

You can use S3 Storage Lens groups to understand the distribution of your storage within buckets based on prefix, suffix, object tag, object size, or object age. You can attach Storage Lens groups to your dashboards to view their aggregated metrics.

To work with Storage Lens groups, you need certain permissions. For more information, see [Storage Lens groups permissions](storage-lens-groups.md#storage-lens-group-permissions). 



## Setting permissions to use S3 Storage Lens with AWS Organizations
<a name="storage_lens_iam_permissions_organizations"></a>

You can use Amazon S3 Storage Lens to collect storage metrics and usage data for all accounts that are part of your AWS Organizations hierarchy. The following table shows the actions and permissions related to using S3 Storage Lens with Organizations.


| Action | IAM Permissions | 
| --- | --- | 
| Enable trusted access for S3 Storage Lens for your organization. |  `organizations:EnableAWSServiceAccess`  | 
| Disable trusted access for S3 Storage Lens for your organization. |  `organizations:DisableAWSServiceAccess`  | 
| Register a delegated administrator to create S3 Storage Lens dashboards or configurations for your organization. |  `organizations:RegisterDelegatedAdministrator`  | 
| Deregister a delegated administrator so that they can no longer create S3 Storage Lens dashboards or configurations for your organization. |  `organizations:DeregisterDelegatedAdministrator`  | 
|  Additional permissions to create S3 Storage Lens organization-wide configurations.  |  `organizations:DescribeOrganization` `organizations:ListAccounts` `organizations:ListAWSServiceAccessForOrganization` `organizations:ListDelegatedAdministrators` `iam:CreateServiceLinkedRole`  | 

# Working with Amazon S3 Storage Lens by using the console and API
<a name="S3LensExamples"></a>

Amazon S3 Storage Lens is a cloud-storage analytics feature that you can use to gain organization-wide visibility into object-storage usage and activity. You can use S3 Storage Lens metrics to generate summary insights, such as finding out how much storage you have across your entire organization or which are the fastest-growing buckets and prefixes. You can also use S3 Storage Lens metrics to identify cost-optimization opportunities, implement data-protection and security best practices, and improve the performance of application workloads. For example, you can identify buckets that don't have S3 Lifecycle rules to expire incomplete multipart uploads that are more than 7 days old. You can also identify buckets that aren't following data-protection best practices, such as using S3 Replication or S3 Versioning. S3 Storage Lens also analyzes metrics to deliver contextual recommendations that you can use to optimize storage costs and apply best practices for protecting your data. 

S3 Storage Lens aggregates your metrics and displays the information in the **Account snapshot** section on the Amazon S3 console **Buckets** page. S3 Storage Lens also provides an interactive dashboard that you can use to visualize insights and trends, flag outliers, and receive recommendations for optimizing storage costs and applying data protection best practices. Your dashboard has drill-down options to generate and visualize insights at the organization, account, AWS Region, storage class, bucket, prefix, or Storage Lens group level. You can also send a daily metrics report in CSV or Parquet format to a general purpose S3 bucket or export the metrics directly to an AWS-managed S3 table bucket. 

**Note**  
Storage Lens only aggregates metrics for [S3 general purpose buckets](UsingBucket.md).

The following sections contain examples of creating, updating, and viewing S3 Storage Lens configurations and performing operations related to the feature. If you are using S3 Storage Lens with AWS Organizations, these examples also cover those use cases. In the examples, replace any placeholder values.

**Topics**
+ [Create an Amazon S3 Storage Lens dashboard](storage_lens_creating_dashboard.md)
+ [Update an Amazon S3 Storage Lens dashboard](storage_lens_editing.md)
+ [Disable an Amazon S3 Storage Lens dashboard](storage_lens_disabling.md)
+ [Delete an Amazon S3 Storage Lens dashboard](storage_lens_deleting.md)
+ [List Amazon S3 Storage Lens dashboards](storage_lens_list_dashboard.md)
+ [View an Amazon S3 Storage Lens dashboard configuration details](storage_lens_viewing.md)
+ [Managing AWS resource tags with S3 Storage Lens](storage-lens-groups-manage-tags-dashboard.md)
+ [Helper files for using Amazon S3 Storage Lens](S3LensHelperFilesCLI.md)

# Create an Amazon S3 Storage Lens dashboard
<a name="storage_lens_creating_dashboard"></a>

You can create additional S3 Storage Lens custom dashboards that can be scoped to your organization in AWS Organizations or to specific AWS Regions or buckets within an account. 

**Note**  
Any updates to your dashboard configuration can take up to 48 hours to accurately display or visualize.

## Using the S3 console
<a name="storage_lens_console_creating"></a>

Use the following steps to create an Amazon S3 Storage Lens dashboard on the Amazon S3 console.

**Step 1: Configure general settings**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to switch to. 

1. In the left navigation pane, under **S3 Storage Lens**, choose **Dashboards**.

1. Choose **Create dashboard**.

1. On the **Dashboard** page, in the **General** section, do the following:

   1. View the **Home Region** for your dashboard. The home Region is the AWS Region where the configuration and metrics for this Storage Lens dashboard are stored.

   1. Enter a dashboard name. 

      Dashboard names must be fewer than 65 characters and must not contain special characters or spaces. 
**Note**  
You can't change this dashboard name after the dashboard is created.

   1. Choose **Enabled** to display updated daily metrics in your dashboard.

   1. (Optional) You can choose to add **Tags** to your dashboard. You can use tags to manage permissions for your dashboard and track costs for S3 Storage Lens. For more information, see [Controlling access to AWS resources using tags](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html) in the *IAM User Guide* and [Using AWS-generated tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/aws-tags.html) in the *AWS Billing User Guide*.
**Note**  
You can add up to 50 tags to your dashboard configuration.

1. Choose **Next** to save your changes and proceed.

**Step 2: Define the dashboard scope**

1. In the **Dashboard scope** section, choose the Regions and buckets that you want S3 Storage Lens to include or exclude in the dashboard.

1. Choose the buckets in your selected Regions that you want S3 Storage Lens to include or exclude. You can either include or exclude buckets, but not both. This option isn't available when you create organization-level dashboards.
**Note**  
You can either include or exclude Regions and buckets. This option is limited to Regions only when creating organization-level dashboards across member accounts in your organization. 
You can choose up to 50 buckets to include or exclude.

1. Choose **Next** to save your changes and proceed.

**Step 3: Choose your Storage Lens tier**

1. In the **Storage Lens tier** section, choose the tier of features that you want to aggregate for this dashboard.

   1. To include free metrics aggregated at the bucket level and available for queries for 14 days, choose **Free tier**.

   1. To enable advanced metrics, choose **Advanced tier**. These options include prefix or Storage Lens groups aggregation, Amazon CloudWatch publishing, the expanded prefixes report, and contextual recommendations. Data is available for queries for 15 months. Advanced metrics and recommendations have an additional cost. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

      For more information about advanced metrics and free metrics, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).

1. Under **Advanced metric categories**, select the category of metrics that you want to enable:
   + **Activity metrics**
   + **Detailed status code metrics**
   + **Cost optimization metrics**
   + **Data protection metrics**
   + **Performance metrics**

   To preview which metrics are included in each category, use the drop-down arrow button below the metrics category checkbox list. For more information about metrics categories, see [Metrics categories](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_types). For a complete list of metrics, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).

1. Choose or specify a **Prefix delimiter** to distinguish levels within each prefix. This value is used to identify each prefix level. The default value in Amazon S3 is the "`/`" character, but your storage structure might use other delimiter characters.

1. Choose **Next** to save your changes and proceed.

**Step 4: (Optional) Choose your metrics aggregation**

1. Under **Additional metrics aggregation**, choose which metrics you want to aggregate:
   + Prefix aggregation
   + Storage Lens group aggregation

1. If you've enabled **Prefix aggregation**, specify the minimum **Prefix threshold** for your dashboard and **Prefix depth**. Then, choose **Next** to save and proceed.
**Note**  
The **Prefix depth** setting determines how many hierarchical levels deep S3 Storage Lens will analyze your object prefixes, with a maximum limit of 10 levels. The **Prefix threshold** specifies the minimum percentage of total storage that a prefix must represent before it's included in Storage Lens metrics.

1. If you've enabled **Storage Lens group aggregation**, choose one of the following:
   + **Include Storage Lens groups**
   + **Exclude Storage Lens groups**

1. When you include Storage Lens groups in your aggregation, you can either **Include all Storage Lens groups in your home Region** or specify Storage Lens groups to include.

1. Choose **Next** to save your changes and proceed.

**Step 5: (Optional) Choose your metrics export and publishing settings**

1. Under **Metrics publishing**, choose **CloudWatch publishing** if you want to access your Storage Lens metrics in your CloudWatch dashboard.
**Note**  
Prefix-level metrics aren't available in CloudWatch.

1. Under **Metrics export**, choose which Storage Lens dashboard data you want exported daily:
   + **Default metrics report**
   + **Expanded prefixes metrics report**

1. (Optional) If you chose **Default metrics report**, in the **Default metrics report** settings, choose the bucket type. You can export the report to either a general purpose Amazon S3 bucket or AWS-managed S3 table bucket. Based on the selected bucket type, update the **General purpose bucket destination settings** or **Table bucket destination settings** options.
**Note**  
The **default metrics report** only includes prefixes within the set threshold and depth set in prefix aggregation settings.  
If you choose to specify an encryption key, you must choose an AWS KMS key (SSE-KMS) or Amazon S3 managed key (SSE-S3). If your destination bucket policy requires encryption, you must provide an encryption key for your metrics export. Without the encryption key, the export to S3 fails. For more information, see [Using an AWS KMS key to encrypt your metrics exports](storage_lens_encrypt_permissions.md).

1. (Optional) If you chose **Expanded prefixes metrics report**, in the **Expanded prefixes metrics report** settings, choose the bucket type. You can export the report to either a general purpose Amazon S3 bucket or a read-only S3 table bucket. Based on the selected bucket type, update the **General purpose bucket destination settings** or **Table bucket destination settings**.
**Note**  
The **Expanded prefixes metrics report** includes all prefixes up to prefix depth 50 in all selected buckets that are specified in your dashboard scope.  
If you choose to specify an encryption key, you must choose an AWS KMS key (SSE-KMS) or Amazon S3 managed key (SSE-S3). If your destination bucket policy requires encryption, you must provide an encryption key for your metrics export. Without the encryption key, the export to S3 fails. For more information, see [Using an AWS KMS key to encrypt your metrics exports](storage_lens_encrypt_permissions.md).

1. Choose **Next** to save your changes and proceed.

1. Review everything on the **Review and Create** page. If there are no additional changes, choose **Next** to save your changes and to create your dashboard.

**Step 6: Review your dashboard configuration and create your dashboard**

1. In the **General** section, review your settings. Choose **Edit** to make any changes.

1. In the **Dashboard scope** section, review your settings. Choose **Edit** to make any changes.

1. In the **Storage Lens tier** section, review your settings. Choose **Edit** to make any changes.

1. In the **Metrics aggregation** section, review your settings. Choose **Edit** to make any changes.

1. In the **Metrics export** section, review your settings. Choose **Edit** to make any changes.

1. After reviewing and confirming all your dashboard configuration settings, choose **Submit** to create your dashboard.

After you've successfully created your new Storage Lens dashboard, you can view your new dashboard listed under your Storage Lens **Dashboard** page.

## Using the AWS CLI
<a name="S3PutStorageLensConfigurationTagsCLI"></a>

**Example**  
The following example command creates a Amazon S3 Storage Lens configuration with tags. To use these examples, replace the `user input placeholders` with your own information.  

```
aws s3control put-storage-lens-configuration --account-id=111122223333 --config-id=example-dashboard-configuration-id --region=us-east-1 --storage-lens-configuration=file://./config.json --tags=file://./tags.json
```

**Example**  
The following example command creates a Amazon S3 Storage Lens configuration without tags. To use these examples, replace the `user input placeholders` with your own information.  

```
aws s3control put-storage-lens-configuration --account-id=222222222222 --config-id=your-configuration-id --region=us-east-1 --storage-lens-configuration=file://./config.json
```

## Using the AWS SDK for Java
<a name="S3CreateandUpdateStorageLensConfigurationJava"></a>

**Example – Create and update an Amazon S3 Storage Lens configuration**  
The following example creates and updates an Amazon S3 Storage Lens configuration in SDK for Java:  

```
package aws.example.s3control;

import software.amazon.awssdk.awscore.exception.AwsServiceException;
import software.amazon.awssdk.core.exception.SdkClientException;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.AccountLevel;
import software.amazon.awssdk.services.s3control.model.ActivityMetrics;
import software.amazon.awssdk.services.s3control.model.AdvancedCostOptimizationMetrics;
import software.amazon.awssdk.services.s3control.model.AdvancedDataProtectionMetrics;
import software.amazon.awssdk.services.s3control.model.AdvancedPerformanceMetrics;
import software.amazon.awssdk.services.s3control.model.BucketLevel;
import software.amazon.awssdk.services.s3control.model.CloudWatchMetrics;
import software.amazon.awssdk.services.s3control.model.DetailedStatusCodesMetrics;
import software.amazon.awssdk.services.s3control.model.Format;
import software.amazon.awssdk.services.s3control.model.Include;
import software.amazon.awssdk.services.s3control.model.OutputSchemaVersion;
import software.amazon.awssdk.services.s3control.model.PrefixLevel;
import software.amazon.awssdk.services.s3control.model.PrefixLevelStorageMetrics;
import software.amazon.awssdk.services.s3control.model.PutStorageLensConfigurationRequest;
import software.amazon.awssdk.services.s3control.model.S3BucketDestination;
import software.amazon.awssdk.services.s3control.model.SSES3;
import software.amazon.awssdk.services.s3control.model.SelectionCriteria;
import software.amazon.awssdk.services.s3control.model.StorageLensAwsOrg;
import software.amazon.awssdk.services.s3control.model.StorageLensConfiguration;
import software.amazon.awssdk.services.s3control.model.StorageLensDataExport;
import software.amazon.awssdk.services.s3control.model.StorageLensDataExportEncryption;
import software.amazon.awssdk.services.s3control.model.StorageLensExpandedPrefixesDataExport;
import software.amazon.awssdk.services.s3control.model.StorageLensTableDestination;
import software.amazon.awssdk.services.s3control.model.StorageLensTag;

import java.util.Arrays;
import java.util.List;

public class CreateAndUpdateDashboard {

    public static void main(String[] args) {
        String configurationId = "ConfigurationId";
        String sourceAccountId = "111122223333";
        String exportAccountId = "Destination Account ID";
        String exportBucketArn = "arn:aws:s3:::destBucketName"; // The destination bucket for your metrics export must be in the same Region as your S3 Storage Lens configuration.
        String awsOrgARN = "arn:aws:organizations::123456789012:organization/o-abcdefgh";
        Format exportFormat = Format.CSV;

        try {
            SelectionCriteria selectionCriteria = SelectionCriteria.builder()
                    .delimiter("/")
                    .maxDepth(5)
                    .minStorageBytesPercentage(10.0)
                    .build();

            PrefixLevelStorageMetrics prefixStorageMetrics = PrefixLevelStorageMetrics.builder()
                    .isEnabled(true)
                    .selectionCriteria(selectionCriteria)
                    .build();

            BucketLevel bucketLevel = BucketLevel.builder()
                    .activityMetrics(ActivityMetrics.builder().isEnabled(true).build())
                    .advancedCostOptimizationMetrics(AdvancedCostOptimizationMetrics.builder().isEnabled(true).build())
                    .advancedDataProtectionMetrics(AdvancedDataProtectionMetrics.builder().isEnabled(true).build())
                    .advancedPerformanceMetrics(AdvancedPerformanceMetrics.builder().isEnabled(true).build())
                    .detailedStatusCodesMetrics(DetailedStatusCodesMetrics.builder().isEnabled(true).build())
                    .prefixLevel(PrefixLevel.builder().storageMetrics(prefixStorageMetrics).build())
                    .build();

            AccountLevel accountLevel = AccountLevel.builder()
                    .activityMetrics(ActivityMetrics.builder().isEnabled(true).build())
                    .advancedCostOptimizationMetrics(AdvancedCostOptimizationMetrics.builder().isEnabled(true).build())
                    .advancedPerformanceMetrics(AdvancedPerformanceMetrics.builder().isEnabled(true).build())
                    .advancedDataProtectionMetrics(AdvancedDataProtectionMetrics.builder().isEnabled(true).build())
                    .detailedStatusCodesMetrics(DetailedStatusCodesMetrics.builder().isEnabled(true).build())
                    .bucketLevel(bucketLevel)
                    .build();

            Include include = Include.builder()
                    .buckets(Arrays.asList("arn:aws:s3:::bucketName"))
                    .regions(Arrays.asList("us-west-2"))
                    .build();

            StorageLensDataExportEncryption exportEncryption = StorageLensDataExportEncryption.builder()
                    .sses3(SSES3.builder().build())
                    .build();

            S3BucketDestination s3BucketDestination = S3BucketDestination.builder()
                    .accountId(exportAccountId)
                    .arn(exportBucketArn)
                    .encryption(exportEncryption)
                    .format(exportFormat)
                    .outputSchemaVersion(OutputSchemaVersion.V_1)
                    .prefix("Prefix")
                    .build();

            StorageLensTableDestination s3TablesDestination = StorageLensTableDestination.builder()
                    .encryption(exportEncryption)
                    .isEnabled(true)
                    .build();

            CloudWatchMetrics cloudWatchMetrics = CloudWatchMetrics.builder()
                    .isEnabled(true)
                    .build();

            StorageLensDataExport dataExport = StorageLensDataExport.builder()
                    .cloudWatchMetrics(cloudWatchMetrics)
                    .s3BucketDestination(s3BucketDestination)
                    .storageLensTableDestination(s3TablesDestination)
                    .build();

            StorageLensAwsOrg awsOrg = StorageLensAwsOrg.builder()
                    .arn(awsOrgARN)
                    .build();

            StorageLensExpandedPrefixesDataExport expandedPrefixesDataExport = StorageLensExpandedPrefixesDataExport.builder()
                    .s3BucketDestination(s3BucketDestination)
                    .storageLensTableDestination(s3TablesDestination)
                    .build();

            StorageLensConfiguration configuration = StorageLensConfiguration.builder()
                    .id(configurationId)
                    .accountLevel(accountLevel)
                    .include(include)
                    .dataExport(dataExport)
                    .awsOrg(awsOrg)
                    .expandedPrefixesDataExport(expandedPrefixesDataExport)
                    .prefixDelimiter("/")
                    .isEnabled(true)
                    .build();

            List<StorageLensTag> tags = Arrays.asList(
                    StorageLensTag.builder().key("key-1").value("value-1").build(),
                    StorageLensTag.builder().key("key-2").value("value-2").build()
            );

            S3ControlClient s3ControlClient = S3ControlClient.builder()
                    .region(Region.US_WEST_2)
                    .credentialsProvider(ProfileCredentialsProvider.create())
                    .build();

            s3ControlClient.putStorageLensConfiguration(PutStorageLensConfigurationRequest.builder()
                    .accountId(sourceAccountId)
                    .configId(configurationId)
                    .storageLensConfiguration(configuration)
                    .tags(tags)
                    .build()
            );

        } catch (AwsServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

For access to S3 Storage Lens groups or expanded prefixes, you must upgrade your dashboard to use the advanced tier. Additional charges apply. For more information about the free and advanced tiers, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection). For more information about S3 Storage Lens groups, see [Working with S3 Storage Lens groups to filter and aggregate metrics](storage-lens-groups-overview.md). 

# Update an Amazon S3 Storage Lens dashboard
<a name="storage_lens_editing"></a>

 The Amazon S3 Storage Lens default dashboard is `default-account-dashboard`. This dashboard is preconfigured by Amazon S3 to help you visualize summarized insights and trends for your entire account's aggregated free and advanced metrics on the console. You can't modify the default dashboard's configuration scope, but you can upgrade the metrics selection from the free metrics to the paid advanced metrics and recommendations, configure the optional metrics export, or even disable the default dashboard. The default dashboard can't be deleted, and can only be disabled. For more information, see [Using the S3 console](storage_lens_console_deleting.md).

## Using the S3 console
<a name="storage_lens_console_editing"></a>

Use the following steps to update an Amazon S3 Storage Lens dashboard on the Amazon S3 console.

**Step 1: Update your dashboard and configure your general settings**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens, Dashboards**.

1. Choose the dashboard that you want to edit.

1. Choose **View dashboard configuration**.

1. Choose **Edit**. You can now review the dashboard configuration, step by step. To make changes to any of the steps, you can click directly on the step using the left navigation. For instructions on how to update those steps,
**Note**  
You can't change the following:  
The dashboard name
The home Region

1. On the **Dashboard** page, in the **General** section, you can make changes to the following:
   + Choose **Enabled** or **Disabled** to update whether you're receiving daily metrics in your dashboard.
   + (Optional) You can choose to add **Tags** to your dashboard. You can use tags to manage permissions for your dashboard and track costs for S3 Storage Lens. For more information, see [Controlling access to AWS resources using tags](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html) in the *IAM User Guide* and [Using AWS-generated tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/aws-tags.html) in the *AWS Billing User Guide*.
**Note**  
You can add up to 50 tags to your dashboard configuration.

1. Choose **Next** to save your changes and proceed.

**Step 2: Update the dashboard scope**

1. In the **Dashboard scope** section, update the Regions and buckets that you want S3 Storage Lens to include or exclude in the dashboard.
**Note**  
You can either include or exclude Regions and buckets. This option is limited to Regions only when creating organization-level dashboards across member accounts in your organization. 
You can choose up to 50 buckets to include or exclude.

1. Choose the buckets in your selected Regions that you want S3 Storage Lens to include or exclude. You can either include or exclude buckets, but not both. This option isn't available when you create organization-level dashboards.
**Note**  
You can either include or exclude Regions and buckets. This option is limited to Regions only when creating organization-level dashboards across member accounts in your organization.
You can choose up to 50 buckets to include or exclude.

1. Choose **Next** to save your changes and proceed.

**Step 3: Update your Storage Lens tier Configure the metrics selection**

1. In the **Storage Lens tier** **Metrics selection** section, update the tier of metrics that you want to aggregate for this dashboard.
**Note**  
If you're updating from the **Free tier** to the **Advanced tier**, you'll need to update your **Metrics aggregation** settings. To update your **Metrics aggregation settings**, see **Step 4: Update your metrics aggregation**.
If you're updating your Storage Lens tier from the **Advanced tier** to the **Free tier**, you won't need to update any **Metrics aggregation** settings. The **Metrics aggregation** feature only applies to **Advanced tier** metric categories.

1. To include free metrics aggregated at the bucket level and available for queries for 14 days, choose **Free tier**.

1. To enable advanced metrics, choose **Advanced tier**. These options include prefix aggregation, Amazon CloudWatch publishing, and contextual recommendations. Data is available for queries for 15 months. Advanced metrics and recommendations have an additional cost. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

   For more information about advanced metrics and free metrics, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).

1. Under **Advanced metric categories**, choose the category of metrics that you want to enable:
   + **Activity metrics**
   + **Detailed status code metrics**
   + **Cost optimization metrics**
   + **Data protection metrics**
   + **Performance metrics**

   To preview which metrics are included in each category, use the drop-down arrow button below the metrics category checkbox list. For more information about metrics categories, see [Metrics categories](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_types). For a complete list of metrics, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).

1. Choose or specify a **Prefix delimiter** to distinguish levels within each prefix. This value is used to identify each prefix level. The default value in Amazon S3 is the "`/`" character, but your storage structure might use other delimiter characters.

1. Choose **Next** to save your changes and proceed.

**Step 4: (Optional) Update your metrics aggregation**

1. Under **Additional metrics aggregation**, update which metrics you want to aggregate by choosing one of the following:
   + Prefix aggregation
   + Storage Lens group aggregation

1. If you've enabled **Prefix aggregation**, specify the minimum **Prefix threshold** for your dashboard and **Prefix depth**. Then, choose **Next** to save and proceed.

1. If you've enabled **Storage Lens group aggregation**, choose one of the following:
   + **Include Storage Lens groups**
   + **Exclude Storage Lens groups**

1. When you include Storage Lens groups in your aggregation, you can either **Include all Storage Lens groups in your home Region** or specify Storage Lens groups to include.

1. Choose **Next** to save your changes and proceed.

**Step 5: (Optional) Update your metrics export and publishing settings**

1. Under **Metrics publishing**, choose **CloudWatch publishing** if you want to access your Storage Lens metrics in your CloudWatch dashboard.
**Note**  
Prefix-level metrics aren't available in CloudWatch.

1. Under **Metrics export**, choose which Storage Lens dashboard data you want exported daily:
   + **Default metrics report**
   + **Expanded prefixes metrics report**

1. (Optional) If you chose **Default metrics report**, in the **Default metrics report** settings, choose the bucket type. You can export the report to either a general purpose S3 bucket or a read-only S3 table bucket. Based on the selected bucket type, update the **General purpose bucket destination settings** or **Table bucket destination settings** options.
**Note**  
The **default metrics report** only includes prefixes within the set threshold and depth set in prefix aggregation settings. If your prefix aggregation isn't already configured, the threshold includes up to the 100 largest prefixes by size.
If you choose to specify an encryption key, you must choose an AWS KMS key (SSE-KMS) or Amazon S3 managed key (SSE-S3). If your destination bucket policy requires encryption, you must provide an encryption key for your metrics export. Without the encryption key, the export to S3 fails. For more information, see [Using an AWS KMS key to encrypt your metrics exports](storage_lens_encrypt_permissions.md).

1. Choose **Next** to save your changes and proceed.

1. (Optional) If you chose **Expanded prefixes metrics report**, in the **Expanded prefixes metrics report** settings, choose the bucket type. You can export the report to either a general purpose S3 bucket or a read-only S3 table bucket. Based on the selected bucket type, update the **General purpose bucket destination settings** or **Table bucket destination settings**.
**Note**  
The **Expanded prefixes metrics report** includes prefixes in all buckets that are specified in your dashboard scope.
If you choose to specify an encryption key, you must choose an AWS KMS key (SSE-KMS) or Amazon S3 managed key (SSE-S3). If your destination bucket policy requires encryption, you must provide an encryption key for your metrics export. Without the encryption key, the export to S3 fails. For more information, see [Using an AWS KMS key to encrypt your metrics exports](storage_lens_encrypt_permissions.md).

1. Choose **Next** to save your changes and proceed.

**Step 6: Review and update your dashboard configuration**

1. In the **General** section, review your settings. Choose **Edit** to make any changes.

1. In the **Dashboard scope** section, review your settings. Choose **Edit** to make any changes.

1. In the **Storage Lens tier** section, review your settings. Choose **Edit** to make any changes.

1. In the **Metrics aggregation** section, review your settings. Choose **Edit** to make any changes.

1. In the **Metrics export** section, review your settings. Choose **Edit** to make any changes.

1. After reviewing and confirming all your dashboard configuration settings, choose **Submit** to update your dashboard.

After you've successfully updated your new Storage Lens dashboard, you can view your updated dashboard configuration listed under your Storage Lens **Dashboard** page.

## Using the AWS CLI
<a name="S3PutStorageLensConfigurationTagsCLI"></a>

**Example**  
The following example command updates a Amazon S3 Storage Lens dashboard configuration. To use these examples, replace the `user input placeholders` with your own information.  

```
aws s3control put-storage-lens-configuration --account-id=111122223333 --config-id=example-dashboard-configuration-id --region=us-east-1 --storage-lens-configuration=file://./config.json --tags=file://./tags.json
```

## Using the AWS SDK for Java
<a name="S3UpdateStorageLensConfigurationAdvancedJava"></a>

**Example – Update a Amazon S3 Storage Lens configuration with advanced metrics and recommendations**  
The following examples shows you how to update the default S3 Storage Lens configuration with advanced metrics and recommendations in SDK for Java:  

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3control.AWSS3Control;
import com.amazonaws.services.s3control.AWSS3ControlClient;
import com.amazonaws.services.s3control.model.AccountLevel;
import com.amazonaws.services.s3control.model.ActivityMetrics;
import com.amazonaws.services.s3control.model.BucketLevel;
import com.amazonaws.services.s3control.model.Format;
import com.amazonaws.services.s3control.model.Include;
import com.amazonaws.services.s3control.model.OutputSchemaVersion;
import com.amazonaws.services.s3control.model.PrefixLevel;
import com.amazonaws.services.s3control.model.PrefixLevelStorageMetrics;
import com.amazonaws.services.s3control.model.PutStorageLensConfigurationRequest;
import com.amazonaws.services.s3control.model.S3BucketDestination;
import com.amazonaws.services.s3control.model.SSES3;
import com.amazonaws.services.s3control.model.SelectionCriteria;
import com.amazonaws.services.s3control.model.StorageLensAwsOrg;
import com.amazonaws.services.s3control.model.StorageLensConfiguration;
import com.amazonaws.services.s3control.model.StorageLensDataExport;
import com.amazonaws.services.s3control.model.StorageLensDataExportEncryption;
import com.amazonaws.services.s3control.model.StorageLensTag;

import java.util.Arrays;
import java.util.List;

import static com.amazonaws.regions.Regions.US_WEST_2;

public class UpdateDefaultConfigWithPaidFeatures {

    public static void main(String[] args) {
        String configurationId = "default-account-dashboard"; // This configuration ID cannot be modified.
        String sourceAccountId = "111122223333";

        try {
            SelectionCriteria selectionCriteria = new SelectionCriteria()
                    .withDelimiter("/")
                    .withMaxDepth(5)
                    .withMinStorageBytesPercentage(10.0);
            PrefixLevelStorageMetrics prefixStorageMetrics = new PrefixLevelStorageMetrics()
                    .withIsEnabled(true)
                    .withSelectionCriteria(selectionCriteria);
            BucketLevel bucketLevel = new BucketLevel()
                    .withActivityMetrics(new ActivityMetrics().withIsEnabled(true))
                    .withPrefixLevel(new PrefixLevel().withStorageMetrics(prefixStorageMetrics));
            AccountLevel accountLevel = new AccountLevel()
                    .withActivityMetrics(new ActivityMetrics().withIsEnabled(true))
                    .withBucketLevel(bucketLevel);

            StorageLensConfiguration configuration = new StorageLensConfiguration()
                    .withId(configurationId)
                    .withAccountLevel(accountLevel)
                    .withIsEnabled(true);

            AWSS3Control s3ControlClient = AWSS3ControlClient.builder()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(US_WEST_2)
                    .build();

            s3ControlClient.putStorageLensConfiguration(new PutStorageLensConfigurationRequest()
                    .withAccountId(sourceAccountId)
                    .withConfigId(configurationId)
                    .withStorageLensConfiguration(configuration)
            );

        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

For access to S3 Storage Lens groups or expanded prefixes, you must upgrade your dashboard to use the advanced tier. Additional charges apply. For more information about the free and advanced tiers, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection). For more information about S3 Storage Lens groups, see [Working with S3 Storage Lens groups to filter and aggregate metrics](storage-lens-groups-overview.md). 

# Disable an Amazon S3 Storage Lens dashboard
<a name="storage_lens_disabling"></a>

You can disable an Amazon S3 Storage Lens dashboard from the Amazon S3 console. Disabling a dashboard prevents it from generating metrics in the future. A disabled dashboard still retains its configuration information, so that it can be easily resumed when re-enabled. A disabled dashboard retains its historical data until it's no longer available for queries.

# Using the S3 console
<a name="storage_lens_console_disabling"></a>

Use the following steps to disable an Amazon S3 Storage Lens dashboard on the Amazon S3 console.

**To disable an Amazon S3 Storage Lens dashboard**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to disable, and then choose **Disable** at the top of the list.

1. On the confirmation page, confirm that you want to disable the dashboard by entering the name of dashboard in the text field, and then choose **Confirm**.

# Delete an Amazon S3 Storage Lens dashboard
<a name="storage_lens_deleting"></a>

You can't delete the default dashboard. However, you can disable it. Before deleting a dashboard that you've created, consider the following:
+ As an alternative to deleting a dashboard, you can *disable* the dashboard so that it is available to be re-enabled in the future. For more information, see [Using the S3 console](storage_lens_console_disabling.md).
+ Deleting the dashboard deletes all the configuration settings that are associated with it.
+ Deleting a dashboard makes all the historic metrics data unavailable. This historical data is still retained for 15 months. If you want to access this data again, create a dashboard with the same name in the same home Region as the one that was deleted. 

# Using the S3 console
<a name="storage_lens_console_deleting"></a>

You can delete an Amazon S3 Storage Lens dashboard from the Amazon S3 console. However, deleting a dashboard prevents it from generating metrics in the future.

**Deleting an Amazon S3 Storage Lens dashboard**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to delete, and then choose **Delete** at the top of the list.

1. On the **Delete dashboards** page, confirm that you want to delete the dashboard by entering the name of dashboard in the text field. Then choose **Confirm**. 

# Using the AWS CLI
<a name="storage_lens_cli_deleting"></a>

**Example**  
 The following example deletes a S3 Storage Lens configuration. To use these examples, replace the `user input placeholders` with your own information.  

```
aws s3control delete-storage-lens-configuration --account-id=222222222222 --region=us-east-1 --config-id=your-configuration-id
```

## Using the AWS SDK for Java
<a name="S3DeleteStorageLensConfigurationJava"></a>

**Example – Delete an Amazon S3 Storage Lens dashboard configuration**  
The following example shows you how to delete an S3 Storage Lens configuration using SDK for Java:  

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3control.AWSS3Control;
import com.amazonaws.services.s3control.AWSS3ControlClient;
import com.amazonaws.services.s3control.model.DeleteStorageLensConfigurationRequest;

import static com.amazonaws.regions.Regions.US_WEST_2;

public class DeleteDashboard {

    public static void main(String[] args) {
        String configurationId = "ConfigurationId";
        String sourceAccountId = "111122223333";
        try {
            AWSS3Control s3ControlClient = AWSS3ControlClient.builder()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(US_WEST_2)
                    .build();

            s3ControlClient.deleteStorageLensConfiguration(new DeleteStorageLensConfigurationRequest()
                    .withAccountId(sourceAccountId)
                    .withConfigId(configurationId)
            );
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

# List Amazon S3 Storage Lens dashboards
<a name="storage_lens_list_dashboard"></a>

 

# Using the S3 console
<a name="storage_lens_console_listing"></a>

**To list S3 Storage Lens dashboards**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, navigate to **Storage Lens**.

1. Choose **Dashboards**. You can now view the dashboards in your AWS account.

## Using the AWS CLI
<a name="S3ListStorageLensConfigurationsCLI"></a>

**Example**  
The following example command lists the S3 Storage Lens dashboards in your AWS account. To use these examples, replace the `user input placeholders` with your own information.  

```
aws s3control list-storage-lens-configurations --account-id=222222222222 --region=us-east-1 --next-token=abcdefghij1234
```

**Example**  
The following example lists S3 Storage Lens configurations without a next token. To use these examples, replace the `user input placeholders` with your own information.  

```
aws s3control list-storage-lens-configurations --account-id=222222222222 --region=us-east-1
```

## Using the AWS SDK for Java
<a name="S3ListStorageLensConfigurationsJava"></a>

**Example – List S3 Storage Lens dashboard configurations**  
The following examples shows you how to list S3 Storage Lens configurations in SDK for Java. To use this example, replace the `user input placeholders` with your own information." to each example description.  

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3control.AWSS3Control;
import com.amazonaws.services.s3control.AWSS3ControlClient;
import com.amazonaws.services.s3control.model.ListStorageLensConfigurationEntry;
import com.amazonaws.services.s3control.model.ListStorageLensConfigurationsRequest;

import java.util.List;

import static com.amazonaws.regions.Regions.US_WEST_2;

public class ListDashboard {

    public static void main(String[] args) {
        String sourceAccountId = "111122223333";
        String nextToken = "nextToken";

        try {
            AWSS3Control s3ControlClient = AWSS3ControlClient.builder()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(US_WEST_2)
                    .build();

            final List<ListStorageLensConfigurationEntry> configurations =
                    s3ControlClient.listStorageLensConfigurations(new ListStorageLensConfigurationsRequest()
                            .withAccountId(sourceAccountId)
                            .withNextToken(nextToken)
                    ).getStorageLensConfigurationList();

            System.out.println(configurations.toString());
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

# View an Amazon S3 Storage Lens dashboard configuration details
<a name="storage_lens_viewing"></a>

You can view a Amazon S3 Storage Lens dashboard from the Amazon S3 console, AWS CLI, and SDK for Java.

# Using the S3 console
<a name="storage_lens_console_viewing"></a>

**To view S3 Storage Lens dashboard configuration details**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. On the left navigation pane, navigate to **Storage Lens**.

1. Choose **Dashboards**.

1. From the **Dashboards** list, click on the dashboard that you want to view. You can now view the details of your Storage Lens dashboard.

## Using the AWS CLI
<a name="S3ListStorageLensConfigurationsCLI"></a>

**Example**  
The following example retrieves an S3 Storage Lens configuration so that you can view the configuration details. To use these examples, replace the `user input placeholders` with your own information.  

```
aws s3control get-storage-lens-configuration --account-id=222222222222 --config-id=your-configuration-id --region=us-east-1
```

## Using the AWS SDK for Java
<a name="S3GetStorageLensConfigurationJava"></a>

**Example – Retrieve and view an S3 Storage Lens configuration**  
The following example shows you how to retrieve an S3 Storage Lens configuration in SDK for Java so that you can view the configuration details. To use this example, replace the `user input placeholders` with your own information.  

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3control.AWSS3Control;
import com.amazonaws.services.s3control.AWSS3ControlClient;
import com.amazonaws.services.s3control.model.GetStorageLensConfigurationRequest;
import com.amazonaws.services.s3control.model.GetStorageLensConfigurationResult;
import com.amazonaws.services.s3control.model.StorageLensConfiguration;

import static com.amazonaws.regions.Regions.US_WEST_2;

public class GetDashboard {

    public static void main(String[] args) {
        String configurationId = "ConfigurationId";
        String sourceAccountId = "111122223333";

        try {
            AWSS3Control s3ControlClient = AWSS3ControlClient.builder()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(US_WEST_2)
                    .build();

            final StorageLensConfiguration configuration =
                    s3ControlClient.getStorageLensConfiguration(new GetStorageLensConfigurationRequest()
                            .withAccountId(sourceAccountId)
                            .withConfigId(configurationId)
                    ).getStorageLensConfiguration();

            System.out.println(configuration.toString());
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

# Managing AWS resource tags with S3 Storage Lens
<a name="storage-lens-groups-manage-tags-dashboard"></a>

Each Amazon S3 Storage Lens dashboard is counted as an AWS resource with its own Amazon Resource Name (ARN). Therefore, when you configure your Storage Lens dashboard, you can optionally add AWS resource tags to the dashboard. You can add up to 50 tags for each Storage Lens dashboard. To create a Storage Lens dashboard with tags, you must have the following [S3 Storage Lens permissions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens_iam_permissions.html):
+ `s3:ListStorageLensConfigurations`
+ `s3:GetStorageLensConfiguration`
+ `s3:GetStorageLensConfigurationTagging`
+ `s3:PutStorageLensConfiguration`
+ ` s3:PutStorageLensConfigurationTagging`

You can use AWS resource tags to categorize resources according to department, line of business, or project. This is useful when you have many resources of the same type. By applying tags, you can quickly identify a specific S3 Storage Lens dashboard based on the tags that you've assigned to it. You can also use tags to track and allocate costs.

In addition, when you add an AWS resource tag to your Storage Lens dashboard, you activate [attribute-based access control (ABAC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html). ABAC is an authorization strategy that defines permissions based on attributes such as tags. You can also use conditions that specify resource tags in your IAM policies to [control access to AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-resources).

You can edit tag keys and values, and you can remove tags from a resource at any time. Also, be aware of the following limitations:
+ Tag keys and tag values are case sensitive.
+ If you add a tag that has the same key as an existing tag on that resource, the new value overwrites the old value.
+ If you delete a resource, any tags for the resource are also deleted. 
+ Don't include private or sensitive data in your AWS resource tags.
+ System tags (with tag keys that begin with `aws:`) aren't supported.
+ The length of each tag key can't exceed 128 characters. The length of each tag value can't exceed 256 characters.

The following examples demonstrate how to use AWS resource tags with Storage Lens dashboard.

**Topics**
+ [Add AWS resource tags to a Storage Lens dashboard](storage-lens-add-tags.md)
+ [Retrieve AWS resource tags for a Storage Lens dashboard](storage-lens-get-tags.md)
+ [Updating Storage Lens dashboard tags](storage-lens-update-tags.md)
+ [Deleting AWS resource tags from a S3 Storage Lens dashboard](storage-lens-dashboard-delete-tags.md)

# Add AWS resource tags to a Storage Lens dashboard
<a name="storage-lens-add-tags"></a>

The following examples demonstrate how to add AWS resource tags to an S3 Storage Lens dashboard. You can add resource tags by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), and AWS SDK for Java.

## Using the S3 console
<a name="storage-lens-add-tags-console"></a>

**To add AWS resource tags to a Storage Lens dashboard**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, navigate to **Storage Lens** on the left navigation panel.

1. Choose **Dashboards**.

1. Choose the radio button for the Storage Lens dashboard that you want to update. Then, choose **Edit**.

1. Under **General**, choose **Add tag**.

1. On the **Add tag** page, add the new key-value pair.
**Note**  
Adding a new tag with the same key as an existing tag overwrites the previous tag value.

1. (Optional) To add more than one new tag, choose **Add tag** again to continue adding new entries. You can add up to 50 AWS resource tags to your Storage Lens dashboard.

1. (Optional) If you want to remove a newly added entry, choose **Remove** next to the tag that you want to remove.

1. Choose **Save changes**.

## Using the AWS CLI
<a name="storage-lens-add-tags-cli"></a>

**Example**  
The following example command adds tags to a S3 Storage Lens dashboard configuration. To use these examples, replace the `user input placeholders` with your own information.  

```
aws s3control put-storage-lens-configuration-tagging --account-id=222222222222 --region=us-east-1 --config-id=your-configuration-id --tags=file://./tags.json
```

## Using the AWS SDK for Java
<a name="storage-lens-add-tags-sdk-java"></a>

The following example adds tags to an Amazon S3 Storage Lens configuration in SDK for Java. To use this example, replace the `user input placeholders` with your own information.

**Example – Add tags to an S3 Storage Lens configuration**  

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3control.AWSS3Control;
import com.amazonaws.services.s3control.AWSS3ControlClient;
import com.amazonaws.services.s3control.model.PutStorageLensConfigurationTaggingRequest;
import com.amazonaws.services.s3control.model.StorageLensTag;

import java.util.Arrays;
import java.util.List;

import static com.amazonaws.regions.Regions.US_WEST_2;

public class PutDashboardTagging {

    public static void main(String[] args) {
        String configurationId = "ConfigurationId";
        String sourceAccountId = "111122223333";

        try {
            List<StorageLensTag> tags = Arrays.asList(
                    new StorageLensTag().withKey("key-1").withValue("value-1"),
                    new StorageLensTag().withKey("key-2").withValue("value-2")
            );

            AWSS3Control s3ControlClient = AWSS3ControlClient.builder()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(US_WEST_2)
                    .build();

            s3ControlClient.putStorageLensConfigurationTagging(new PutStorageLensConfigurationTaggingRequest()
                    .withAccountId(sourceAccountId)
                    .withConfigId(configurationId)
                    .withTags(tags)
            );
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

# Retrieve AWS resource tags for a Storage Lens dashboard
<a name="storage-lens-get-tags"></a>

The following examples demonstrate how to retrieve AWS resource tags for a S3 Storage Lens dashboard. You can get resource tags by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), and AWS SDK for Java.

# Using the S3 console
<a name="storage-lens-get-tags-console"></a>

**To retrieve the AWS resource tags for a Storage Lens dashboard**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, navigate to **Storage Lens**.

1. Choose **Dashboards**.

1. Choose the radio button for the Storage Lens dashboard configuration that you want to view. Then, choose **View dashboard configuration**.

1. Under **Tags**, review the tags associated with the dashboard.

1. (Optional) If you want to add a new tag, choose **Edit**. Then, choose **Add tag**. On the **Add tag** page, add the new key-value pair.
**Note**  
Adding a new tag with the same key as an existing tag overwrites the previous tag value.

1. (Optional) If you want to remove a newly added entry, choose **Remove** next to the tag that you want to remove.

1. Choose **Save changes**.

## Using the AWS CLI
<a name="storage-lens-get-tags-cli"></a>

**Example**  
The following example command retrieves tags for a S3 Storage Lens dashboard configuration. To use these examples, replace the `user input placeholders` with your own information.  

```
aws s3control get-storage-lens-configuration-tagging --account-id=222222222222 --region=us-east-1 --config-id=your-configuration-id --tags=file://./tags.json
```

## Using the AWS SDK for Java
<a name="S3GetStorageLensConfigurationTaggingJava"></a>

**Example – Get tags for an S3 Storage Lens dashboard configuration**  
The following example shows you how to retrieve tags for an S3 Storage Lens dashboard configuration in SDK for Java. To use this example, replace the `user input placeholders` with your own information.  

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3control.AWSS3Control;
import com.amazonaws.services.s3control.AWSS3ControlClient;
import com.amazonaws.services.s3control.model.DeleteStorageLensConfigurationRequest;
import com.amazonaws.services.s3control.model.GetStorageLensConfigurationTaggingRequest;
import com.amazonaws.services.s3control.model.StorageLensTag;

import java.util.List;

import static com.amazonaws.regions.Regions.US_WEST_2;

public class GetDashboardTagging {

    public static void main(String[] args) {
        String configurationId = "ConfigurationId";
        String sourceAccountId = "111122223333";
        try {
            AWSS3Control s3ControlClient = AWSS3ControlClient.builder()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(US_WEST_2)
                    .build();

            final List<StorageLensTag> s3Tags = s3ControlClient
                    .getStorageLensConfigurationTagging(new GetStorageLensConfigurationTaggingRequest()
                            .withAccountId(sourceAccountId)
                            .withConfigId(configurationId)
                    ).getTags();

            System.out.println(s3Tags.toString());
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

# Updating Storage Lens dashboard tags
<a name="storage-lens-update-tags"></a>

The following examples demonstrate how to update Storage Lens dashboard tags by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), and AWS SDK for Java.

## Using the S3 console
<a name="storage-lens-dashboard-update-tags-console"></a>

**To update an AWS resource tag for a Storage Lens dashboard**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, navigate to **Storage Lens**.

1. Choose **Dashboards**.

1. Choose the radio button for the Storage Lens dashboard configuration that you want to view. Then, choose **View dashboard configuration**.

1. Under **Tags**, review the tags associated with the dashboard.

1. (Optional) If you want to add a new tag, choose **Edit**. Then, choose **Add tag**. On the **Add tag** page, add the new key-value pair.
**Note**  
Adding a new tag with the same key as an existing tag overwrites the previous tag value.

1. (Optional) If you want to remove a newly added entry, choose **Remove** next to the tag that you want to remove.

1. Choose **Save changes**.

## Using the AWS CLI
<a name="storage-lens-dashboard-update-tags-cli"></a>

**Example**  
The following example command adds or replaces tags on an existing Amazon S3 Storage Lens dashboard configuration. To use these examples, replace the `user input placeholders` with your own information.  

```
aws s3control put-storage-lens-configuration-tagging --account-id=111122223333 --config-id=example-dashboard-configuration-id --region=us-east-1 --config-id=your-configuration-id
```

## Using the AWS SDK for Java
<a name="storage-lens-dashboard-update-tags-sdk-java"></a>

The following AWS SDK for Java example updates the AWS resource tags on an existing Storage Lens dashboard. To use this example, replace the `user input placeholders` with your own information.

**Example – Update tags on an existing Storage Lens dashboard configuration**  

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3control.AWSS3Control;
import com.amazonaws.services.s3control.AWSS3ControlClient;
import com.amazonaws.services.s3control.model.PutStorageLensConfigurationTaggingRequest;
import com.amazonaws.services.s3control.model.StorageLensTag;

import java.util.Arrays;
import java.util.List;

import static com.amazonaws.regions.Regions.US_WEST_2;

public class PutDashboardTagging {

    public static void main(String[] args) {
        String configurationId = "ConfigurationId";
        String sourceAccountId = "111122223333";

        try {
            List<StorageLensTag> tags = Arrays.asList(
                    new StorageLensTag().withKey("key-1").withValue("value-1"),
                    new StorageLensTag().withKey("key-2").withValue("value-2")
            );

            AWSS3Control s3ControlClient = AWSS3ControlClient.builder()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(US_WEST_2)
                    .build();

            s3ControlClient.putStorageLensConfigurationTagging(new PutStorageLensConfigurationTaggingRequest()
                    .withAccountId(sourceAccountId)
                    .withConfigId(configurationId)
                    .withTags(tags)
            );
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

# Deleting AWS resource tags from a S3 Storage Lens dashboard
<a name="storage-lens-dashboard-delete-tags"></a>

The following examples demonstrate how to delete AWS resource tags from an existing Storage Lens dashboard. You can delete tags by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), and AWS SDK for Java.

## Using the S3 console
<a name="storage-lens-groups-delete-tags-console"></a>

**To delete AWS resource tags from an existing Storage Lens dashboard**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, navigate to **Storage Lens**.

1. Choose **Dashboards**.

1. Choose the radio button for the Storage Lens dashboard configuration that you want to view. Then, choose **View dashboard configuration**.

1. Under **Tags**, review the tags associated with the dashboard.

1. Choose **Remove** next to the tag that you want to remove.

1. Choose **Save changes**.

## Using the AWS CLI
<a name="storage-lens-dashboard-delete-tags-cli"></a>

The following AWS CLI command deletes AWS resource tags from an existing Storage Lens dashboard. To use this example command, replace the `user input placeholders` with your own information.

**Example**  

```
aws s3control delete-storage-lens-configuration-tagging --account-id=222222222222 --config-id=your-configuration-id --region=us-east-1
```

## Using the AWS SDK for Java
<a name="storage-lens-dashboard-delete-tags-sdk-java"></a>

The following AWS SDK for Java example deletes an AWS resource tag from the Storage Lens dashboard using the Amazon Resource Name (ARN) that you specify in account `111122223333`. To use this example, replace the `user input placeholders` with your own information.

**Example – Delete tags for an S3 Storage Lens dashboard configuration**  

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3control.AWSS3Control;
import com.amazonaws.services.s3control.AWSS3ControlClient;
import com.amazonaws.services.s3control.model.DeleteStorageLensConfigurationTaggingRequest;

import static com.amazonaws.regions.Regions.US_WEST_2;

public class DeleteDashboardTagging {

    public static void main(String[] args) {
        String configurationId = "ConfigurationId";
        String sourceAccountId = "111122223333";
        try {
            AWSS3Control s3ControlClient = AWSS3ControlClient.builder()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(US_WEST_2)
                    .build();

            s3ControlClient.deleteStorageLensConfigurationTagging(new DeleteStorageLensConfigurationTaggingRequest()
                    .withAccountId(sourceAccountId)
                    .withConfigId(configurationId)
            );
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

# Helper files for using Amazon S3 Storage Lens
<a name="S3LensHelperFilesCLI"></a>

Use the following JSON files and its key inputs for your examples.

## S3 Storage Lens example configuration in JSON
<a name="S3LensHelperFilesSampleConfigurationCLI"></a>

**Example `config.json`**  
The `config.json` file contains the details of a S3 Storage Lens Organizations-level *advanced metrics and recommendations* configuration. To use the following example, replace the `user input placeholders` with your own information.  
Additional charges apply for advanced metrics and recommendations. For more information, see [advanced metrics and recommendations](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens_basics_metrics_recommendations.html#storage_lens_basics_metrics_selection).

```
{
  "Id": "SampleS3StorageLensConfiguration", //Use this property to identify your S3 Storage Lens configuration.
  "AwsOrg": { //Use this property when enabling S3 Storage Lens for AWS Organizations.
    "Arn": "arn:aws:organizations::123456789012:organization/o-abcdefgh"
  },
  "AccountLevel": {
    "ActivityMetrics": {
      "IsEnabled":true
    },
    "AdvancedCostOptimizationMetrics": {
      "IsEnabled":true
    },
    "AdvancedDataProtectionMetrics": {
      "IsEnabled":true
    },
    "DetailedStatusCodesMetrics": {
      "IsEnabled":true
    },
    "BucketLevel": {
      "ActivityMetrics": {
        "IsEnabled":true
      },
      "AdvancedDataProtectionMetrics": {
      "IsEnabled":true
      },
      "AdvancedCostOptimizationMetrics": {
        "IsEnabled":true
      },
      "DetailedStatusCodesMetrics": {
        "IsEnabled":true
      },
      "PrefixLevel":{
        "StorageMetrics":{
          "IsEnabled":true,
          "SelectionCriteria":{
            "MaxDepth":5,
            "MinStorageBytesPercentage":1.25,
            "Delimiter":"/"
          }
        }
      }
    }
  },
  "Exclude": { //Replace with "Include" if you prefer to include Regions.
    "Regions": [
      "eu-west-1"
    ],
    "Buckets": [ //This attribute is not supported for AWS Organizations-level configurations.
      "arn:aws:s3:::amzn-s3-demo-source-bucket"
    ]
  },
  "IsEnabled": true, //Whether the configuration is enabled
  "DataExport": { //Details about the metrics export
    "S3BucketDestination": {
      "OutputSchemaVersion": "V_1",
      "Format": "CSV", //You can add "Parquet" if you prefer.
      "AccountId": "111122223333",
      "Arn": "arn:aws:s3:::
amzn-s3-demo-destination-bucket", // The destination bucket for your metrics export must be in the same Region as your S3 Storage Lens configuration. 
      "Prefix": "prefix-for-your-export-destination",
      "Encryption": {
        "SSES3": {}
      }
    },
    "CloudWatchMetrics": {
      "IsEnabled": true
    }
  }
}
```

## S3 Storage Lens example configuration with Storage Lens groups in JSON
<a name="StorageLensGroupsHelperFilesCLI"></a>

**Example `config.json`**  

The `config.json` file contains the details that you want to apply to your Storage Lens configuration when using Storage Lens groups. To use the example, replace the `user input placeholders` with your own information.

To attach all Storage Lens groups to your dashboard, update your Storage Lens configuration with the following syntax:

```
{
  "Id": "ExampleS3StorageLensConfiguration",
  "AccountLevel": {
    "ActivityMetrics": {
      "IsEnabled":true
    },
    "AdvancedCostOptimizationMetrics": {
      "IsEnabled":true
    },
    "AdvancedDataProtectionMetrics": {
      "IsEnabled":true
    },
    "BucketLevel": {
      "ActivityMetrics": {
      "IsEnabled":true
      },
    "StorageLensGroupLevel": {},
  "IsEnabled": true
}
```

To include only two Storage Lens groups in your Storage Lens dashboard configuration (*slg-1* and *slg-2*), use the following syntax:

```
{
  "Id": "ExampleS3StorageLensConfiguration",
  "AccountLevel": {
    "ActivityMetrics": {
      "IsEnabled":true
    },
    "AdvancedCostOptimizationMetrics": {
      "IsEnabled":true
    },
    "AdvancedDataProtectionMetrics": {
      "IsEnabled":true
    },
    "BucketLevel": {
      "ActivityMetrics": {
      "IsEnabled":true
      },
   "StorageLensGroupLevel": {
        "SelectionCriteria": {
            "Include": [
                "arn:aws:s3:us-east-1:111122223333:storage-lens-group/slg-1",
                "arn:aws:s3:us-east-1:444455556666:storage-lens-group/slg-2"
            ]
    },
  "IsEnabled": true
}
```

To exclude only certain Storage Lens groups from being attached to your dashboard configuration, use the following syntax:

```
{
  "Id": "ExampleS3StorageLensConfiguration",
  "AccountLevel": {
    "ActivityMetrics": {
      "IsEnabled":true
    },
    "AdvancedCostOptimizationMetrics": {
      "IsEnabled":true
    },
    "AdvancedDataProtectionMetrics": {
      "IsEnabled":true
    },
    "BucketLevel": {
      "ActivityMetrics": {
      "IsEnabled":true
      },
   "StorageLensGroupLevel": {
        "SelectionCriteria": {
            "Exclude": [
                "arn:aws:s3:us-east-1:111122223333:storage-lens-group/slg-1",
                "arn:aws:s3:us-east-1:444455556666:storage-lens-group/slg-2"
            ]
    },
  "IsEnabled": true
}
```

## S3 Storage Lens example tags configuration in JSON
<a name="S3LensHelperFilesSampleConfigurationTagsCLI"></a>

**Example `tags.json`**  
The `tags.json` file contains the tags that you want to apply to your S3 Storage Lens configuration. To use this example, replace the `user input placeholders` with your own information.  

```
[
    {
        "Key": "key1",
        "Value": "value1"
    },
    {
        "Key": "key2",
        "Value": "value2"
    }
]
```

## S3 Storage Lens example configuration IAM permissions
<a name="S3LensHelperFilesSampleConfigurationIAMPermissionsCLI"></a>

**Example `permissions.json` – Specific dashboard name**  
This example policy shows an S3 Storage Lens IAM `permissions.json` file with a specific dashboard name specified. Replace *`value1`*, `us-east-1`, `your-dashboard-name`, and `example-account-id` with your own values.    
****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetStorageLensConfiguration",
                "s3:DeleteStorageLensConfiguration",
                "s3:PutStorageLensConfiguration"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/key1": "value1"
                }
            },
            "Resource": "arn:aws:s3:us-east-1:111122223333:storage-lens/your-dashboard-name"
        }
    ]
}
```

**Example `permissions.json` – No specific dashboard name**  
This example policy shows an S3 Storage Lens IAM `permissions.json` file without a specific dashboard name specified. Replace *`value1`*, `us-east-1`, and `example-account-id` with your own values.    
****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetStorageLensConfiguration",
                "s3:DeleteStorageLensConfiguration",
                "s3:PutStorageLensConfiguration"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/key1": "value1"
                }
            },
            "Resource": "arn:aws:s3:us-east-1:111122223333:storage-lens/*"
        }
    ]
}
```

# Viewing metrics with Amazon S3 Storage Lens
<a name="storage_lens_view_metrics"></a>

S3 Storage Lens aggregates your metrics and displays the information in the **Account snapshot** section on the Amazon S3 console **Buckets** page. S3 Storage Lens also provides an interactive dashboard that you can use to visualize insights and trends, flag outliers, and receive recommendations for optimizing storage costs and applying data protection best practices. Your dashboard has drill-down options to generate and visualize insights at the organization, account, AWS Region, storage class, bucket, prefix, or Storage Lens group level. You can also send a daily metrics report in CSV or Parquet format to a general purpose S3 bucket or export the metrics directly to an AWS-managed S3 table bucket.

By default, all dashboards are configured with free metrics, which include metrics that you can use to understand usage and activity across your S3 storage, optimize your storage costs, and implement data-protection and access-management best practices. Free metrics are aggregated down to the bucket level. With free metrics, data is available for queries for up to 14 days.

Advanced metrics and recommendations include the following additional features that you can use to gain further insight into usage and activity across your storage and best practices for optimizing your storage:
+ Contextual recommendations (available only in the dashboard)
+ Advanced metrics (including activity metrics aggregated by bucket)
+ Prefix aggregation
+ Storage Lens group aggregation
+ Storage Lens group aggregation
+ Amazon CloudWatch publishing

Advanced metrics data is available for queries for 15 months. There are additional charges for using S3 Storage Lens with advanced metrics. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing). For more information about free and advanced metrics, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).

**Topics**
+ [Viewing S3 Storage Lens metrics on the dashboards](storage_lens_view_metrics_dashboard.md)
+ [Viewing Amazon S3 Storage Lens metrics using a data export](storage_lens_view_metrics_export.md)
+ [Monitor S3 Storage Lens metrics in CloudWatch](storage_lens_view_metrics_cloudwatch.md)
+ [Amazon S3 Storage Lens metrics use cases](storage-lens-use-cases.md)

# Viewing S3 Storage Lens metrics on the dashboards
<a name="storage_lens_view_metrics_dashboard"></a>

In the Amazon S3 console, S3 Storage Lens provides an interactive default dashboard that you can use to visualize insights and trends in your data. You can also use this dashboard to flag outliers and receive recommendations for optimizing storage costs and applying data-protection best practices. Your dashboard has drill-down options to generate insights at the account, bucket, AWS Region, prefix, or Storage Lens group level. If you've enabled S3 Storage Lens to work with AWS Organizations, you can also generate insights at the organization level (such as data for all accounts that are part of your AWS Organizations hierarchy). The dashboard always loads for the latest date that has metrics available.

The S3 Storage Lens default dashboard on the console is named **default-account-dashboard**. Amazon S3 pre-configures this dashboard to visualize the summarized insights and trends for your entire account and updates them daily in the S3 console. You can't modify the configuration scope of the default dashboard, but you can upgrade the metrics selection from the free metrics to the paid advanced metrics and recommendations. With advanced metrics and recommendations, you can access additional metrics and features. These features include advanced metric categories, prefix-level aggregation, contextual recommendations, and Amazon CloudWatch publishing.

You can disable the default dashboard, but you can't delete it. If you disable your default dashboard, it is no longer updated. You also will no longer receive any new daily metrics in S3 Storage Lens or in the **Account snapshot** section on the **Buckets** page. You can still see historic data in the default dashboard until the 14-day period for data queries expires. This period is 15 months if you've enabled advanced metrics and recommendations. To access this data, you can re-enable the default dashboard within the expiration period.

You can create additional S3 Storage Lens dashboards and scope them by AWS Regions, S3 buckets, or accounts. You can also scope your dashboards by organization if you've enabled Storage Lens to work with AWS Organizations. When you create or edit an S3 Storage Lens dashboard, you define your dashboard scope and metrics selection. 

 

You can disable or delete any additional dashboards that you create. 
+ If you disable a dashboard, it is no longer updated, and you will no longer receive any new daily metrics. You can still see historic data for free metrics until the 14-day expiration period. If you enabled advanced metrics and recommendations for that dashboard, this period is 15 months. To access this data, you can re-enable the dashboard within the expiration period. 
+ If you delete your dashboard, you lose all your dashboard configuration settings. You will no longer receive any new daily metrics, and you also lose access to the historical data associated with that dashboard. If you want to access the historic data for a deleted dashboard, you must create another dashboard with the same name in the same home Region.

**Topics**
+ [Viewing an Amazon S3 Storage Lens dashboard](#storage_lens_console_viewing)
+ [Understanding your S3 Storage Lens dashboard](#storage_lens_console_viewing_dashboard)

## Viewing an Amazon S3 Storage Lens dashboard
<a name="storage_lens_console_viewing"></a>

The following procedure shows how to view an S3 Storage Lens dashboard in the S3 console. For use-case based walkthroughs that show how to use your dashboard to optimize costs, implement best practices, and improve the performance of applications that access your S3 buckets, see [Amazon S3 Storage Lens metrics use cases](storage-lens-use-cases.md).

**Note**  
You can't use your account's root user credentials to view Amazon S3 Storage Lens dashboards. To access S3 Storage Lens dashboards, you must grant the required AWS Identity and Access Management (IAM) permissions to a new or existing IAM user. Then, sign in with those user credentials to access S3 Storage Lens dashboards. For more information, see [Setting Amazon S3 Storage Lens permissions](storage_lens_iam_permissions.md) and [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to view.

   Your dashboard opens in S3 Storage Lens. The **Snapshot for *date*** section shows the latest date that S3 Storage Lens has collected metrics for. Your dashboard always loads the latest date that has metrics available.

1. (Optional) To change the date for your S3 Storage Lens dashboard, in the top-right date selector, choose a new date.

1. (Optional) To apply temporary filters to further limit the scope of your dashboard data, do the following:

   1. Expand the **Filters** section.

   1. To filter by specific accounts, AWS Regions, storage classes, buckets, prefixes, or Storage Lens groups, choose the options to filter by.
**Note**  
The **Prefixes** filter and the **Storage Lens groups** filter can’t be applied at the same time.

   1. To update a filter, choose **Apply**.

   1. To remove a filter, click on the **X** next to the filter.

1. In any section in your S3 Storage Lens dashboard, to see data for a specific metric, for **Metric**, choose the metric name.

1. In any chart or visualization in your S3 Storage Lens dashboard, you can drill down into deeper levels of aggregation by using the **Accounts**, **AWS Regions**, **Storage classes**, **Buckets**, **Prefixes**, or **Storage Lens groups** tabs. For an example, see [Uncover cold Amazon S3 buckets](storage-lens-optimize-storage.md#uncover-cold-buckets).

## Understanding your S3 Storage Lens dashboard
<a name="storage_lens_console_viewing_dashboard"></a>

Your S3 Storage Lens dashboard has a primary **Overview** tab, and up to five additional tabs that represent each aggregation level:
+ **Accounts**
+ **AWS Regions**
+ **Storage classes**
+ **Buckets**
+ **Prefixes**
+ **Storage Lens groups**

On the **Overview** tab, your dashboard data is aggregated into three different sections: **Snapshot for *date***, **Trends and distributions**, and **Top N overview**. 

For more information about your S3 Storage Lens dashboard, see the following sections.

### Snapshot
<a name="storage-lens-snapshot"></a>

The **Snapshot for *date*** section shows summary metrics that S3 Storage Lens has aggregated for the date selected. These summary metrics include the following metrics:
+ **Total storage** – The total amount of storage used in bytes.
+ **Object count** – The total number of objects in your AWS account.
+ **Average object size** – The average object size.
+ **Active buckets** – The total number of active buckets in active usage with storage > 0 bytes in your account.
+ **Accounts** – The number of accounts whose storage is in scope. This value is **1** unless you are using AWS Organizations and your S3 Storage Lens has trusted access with a valid service-linked role. For more information, see [Using service-linked roles for Amazon S3 Storage Lens](using-service-linked-roles.md). 
+ **Buckets** – The total number of buckets in your account.

**Metric data**  
For each metric that appears in the snapshot, you can see the following data:
+ **Metric name** – The name of the metric.
+ **Metric category** – The category that the metric is organized into.
+ **Total for *date*** – The total count for the date selected.
+ **% change** – The percentage change from the last snapshot date.
+ **30-day trend** – A trend-line showing the changes for the metric over a 30-day period.
+ **Recommendation** – A contextual recommendation based on the data that's provided in the snapshot. Recommendations are available with advanced metrics and recommendations. For more information, see [Recommendations](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_recommendations).

**Metrics categories**  
You can optionally update your dashboard **Snapshot for *date*** section to display metrics for other categories. If you want to see snapshot data for additional metrics, you can choose from the following **Metrics categories**:
+ **Cost optimization** 
+ **Data protection**
+ **Activity** (available with advanced metrics)
+ **Access management**
+ **Performance**
+ **Events**

The **Snapshot for *date*** section displays only a selection of metrics for each category. To see all metrics for a specific category, choose the metric in the **Trends and distributions** or **Top N overview** sections. For more information about metric categories, see [Metrics categories](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_types). For a complete list of S3 Storage Lens metrics, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).

### Trends and distributions
<a name="storage-lens-trends"></a>

The second section of the **Overview** tab is **Trends and distributions**. In the **Trends and distributions** section, you can choose two metrics to compare over a date range that you define. The **Trends and distributions** section shows the relationship between two metrics over time. This section displays charts that you can use to see the **Storage class** and **Region** distribution between the two trends that you are tracking. You can optionally drill down into a data point in one of the charts for deeper analysis.

 For a walkthrough that uses the **Trends and distributions** section, see [Identify buckets that don't use server-side encryption with AWS KMS for default encryption (SSE-KMS)](storage-lens-data-protection.md#storage-lens-sse-kms).

### Top N overview
<a name="storage-lens-top-n"></a>

The third section of the S3 Storage Lens dashboard is **Top N overview** (sorted in ascending or descending order). This section displays your selected metrics across the top number of accounts, AWS Regions, buckets, prefixes, or Storage Lens groups. If you enabled S3 Storage Lens to work with AWS Organizations, you can also see your selected metrics across your organization.

For a walkthrough that uses the **Top N overview** section, see [Identify your largest S3 buckets](storage-lens-optimize-storage.md#identify-largest-s3-buckets).

### Drill down and analyze by options
<a name="storage-lens-drill-down"></a>

To provide a fluid experience for analysis, the S3 Storage Lens dashboard provides an action menu, which appears when you choose any chart value. To use this menu, choose any chart value to see the associated metrics values, and then choose from two options in the box that appears:
+ The **Drill down** action applies the selected value as a filter across all tabs of your dashboard. You can then drill down into that value for deeper analysis.
+ The **Analyze by** action takes you to the **Dimension** tab that you select and applies that tab value as a filter. These tabs include **Accounts**, **AWS Regions**, **Storage classes**, **Buckets**, **Prefixes** (for dashboards that have **Advanced metrics** and **Prefix aggregation** enabled), and **Storage Lens groups** (for dashboards that have **Advanced metrics** and **Storage Lens group aggregation** enabled). With **Analyze by**, you can view the data in the context of the new dimension for deeper analysis.

The **Drill down** and **Analyze by** actions might be disabled if the outcome would yield illogical results or would not have any value. Both the **Drill down** and **Analyze by** actions apply filters on top of any existing filters across all tabs of the dashboard. You can also remove the filters as needed.

### Tabs
<a name="storage-lens-dimension-tabs"></a>

The dimension-level tabs provide a detailed view of all values within a particular dimension. For example, the **AWS Regions** tab shows metrics for all AWS Regions, and the **Buckets** tab shows metrics for all buckets. Each dimension tab contains an identical layout consisting of four sections:
+ A trend chart that displays your top *N* items within the dimension over the last 30 days for the selected metric. By default, this chart displays the top 10 items, but you can decrease it to at least 3 items or increase it up to 50 items.
+ A histogram chart that shows a vertical bar chart for the selected date and metric. If you have a large number of items to display in this chart, you might need to scroll horizontally.
+ A bubble analysis chart that plots all items within the dimension. This chart represents the first metric on the x axis and the second metric on the y axis. The third metric is represented by the size of the bubble. 
+ A metric grid view that contains each item in the dimension listed in rows. The columns represent each available metric, arranged in metrics category tabs for easier navigation. 

# Viewing Amazon S3 Storage Lens metrics using a data export
<a name="storage_lens_view_metrics_export"></a>

Amazon S3 Storage Lens metrics are generated daily in CSV or Apache Parquet-formatted metrics export files and placed in an S3 general purpose bucket in your account. From there, you can ingest the metrics export into the analytics tools of your choice, such as Amazon Quick and Amazon Athena, where you can analyze storage usage and activity trends. You can also send daily metric exports to an AWS-managed S3 table bucket for immediate querying, using AWS analytics services or third-party tools.

**Topics**
+ [Using an AWS KMS key to encrypt your metrics exports](storage_lens_encrypt_permissions.md)
+ [What is an S3 Storage Lens export manifest?](storage_lens_whatis_metrics_export_manifest.md)
+ [Understanding the Amazon S3 Storage Lens export schemas](storage_lens_understanding_metrics_export_schema.md)

# Using an AWS KMS key to encrypt your metrics exports
<a name="storage_lens_encrypt_permissions"></a>

To grant Amazon S3 Storage Lens permission to encrypt your metrics exports by using a customer managed key, you must use a key policy. To update your key policy so that you can use a KMS key to encrypt your S3 Storage Lens metrics exports, follow these steps. 

**To grant S3 Storage Lens permissions to encrypt data by using your KMS key**

1. Sign into the AWS Management Console by using the AWS account that owns the customer managed key.

1. Open the AWS KMS console at [https://console.aws.amazon.com/kms](https://console.aws.amazon.com/kms).

1. To change the AWS Region, use the **Region selector** in the upper-right corner of the page.

1. In the left navigation pane, choose **Customer managed keys**. 

1. Under **Customer managed keys**, choose the key that you want to use to encrypt the metrics exports. AWS KMS keys are Region-specific and must be in the same Region as the metrics export destination S3 bucket.

1. Under **Key policy**, choose **Switch to policy view**. 

1. To update the key policy, choose **Edit**. 

1. Under **Edit key policy**, add the following key policy to the existing key policy. To use this policy, replace the ` user input placeholders ` with your information.

   ```
   {
       "Sid": "Allow Amazon S3 Storage Lens use of the KMS key",
        "Effect": "Allow",
       "Principal": {
           "Service": "storage-lens.s3.amazonaws.com"
       },
       "Action": [
           "kms:GenerateDataKey"
       ],
       "Resource": "*",
       "Condition": {
          "StringEquals": {
              "aws:SourceArn": "arn:aws:s3:us-east-1:        source-account-id:storage-lens/your-dashboard-name",
              "aws:SourceAccount": "source-account-id"
           }
        }
   }
   ```

1. Choose **Save changes**. 

For more information about creating customer managed keys and using key policies, see the following topics in the *AWS Key Management Service Developer Guide*: 
+  [Create a KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) 
+  [Key policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) 

You can also use the AWS KMS `PUT` key policy API operation ([https://docs.aws.amazon.com/kms/latest/APIReference/API_PutKeyPolicy.html](https://docs.aws.amazon.com/kms/latest/APIReference/API_PutKeyPolicy.html)) to copy the key policy to the customer managed keys that you want to use to encrypt the metrics exports by using the REST API, AWS CLI, and SDKs.

# What is an S3 Storage Lens export manifest?
<a name="storage_lens_whatis_metrics_export_manifest"></a>

S3 Storage Lens daily metrics exports in general-purpose buckets may be split into multiple files due to the large amount of data aggregated. The manifest file `manifest.json` describes where the metrics export files for that day are located. Whenever a new export is delivered, it's accompanied by a new manifest. Each manifest contained in the `manifest.json` file provides metadata and other basic information about the export. 

The manifest information includes the following properties:
+  `sourceAccountId` – The account ID of the configuration owner.
+  `configId` – A unique identifier for the dashboard.
+  `destinationBucket` – The destination bucket Amazon Resource Name (ARN) that the metrics export is placed in.
+  `reportVersion` – The version of the export.
+  `reportDate` – The date of the report.
+  `reportFormat` – The format of the report.
+  `reportSchema` – The schema of the report.
+  `reportFiles` – The actual list of the export report files that are in the destination bucket.

Manifest destination path example:

```
user-defined-prefix/StorageLens/111122223333/example-dashboard-configuration-id/V_1/manifests/dt=2025-03-18/manifest.json
```

The following example shows a `manifest.json` file for a CSV-formatted Storage Lens default metrics report:

```
{  
   "sourceAccountId": "111122223333",  
   "configId": "example-dashboard-configuration-id",  
   "destinationBucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",  
   "reportVersion": "V_1",  
   "reportDate": "2025-07-15",  
   "reportFormat": "CSV",  
   "reportSchema": "version_number,configuration_id,report_date,aws_account_number,aws_region,storage_class,record_type,record_value,bucket_name,metric_name,metric_value",  
   "reportFiles": [  
        {  
            "key": "DestinationPrefix/StorageLens/111122223333/example-dashboard-configuration-id/V_1/reports/dt=2025-07-15/12345678-1234-1234-1234-123456789012.csv",  
            "size": 1603959,  
            "md5Checksum": "2177e775870def72b8d84febe1ad3574"  
        }  
   ]  
}
```

The following example shows a `manifest.json` file for a CSV-formatted Storage Lens expanded prefixes metrics report:

```
{  
   "sourceAccountId": "111122223333",  
   "configId": "example-dashboard-configuration-id",  
   "destinationBucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",   
   "reportVersion": "V_1",  
   "reportDate": "2025-11-03",  
   "reportFormat": "CSV",  
   "reportSchema": "version_number,configuration_id,report_date,aws_account_number,aws_region,storage_class,record_type,record_value,bucket_name,metric_name,metric_value",  
   "reportFiles": [  
        {  
            "key": "DestinationPrefix/StorageLensExpandedPrefixes/111122223333/example-dashboard-configuration-id/V_1/reports/dt=2025-11-03/EXAMPLE1234-56ab-78cd-90ef-EXAMPLE11111.csv",  
            "size": 1603959,  
            "md5Checksum": "2177e775870def72b8d84febe1ad3574"  
        }  
      ]  
}
```

The following example shows a `manifest.json` file for a Parquet-formatted Storage Lens default metrics report:

```
{  
   "sourceAccountId": "111122223333",  
   "configId": "example-dashboard-configuration-id",  
   "destinationBucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",  
   "reportVersion": "V_1",  
   "reportDate": "2025-11-03",  
   "reportFormat": "Parquet",  
   "reportSchema": "message s3.storage.lens { required string version_number; required string configuration_id; required string report_date; required string aws_account_number; required string aws_region; required string storage_class; required string record_type; required string record_value; required string bucket_name; required string metric_name; required long metric_value; }",  
   "reportFiles": [  
      {  
         "key": "DestinationPrefix/StorageLens/111122223333/example-dashboard-configuration-id/V_1/reports/dt=2025-11-03/bd23de7c-b46a-4cf4-bcc5-b21aac5be0f5.par",  
         "size": 14714,  
         "md5Checksum": "b5c741ee0251cd99b90b3e8eff50b944"  
      }  
   ]  
}
```

The following example shows a `manifest.json` file for a Parquet-formatted Storage Lens expanded prefixes metrics report:

```
{  
   "sourceAccountId": "111122223333",  
   "configId": "example-dashboard-configuration-id",  
   "destinationBucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",  
   "reportVersion": "V_1",  
   "reportDate": "2025-11-03",  
   "reportFormat": "Parquet",  
   "reportSchema": "message s3.storage.lens { required string version_number; required string configuration_id; required string report_date; required string aws_account_number; required string aws_region; required string storage_class; required string record_type; required string record_value; required string bucket_name; required string metric_name; required long metric_value; }",  
   "reportFiles": [  
      {  
         "key": "DestinationPrefix/StorageLensExpandedPrefixes/111122223333/example-dashboard-configuration-id/V_1/reports/dt=2025-11-03/bd23de7c-b46a-4cf4-bcc5-b21aac5be0f5.par",  
         "size": 14714,  
         "md5Checksum": "b5c741ee0251cd99b90b3e8eff50b944"  
      }  
   ]  
}
```

You can configure your metrics export to be generated as part of your dashboard configuration in the Amazon S3 console or by using the Amazon S3 REST API, AWS CLI, and SDKs.

# Understanding the Amazon S3 Storage Lens export schemas
<a name="storage_lens_understanding_metrics_export_schema"></a>

S3 Storage Lens export schemas vary depending on your export destination. Choose the appropriate schema based on whether you're exporting to S3 general purpose buckets or S3 tables.

**Topics**
+ [Export schema for S3 general purpose buckets](#storage_lens_general_purpose_bucket_schema)
+ [Export schemas for S3 tables](#storage_lens_s3_tables_schema)

## Export schema for S3 general purpose buckets
<a name="storage_lens_general_purpose_bucket_schema"></a>

The following table contains the schema of your S3 Storage Lens metrics export when exporting to S3 general purpose buckets.


| Attribute name  | Data type | Column name | Description | 
| --- | --- | --- | --- | 
|  VersionNumber  | String |  version\$1number  | The version of the S3 Storage Lens metrics being used. | 
|  ConfigurationId  | String |  configuration\$1id  | The  configuration\$1id of your S3 Storage Lens configuration. | 
|  ReportDate  | String  |  report\$1date  | The date that the metrics were tracked. | 
|  AwsAccountNumber  |  String  |  aws\$1account\$1number  | Your AWS account number. | 
|  AwsRegion  |  String  |  aws\$1region  | The AWS Region for which the metrics are being tracked. | 
|  StorageClass  |  String  |  storage\$1class  | The storage class of the bucket in question. | 
|  RecordType  |  ENUM  |  record\$1type  |  The type of artifact that is being reported (ACCOUNT, BUCKET, or PREFIX).  | 
|  RecordValue  |  String  |  record\$1value  | The value of the RecordType artifact.  The `record_value` is URL-encoded.   | 
|  BucketName  |  String  |  bucket\$1name  | The name of the bucket that is being reported. | 
|  MetricName  |  String  |  metric\$1name  | The name of the metric that is being reported. | 
|  MetricValue  |  Long  |  metric\$1value  | The value of the metric that is being reported. | 

### Example of an S3 Storage Lens metrics export
<a name="storage_lens_sample_metrics_export"></a>

The following is an example of an S3 Storage Lens metrics export based on this schema. 

**Note**  
You can identify metrics for Storage Lens groups by looking for the `STORAGE_LENS_GROUP_BUCKET` or `STORAGE_LENS_GROUP_ACCOUNT` values in the `record_type` column. The `record_value` column will display the Amazon Resource Name (ARN) for the Storage Lens group, for example, `arn:aws:s3:us-east-1:123456789012:storage-lens-group/slg-1`. 

![\[An example S3 Storage Lens metrics export file.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/sample_storage_lens_export.png)


The following is an example of an S3 Storage Lens metrics export with Storage Lens groups data.

![\[An example S3 Storage Lens metrics export file with Storage Lens groups data.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/StorageLensGroups_metricsexport.png)


## Export schemas for S3 tables
<a name="storage_lens_s3_tables_schema"></a>

When exporting S3 Storage Lens metrics to S3 tables, the data is organized into three separate table schemas: storage metrics, bucket property metrics, and activity metrics.

**Topics**
+ [Storage metrics table schema](#storage_lens_s3_tables_storage_metrics)
+ [Bucket property metrics table schema](#storage_lens_s3_tables_bucket_property_metrics)
+ [Activity metrics table schema](#storage_lens_s3_tables_activity_metrics)

### Storage metrics table schema
<a name="storage_lens_s3_tables_storage_metrics"></a>


| Name | Type | Description | 
| --- | --- | --- | 
|  version\$1number  | string | Version identifier of the schema of the table | 
|  configuration\$1id  | string | S3 Storage Lens configuration name | 
|  report\$1time  | timestamptz | Date the S3 Storage Lens report refers to | 
|  aws\$1account\$1id  | string | Account id the entry refers to | 
|  aws\$1region  | string | Region | 
|  storage\$1class  | string | Storage Class | 
|  record\$1type  | string | Type of record, related to what is the level of aggregation of data. Values: ACCOUNT, BUCKET, PREFIX, LENS GROUP.  | 
|  record\$1value  | string | Disambiguator for record types that have more than one record under them. It is used to reference the prefix | 
|  bucket\$1name  | string | Bucket name | 
|  object\$1count  | long | Number of objects stored for the current referenced item | 
|  storage\$1bytes  | DECIMAL(38,0) | Number of bytes stored for the current referenced item | 
|  bucket\$1key\$1sse\$1kms\$1object\$1count  | long | Number of objects encrypted with a customer managed key stored for the current referenced item | 
|  bucket\$1key\$1sse\$1kms\$1storage\$1bytes  | DECIMAL(38,0) | Number of bytes encrypted with a customer managed key stored for the current referenced item | 
|  current\$1version\$1object\$1count  | long | Number of current version objects stored for the current referenced item | 
|  current\$1version\$1storage\$1bytes  | DECIMAL(38,0) | Number of current version bytes stored for the current referenced item | 
|  delete\$1marker\$1object\$1count  | long | Number of delete marker objects stored for the current referenced item | 
|  delete\$1marker\$1storage\$1bytes  | DECIMAL(38,0) | Number of delete marker bytes stored for the current referenced item | 
|  encrypted\$1object\$1count  | long | Number of encrypted objects stored for the current referenced item | 
|  encrypted\$1storage\$1bytes  | DECIMAL(38,0) | Number of encrypted bytes stored for the current referenced item | 
|  incomplete\$1mpu\$1object\$1older\$1than\$17\$1days\$1count  | long | Number of incomplete multipart upload objects older than 7 days stored for the current referenced item | 
|  incomplete\$1mpu\$1storage\$1older\$1than\$17\$1days\$1bytes  | DECIMAL(38,0) | Number of incomplete multipart upload bytes stored older than 7 days for the current referenced item | 
|  incomplete\$1mpu\$1object\$1count  | long | Number of incomplete multipart upload objects stored for the current referenced item | 
|  incomplete\$1mpu\$1storage\$1bytes  | DECIMAL(38,0) | Number of incomplete multipart upload bytes stored for the current referenced item | 
|  non\$1current\$1version\$1object\$1count  | long | Number of non-current version objects stored for the current referenced item | 
|  non\$1current\$1version\$1storage\$1bytes  | DECIMAL(38,0) | Number of non-current version bytes stored for the current referenced item | 
|  object\$1lock\$1enabled\$1object\$1count  | long | Number of objects stored for for objects with lock enabled in the current referenced item | 
|  object\$1lock\$1enabled\$1storage\$1bytes  | DECIMAL(38,0) | Number of bytes stored for objects with lock enabled in the current referenced item | 
|  replicated\$1object\$1count  | long | Number of objects replicated for the current referenced item | 
|  replicated\$1storage\$1bytes  | DECIMAL(38,0) | Number of bytes replicated for the current referenced item | 
|  replicated\$1object\$1source\$1count  | long | Number of objects replicated as source stored for the current referenced item | 
|  replicated\$1storage\$1source\$1bytes  | DECIMAL(38,0) | Number of bytes replicated as source for the current referenced item | 
|  sse\$1kms\$1object\$1count  | long | Number of objects encrypted with SSE key stored for the current referenced item | 
|  sse\$1kms\$1storage\$1bytes  | DECIMAL(38,0) | Number of bytes encrypted with SSE key stored for the current referenced item | 
|  object\$10kb\$1count  | long | Number of objects with sizes equal to 0KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$10kb\$1to\$1128kb\$1count  | long | Number of objects with sizes greater than 0KB and less than equal to 128KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$1128kb\$1to\$1256kb\$1count  | long | Number of objects with sizes greater than 128KB and less than equal to 256KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$1256kb\$1to\$1512kb\$1count  | long | Number of objects with sizes greater than 256KB and less than equal to 512KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$1512kb\$1to\$11mb\$1count  | long | Number of objects with sizes greater than 512KB and less than equal to 1MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$11mb\$1to\$12mb\$1count  | long | Number of objects with sizes greater than 1MB and less than equal to 2MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$12mb\$1to\$14mb\$1count  | long | Number of objects with sizes greater than 2MB and less than equal to 4MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$14mb\$1to\$18mb\$1count  | long | Number of objects with sizes greater than 4MB and less than equal to 8MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$18mb\$1to\$116mb\$1count  | long | Number of objects with sizes greater than 8MB and less than equal to 16MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$116mb\$1to\$132mb\$1count  | long | Number of objects with sizes greater than 16MB and less than equal to 32MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$132mb\$1to\$164mb\$1count  | long | Number of objects with sizes greater than 32MB and less than equal to 64MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$164mb\$1to\$1128mb\$1count  | long | Number of objects with sizes greater than 64MB and less than equal to 128MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$1128mb\$1to\$1256mb\$1count  | long | Number of objects sizes greater than 128MB and less than equal to 256MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$1256mb\$1to\$1512mb\$1count  | long | Number of objects sizes greater than 256MB and less than equal to 512MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$1512mb\$1to\$11gb\$1count  | long | Number of objects sizes greater than 512MB and less than equal to 1GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$11gb\$1to\$12gb\$1count  | long | Number of objects sizes greater than 1GB and less than equal to 2GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$12gb\$1to\$14gb\$1count  | long | Number of objects sizes greater than 2GB and less than equal to 4GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$1larger\$1than\$14gb\$1count  | long | Number of objects sizes greater than 4GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 

### Bucket property metrics table schema
<a name="storage_lens_s3_tables_bucket_property_metrics"></a>


| Name | Type | Description | 
| --- | --- | --- | 
|  version\$1number  | string | Version identifier of the schema of the table | 
|  configuration\$1id  | string | S3 Storage Lens configuration name | 
|  report\$1time  | timestamptz | Date the S3 Storage Lens report refers to | 
|  aws\$1account\$1id  | string | Account id the entry refers to | 
|  record\$1type  | string | Type of record, related to what is the level of aggregation of data. Values: ACCOUNT, BUCKET, PREFIX, LENS GROUP.  | 
|  record\$1value  | string | Disambiguator for record types that have more than one record under them. It is used to reference the prefix. | 
|  aws\$1region  | string | Region | 
|  storage\$1class  | string | Storage Class | 
|  bucket\$1name  | string | Bucket name | 
|  versioning\$1enabled\$1bucket\$1count  | long | Number of buckets with versioning enabled for the current referenced item | 
|  mfa\$1delete\$1enabled\$1bucket\$1count  | long | Number of buckets with MFA delete enabled for the current referenced item | 
|  sse\$1kms\$1enabled\$1bucket\$1count  | long | Number of buckets with KMS enabled for the current referenced item | 
|  object\$1ownership\$1bucket\$1owner\$1enforced\$1bucket\$1count  | long | Number of buckets with Object Ownership bucket owner enforced for the current referenced item | 
|  object\$1ownership\$1bucket\$1owner\$1preferred\$1bucket\$1count  | long | Number of buckets with Object Ownership bucket owner preferred for the current referenced item | 
|  object\$1ownership\$1object\$1writer\$1bucket\$1count  | long | Number of buckets with Object Ownership object writer for the current referenced item | 
|  transfer\$1acceleration\$1enabled\$1bucket\$1count  | long | Number of buckets with transfer acceleration enabled for the current referenced item | 
|  event\$1notification\$1enabled\$1bucket\$1count  | long | Number of buckets with event notification enabled for the current referenced item | 
|  transition\$1lifecycle\$1rule\$1count  | long | Number of transition lifecycle rules for the current referenced item | 
|  expiration\$1lifecycle\$1rule\$1count  | long | Number of expiration lifecycle rules for the current referenced item | 
|  non\$1current\$1version\$1transition\$1lifecycle\$1rule\$1count  | long | Number of noncurrent version transition lifecycle rules for the current referenced item | 
|  non\$1current\$1version\$1expiration\$1lifecycle\$1rule\$1count  | long | Number of noncurrent version expiration lifecycle rules for the current referenced item | 
|  abort\$1incomplete\$1multipart\$1upload\$1lifecycle\$1rule\$1count  | long | Number of abort incomplete multipart upload lifecycle rules for the current referenced item | 
|  expired\$1object\$1delete\$1marker\$1lifecycle\$1rule\$1count  | long | Number of expire object delete marker lifecycle rules for the current referenced item | 
|  same\$1region\$1replication\$1rule\$1count  | long | Number of Same-Region Replication rule count for the current referenced item | 
|  cross\$1region\$1replication\$1rule\$1count  | long | Number of Cross-Region Replication rule count for the current referenced item | 
|  same\$1account\$1replication\$1rule\$1count  | long | Number of Same-account replication rule count for the current referenced item | 
|  cross\$1account\$1replication\$1rule\$1count  | long | Number of Cross-account replication rule count for the current referenced item | 
|  invalid\$1destination\$1replication\$1rule\$1count  | long | Number of buckets with Invalid destination replication for the current referenced item | 

### Activity metrics table schema
<a name="storage_lens_s3_tables_activity_metrics"></a>


| Name | Type | Description | 
| --- | --- | --- | 
|  version\$1number  | string | Version identifier of the schema of the table | 
|  configuration\$1id  | string | S3 Storage Lens configuration name | 
|  report\$1time  | timestamptz | Date the S3 Storage Lens report refers to | 
|  aws\$1account\$1id  | string | Account id the entry refers to | 
|  aws\$1region  | string | Region | 
|  storage\$1class  | string | Storage Class | 
|  record\$1type  | string | Type of record, related to what is the level of aggregation of data. Values: ACCOUNT, BUCKET, PREFIX.  | 
|  record\$1value  | string | Disambiguator for record types that have more than one record under them. It is used to reference the prefix | 
|  bucket\$1name  | string | Bucket name | 
|  all\$1request\$1count  | long | Number of \$1all\$1 requests for the current referenced item | 
|  all\$1sse\$1kms\$1encrypted\$1request\$1count  | long | Number of KMS encrypted requests for the current referenced item | 
|  all\$1unsupported\$1sig\$1request\$1count  | long | Number of unsupported sig requests for the current referenced item | 
|  all\$1unsupported\$1tls\$1request\$1count  | long | Number of unsupported TLS requests for the current referenced item | 
|  bad\$1request\$1error\$1400\$1count  | long | Number of 400 bad request errors for the current referenced item | 
|  delete\$1request\$1count  | long | Number of delete requests for the current referenced item | 
|  downloaded\$1bytes  | decimal(0,0) | Number of downloaded bytes for the current referenced item | 
|  error\$14xx\$1count  | long | Number of 4xx errors for the current referenced item | 
|  error\$15xx\$1count  | long | Number of 5xx errors for the current referenced item | 
|  forbidden\$1error\$1403\$1count  | long | Number of 403 forbidden errors for the current referenced item | 
|  get\$1request\$1count  | long | Number of get requests for the current referenced item | 
|  head\$1request\$1count  | long | Number of head requests for the current referenced item | 
|  internal\$1server\$1error\$1500\$1count  | long | Number of 500 internal server errors for the current referenced item | 
|  list\$1request\$1count  | long | Number of list requests for the current referenced item | 
|  not\$1found\$1error\$1404\$1count  | long | Number of 404 not found errors for the current referenced item | 
|  ok\$1status\$1200\$1count  | long | Number of 200 OK requests for the current referenced item | 
|  partial\$1content\$1status\$1206\$1count  | long | Number of 206 partial content requests for the current referenced item | 
|  post\$1request\$1count  | long | Number of post requests for the current referenced item | 
|  put\$1request\$1count  | long | Number of put requests for the current referenced item | 
|  select\$1request\$1count  | long | Number of select requests for the current referenced item | 
|  select\$1returned\$1bytes  | decimal(0,0) | Number of bytes returned by select requests for the current referenced item | 
|  select\$1scanned\$1bytes  | decimal(0,0) | Number of bytes scanned by select requests for the current referenced item | 
|  service\$1unavailable\$1error\$1503\$1count  | long | Number of 503 service unavailable errors for the current referenced item | 
|  uploaded\$1bytes  | decimal(0,0) | Number of uploaded bytes for the current referenced item | 
|  average\$1first\$1byte\$1latency  | long | Average per-request time between when an S3 bucket receives a complete request and when it starts returning the response, measured over the past 24 hours | 
|  average\$1total\$1request\$1latency  | long | Average elapsed per-request time between the first byte received and the last byte sent to an S3 bucket, measured over the past 24 hours | 
|  read\$10kb\$1request\$1count  | long | Number of GetObject requests with data sizes of 0KB, including both range-based requests and whole object requests | 
|  read\$10kb\$1to\$1128kb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 0KB and up to 128KB, including both range-based requests and whole object requests | 
|  read\$1128kb\$1to\$1256kb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 128KB and up to 256KB, including both range-based requests and whole object requests | 
|  read\$1256kb\$1to\$1512kb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 256KB and up to 512KB, including both range-based requests and whole object requests | 
|  read\$1512kb\$1to\$11mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 512KB and up to 1MB, including both range-based requests and whole object requests | 
|  read\$11mb\$1to\$12mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 1MB and up to 2MB, including both range-based requests and whole object requests | 
|  read\$12mb\$1to\$14mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 2MB and up to 4MB, including both range-based requests and whole object requests | 
|  read\$14mb\$1to\$18mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 4MB and up to 8MB, including both range-based requests and whole object requests | 
|  read\$18mb\$1to\$116mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 8MB and up to 16MB, including both range-based requests and whole object requests | 
|  read\$116mb\$1to\$132mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 16MB and up to 32MB, including both range-based requests and whole object requests | 
|  read\$132mb\$1to\$164mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 32MB and up to 64MB, including both range-based requests and whole object requests | 
|  read\$164mb\$1to\$1128mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 64MB and up to 128MB, including both range-based requests and whole object requests | 
|  read\$1128mb\$1to\$1256mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 128MB and up to 256MB, including both range-based requests and whole object requests | 
|  read\$1256mb\$1to\$1512mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 256MB and up to 512MB, including both range-based requests and whole object requests | 
|  read\$1512mb\$1to\$11gb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 512MB and up to 1GB, including both range-based requests and whole object requests | 
|  read\$11gb\$1to\$12gb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 1GB and up to 2GB, including both range-based requests and whole object requests | 
|  read\$12gb\$1to\$14gb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 2GB and up to 4GB, including both range-based requests and whole object requests | 
|  read\$1larger\$1than\$14gb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 4GB, including both range-based requests and whole object requests | 
|  write\$10kb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes of 0KB | 
|  write\$10kb\$1to\$1128kb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 0KB and up to 128KB | 
|  write\$1128kb\$1to\$1256kb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 128KB and up to 256KB | 
|  write\$1256kb\$1to\$1512kb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 256KB and up to 512KB | 
|  write\$1512kb\$1to\$11mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 512KB and up to 1MB | 
|  write\$11mb\$1to\$12mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 1MB and up to 2MB | 
|  write\$12mb\$1to\$14mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 2MB and up to 4MB | 
|  write\$14mb\$1to\$18mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 4MB and up to 8MB | 
|  write\$18mb\$1to\$116mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 8MB and up to 16MB | 
|  write\$116mb\$1to\$132mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 16MB and up to 32MB | 
|  write\$132mb\$1to\$164mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 32MB and up to 64MB | 
|  write\$164mb\$1to\$1128mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 64MB and up to 128MB | 
|  write\$1128mb\$1to\$1256mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 128MB and up to 256MB | 
|  write\$1256mb\$1to\$1512mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 256MB and up to 512MB | 
|  write\$1512mb\$1to\$11gb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 512MB and up to 1GB | 
|  write\$11gb\$1to\$12gb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 1GB and up to 2GB | 
|  write\$12gb\$1to\$14gb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 2GB and up to 4GB | 
|  write\$1larger\$1than\$14gb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 4GB | 
|  concurrent\$1put\$1503\$1error\$1count  | long | Number of 503 errors that are generated due to concurrent writes to the same object | 
|  cross\$1region\$1request\$1count  | long | Number of requests that originate from a client in different Region than bucket's home Region | 
|  cross\$1region\$1transferred\$1bytes  | decimal(0,0) | Number of bytes that are transferred from calls in different Region than bucket's home Region | 
|  cross\$1region\$1without\$1replication\$1request\$1count  | long | Number of requests that originate from a client in different Region than bucket's home Region, excluding cross-region replication requests | 
|  cross\$1region\$1without\$1replication\$1transferred\$1bytes  | decimal(0,0) | Number of bytes that are transferred from calls in different Region than bucket's home Region, excluding cross-region replication bytes | 
|  inregion\$1request\$1count  | long | Number of requests that originate from a client in same Region as bucket's home Region | 
|  inregion\$1transferred\$1bytes  | decimal(0,0) | Number of bytes that are transferred from calls from same Region as bucket's home Region | 
|  unique\$1objects\$1accessed\$1daily\$1count  | long | Number of objects that were accessed at least once in last 24 hrs | 

# Monitor S3 Storage Lens metrics in CloudWatch
<a name="storage_lens_view_metrics_cloudwatch"></a>

You can publish S3 Storage Lens metrics to Amazon CloudWatch to create a unified view of your operational health in [CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html). You can also use CloudWatch features, such as alarms and triggered actions, metric math, and anomaly detection, to monitor and take action on S3 Storage Lens metrics. In addition, CloudWatch API operations enable applications, including third-party providers, to access your S3 Storage Lens metrics. For more information about CloudWatch features, see the [Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html).

You can enable the CloudWatch publishing option for new or existing dashboard configurations by using the Amazon S3 console, Amazon S3 REST API, AWS CLI, and AWS SDKs. Dashboards that are upgraded to S3 Storage Lens advanced metrics and recommendations can use the CloudWatch publishing option. For S3 Storage Lens advanced metrics and recommendations pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). No additional CloudWatch metrics publishing charges apply; however, other CloudWatch charges, such as dashboards, alarms, and API calls, do apply. For more information, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/). 

S3 Storage Lens metrics are published to CloudWatch in the account that owns the S3 Storage Lens configuration. After you enable the CloudWatch publishing option within advanced metrics, you can access account-level and bucket-level metrics by configuration ID, account, bucket (for bucket-level metrics only), Region, and storage class in CloudWatch. Prefix-level metrics are not available in CloudWatch.

**Note**  
S3 Storage Lens metrics are daily metrics and are published to CloudWatch once per day. When you query S3 Storage Lens metrics in CloudWatch, the period for the query must be 1 day (86400 seconds). After your daily S3 Storage Lens metrics appear in your S3 Storage Lens dashboard in the Amazon S3 console, it can take a few hours for these same metrics to appear in CloudWatch. When you enable the CloudWatch publishing option for S3 Storage Lens metrics for the first time, it can take up to 24 hours for your metrics to publish to CloudWatch. 

After you enable the CloudWatch publishing option, you can use the following CloudWatch features to monitor and analyze your S3 Storage LensStorage Lens data:
+ [Dashboards](storage-lens-cloudwatch-monitoring-cloudwatch.md#storage-lens-cloudwatch-monitoring-cloudwatch-dashboards) – Use CloudWatch dashboards to create customized S3 Storage Lens dashboards. Share your CloudWatch dashboard with people who don't have direct access to your AWS account, across teams, with stakeholders, and with people external to your organizations. 
+ [Alarms and triggered actions](storage-lens-cloudwatch-monitoring-cloudwatch.md#storage-lens-cloudwatch-monitoring-cloudwatch-alarms) – Configure alarms that watch metrics and take action when a threshold is breached. For example, you can configure an alarm that sends an Amazon SNS notification when the **Incomplete Multipart Upload Bytes** metric exceeds 1 GB for three consecutive days. 
+ [Anomaly detection](storage-lens-cloudwatch-monitoring-cloudwatch.md#storage-lens-cloudwatch-monitoring-cloudwatch-alarms) – Enable anomaly detection to continuously analyze metrics, determine normal baselines, and surface anomalies. You can create an anomaly detection alarm based on the expected value of a metric. For example, you can monitor anomalies for the **Object Lock Enabled Bytes** metric to detect unauthorized removal of Object Lock settings.
+ [Metric math](storage-lens-cloudwatch-monitoring-cloudwatch.md#storage-lens-cloudwatch-monitoring-cloudwatch-metric-math) – You can also use metric math to query multiple S3 Storage Lens metrics and use math expressions to create new time series based on these metrics. For example, you can create a new metric to get the average object size by dividing `StorageBytes` by `ObjectCount`.

For more information about the CloudWatch publishing option for S3 Storage Lens metrics, see the following topics.

**Topics**
+ [S3 Storage Lens metrics and dimensions](storage-lens-cloudwatch-metrics-dimensions.md)
+ [Enabling CloudWatch publishing for S3 Storage Lens](storage-lens-cloudwatch-enable-publish-option.md)
+ [Working with S3 Storage Lens metrics in CloudWatch](storage-lens-cloudwatch-monitoring-cloudwatch.md)

# S3 Storage Lens metrics and dimensions
<a name="storage-lens-cloudwatch-metrics-dimensions"></a>

To send S3 Storage Lens metrics to CloudWatch, you must enable the CloudWatch publishing option within S3 Storage Lens advanced metrics. After advanced metrics are enabled, you can use [CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html) to monitor S3 Storage Lens metrics alongside other application metrics and create a unified view of your operational health. You can use dimensions to filter your S3 Storage Lens metrics in CloudWatch by organization, account, bucket, storage class, Region, and metrics configuration ID.

S3 Storage Lens metrics are published to CloudWatch in the account that owns the S3 Storage Lens configuration. After you enable the CloudWatch publishing option within advanced metrics, you can access account-level and bucket-level metrics by configuration ID, account, bucket (for bucket-level metrics only), Region, and storage class in CloudWatch. Prefix-level metrics are not available in CloudWatch.

**Note**  
S3 Storage Lens metrics are daily metrics and are published to CloudWatch once per day. When you query S3 Storage Lens metrics in CloudWatch, the period for the query must be 1 day (86400 seconds). After your daily S3 Storage Lens metrics appear in your S3 Storage Lens dashboard in the Amazon S3 console, it can take a few hours for these same metrics to appear in CloudWatch. When you enable the CloudWatch publishing option for S3 Storage Lens metrics for the first time, it can take up to 24 hours for your metrics to publish to CloudWatch. 

For more information about S3 Storage Lens metrics and dimensions in CloudWatch, see the following topics.

**Topics**
+ [Metrics](#storage-lens-cloudwatch-metrics)
+ [Dimensions](#storage-lens-cloudwatch-dimensions)

## Metrics
<a name="storage-lens-cloudwatch-metrics"></a>

S3 Storage Lens metrics are available as metrics within CloudWatch. S3 Storage Lens metrics are published to the `AWS/S3/Storage-Lens` namespace. This namespace is only for S3 Storage Lens metrics. Amazon S3 bucket, request, and replication metrics are published to the `AWS/S3` namespace. 

S3 Storage Lens metrics are published to CloudWatch in the account that owns the S3 Storage Lens configuration. After you enable the CloudWatch publishing option within advanced metrics, you can access account-level and bucket-level metrics by configuration ID, account, bucket (for bucket-level metrics only), Region, and storage class in CloudWatch. Prefix-level metrics are not available in CloudWatch.

In S3 Storage Lens, metrics are aggregated and stored only in the designated home Region. S3 Storage Lens metrics are also published to CloudWatch in the home Region that you specify in the S3 Storage Lens configuration. 

For a complete list of S3 Storage Lens metrics, including a list of those metrics available in CloudWatch, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).

**Note**  
The valid statistic for S3 Storage Lens metrics in CloudWatch is Average. For more information about statistics in CloudWatch, see [ CloudWatch statistics definitions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Statistics-definitions.html) in the *Amazon CloudWatch User Guide*.

### Granularity of S3 Storage Lens metrics in CloudWatch
<a name="storage-lens-cloudwatch-metrics-granularity"></a>

S3 Storage Lens offers metrics at organization, account, bucket, and prefix granularity. S3 Storage Lens publishes organization, account, and bucket-level S3 Storage Lens metrics to CloudWatch. Prefix-level S3 Storage Lens metrics are not available in CloudWatch.

For more information about the granularity of S3 Storage Lens metrics available in CloudWatch, see the following list:
+ **Organization** – Metrics aggregated across the member accounts in your organization. S3 Storage Lens publishes metrics for member accounts to CloudWatch in the management account. 
  + **Organization and account** – Metrics for the member accounts in your organization. 
  + **Organization and bucket** – Metrics for Amazon S3 buckets in the member accounts of your organization.
+ **Account** (Non-organization level) – Metrics aggregated across the buckets in your account. 
+ **Bucket** (Non-organization level) – Metrics for a specific bucket. In CloudWatch, S3 Storage Lens publishes these metrics to the AWS account that created the S3 Storage Lens configuration. S3 Storage Lens publishes these metrics only for non-organization configurations.

## Dimensions
<a name="storage-lens-cloudwatch-dimensions"></a>

When S3 Storage Lens sends data to CloudWatch, dimensions are attached to each metric. Dimensions are categories that describe the characteristics of metrics. You can use dimensions to filter the results that CloudWatch returns. 

For example, all S3 Storage Lens metrics in CloudWatch have the `configuration_id` dimension. You can use this dimension to differentiate between metrics associated with a specific S3 Storage Lens configuration. The `organization_id` identifies organization-level metrics. For more information about dimensions in CloudWatch, see [Dimensions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#Dimension) in the *CloudWatch User Guide*. 

Different dimensions are available for S3 Storage Lens metrics depending on the granularity of the metrics. For example, you can use the `organization_id` dimension to filter organization-level metrics by the AWS Organizations ID. However, you can't use this dimension for bucket and account-level metrics. For more information, see [Filtering metrics using dimensions](storage-lens-cloudwatch-monitoring-cloudwatch.md#storage-lens-cloudwatch-monitoring-cloudwatch-dimensions).

To see which dimensions are available for your S3 Storage Lens configuration, see the following table.


|  **Dimension**  |  **Description**  |  **Bucket**  | **Account** |  **Organization**  |  **Organization and bucket**  |  **Organization and account**  | 
| --- | --- | --- | --- | --- | --- | --- | 
| configuration\$1id |  The dashboard name for the S3 Storage Lens configuration reported in the metrics  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | 
| metrics\$1version |  The version of the S3 Storage Lens metrics. The metrics version has a fixed value of `1.0`.  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | 
| organization\$1id |  The AWS Organizations ID for the metrics  | ![\[No\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-no.png)  | ![\[No\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-no.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | 
| aws\$1account\$1number | The AWS account that's associated with the metrics | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[No\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-no.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | 
| aws\$1region | The AWS Region for the metrics | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | 
| bucket\$1name |  The name of the S3 bucket that's reported in the metrics  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[No\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-no.png)  | ![\[No\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-no.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[No\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-no.png)  | 
| storage\$1class |  The storage class for the bucket that's reported in the metrics  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png)  | 
| record\$1type |  The granularity of the metrics: ORGANIZATION, ACCOUNT, BUCKET  | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png) BUCKET | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png) ACCOUNT | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png) BUCKET | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png) ACCOUNT | ![\[Yes\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/icon-yes.png) ORGANIZATION | 

# Enabling CloudWatch publishing for S3 Storage Lens
<a name="storage-lens-cloudwatch-enable-publish-option"></a>

You can publish S3 Storage Lens metrics to Amazon CloudWatch to create a unified view of your operational health in [CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html). You can also use CloudWatch features, such as alarms and triggered actions, metric math, and anomaly detection, to monitor and take action on S3 Storage Lens metrics. In addition, CloudWatch API operations enable applications, including third-party providers, to access your S3 Storage Lens metrics. For more information about CloudWatch features, see the [Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html).

S3 Storage Lens metrics are published to CloudWatch in the account that owns the S3 Storage Lens configuration. After you enable the CloudWatch publishing option within advanced metrics, you can access account-level and bucket-level metrics by configuration ID, account, bucket (for bucket-level metrics only), Region, and storage class in CloudWatch. Prefix-level metrics are not available in CloudWatch.

You can enable CloudWatch support for new or existing dashboard configurations by using the S3 console, Amazon S3 REST APIs, AWS CLI, and AWS SDKs. The CloudWatch publishing option is available for dashboards that are upgraded to S3 Storage Lens advanced metrics and recommendations. For S3 Storage Lens advanced metrics and recommendations pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). No additional CloudWatch metrics publishing charges apply; however, other CloudWatch charges, such as dashboards, alarms, and API calls, do apply.

To enable the CloudWatch publishing option for S3 Storage Lens metrics, see the following topics.

**Note**  
S3 Storage Lens metrics are daily metrics and are published to CloudWatch once per day. When you query S3 Storage Lens metrics in CloudWatch, the period for the query must be 1 day (86400 seconds). After your daily S3 Storage Lens metrics appear in your S3 Storage Lens dashboard in the Amazon S3 console, it can take a few hours for these same metrics to appear in CloudWatch. When you enable the CloudWatch publishing option for S3 Storage Lens metrics for the first time, it can take up to 24 hours for your metrics to publish to CloudWatch.   
Currently, S3 Storage Lens metrics cannot be consumed through CloudWatch streams. 

## Using the S3 console
<a name="storage-lens-cloudwatch-enable-publish-console"></a>

When you update an S3 Storage Lens dashboard, you can't change the dashboard name or home Region. You also can't change the scope of the default dashboard, which is scoped to your entire account's storage.

**To update an S3 Storage Lens dashboard to enable CloudWatch publishing**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **S3 Storage Lens**, **Dashboards**.

1. Choose the dashboard that you want to edit, and then choose **Edit.**

1. Under **Metrics selection**, choose **Advanced metrics and recommendations**.

   Advanced metrics and recommendations are available for an additional charge. Advanced metrics and recommendations include a 15-month period for data queries, usage metrics aggregated at the prefix level, activity metrics aggregated by bucket, the CloudWatch publishing option, and contextual recommendations that help you optimize storage costs and apply data-protection best practices. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

1. Under **Select Advanced metrics and recommendations features**, select **CloudWatch publishing**.
**Important**  
If your configuration enables prefix aggregation for usage metrics, prefix-level metrics will not be published to CloudWatch. Only bucket, account, and organization-level S3 Storage Lens metrics are published to CloudWatch.

1. Choose **Save changes**.

**To create a new S3 Storage Lens dashboard that enables CloudWatch support**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**. 

1. Choose **Create dashboard**. 

1. Under **General**, define the following configuration options:

   1. For **Dashboard name**, enter your dashboard name.

      Dashboard names must be fewer than 65 characters and must not contain special characters or spaces. You can't change the dashboard name after you create your dashboard.

   1. Choose the **Home Region ** for your dashboard.

      Metrics for all Regions included in this dashboard scope are stored centrally in the designated home Region. In CloudWatch, S3 Storage Lens metrics are also available in the home Region. You can't change the home Region after you create your dashboard.

1. (Optional) To add tags, choose **Add tag** and enter the tag **Key** and **Value**.
**Note**  
You can add up to 50 tags to your dashboard configuration.

1. Define the scope for your configuration:

   1. If you're creating an organization-level configuration, choose the accounts to include in the configuration: **Include all accounts in your configuration** or **Limit the scope to your signed-in account**.
**Note**  
When you create an organization-level configuration that includes all accounts, you can include or exclude only Regions, not buckets.

   1. Choose the Regions and buckets that you want S3 Storage Lens to include in the dashboard configuration by doing the following:
      + To include all Regions, choose **Include Regions and buckets**.
      + To include specific Regions, clear **Include all Regions**. Under **Choose Regions to include**, choose the Regions that you want S3 Storage Lens to include in the dashboard.
      + To include specific buckets, clear **Include all buckets**. Under **Choose buckets to include**, choose the buckets that you want S3 Storage Lens to include in the dashboard. 
**Note**  
You can choose up to 50 buckets.

1. For **Metrics selection**, choose **Advanced metrics and recommendations**.

   For more information about advanced metrics and recommendations pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). 

1. Under **Advanced metrics and recommendations features**, select the options that you want to enable:
   + **Advanced metrics** 
   + **CloudWatch publishing**
**Important**  
If you enable prefix aggregation for your S3 Storage Lens configuration, prefix-level metrics will not be published to CloudWatch. Only bucket, account, and organization-level S3 Storage Lens metrics are published to CloudWatch.
   + **Prefix aggregation**
**Note**  
For more information about advanced metrics and recommendations features, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).

1. If you enabled **Advanced metrics**, select the **Advanced metrics categories** that you want to display in your S3 Storage Lens dashboard:
   + **Activity metrics**
   + **Detailed status code metrics**
   + **Advanced cost optimization metrics**
   + **Advanced data protection metrics**

   For more information about metrics categories, see [Metrics categories](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_types). For a complete list of metrics, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).

1. (Optional) Configure your metrics export.

   For more information about how to configure a metrics export, see step [Using the S3 console](storage_lens_creating_dashboard.md#storage_lens_console_creating).

1. Choose **Create dashboard**.

## Using the AWS CLI
<a name="storage-lens-cloudwatch-enable-publish-cli"></a>

The following AWS CLI example enables the CloudWatch publishing option by using a S3 Storage Lens organization-level advanced metrics and recommendations configuration. To use this example, replace the `user input placeholders` with your own information.

```
aws s3control put-storage-lens-configuration --account-id=555555555555 --config-id=your-configuration-id --region=us-east-1 --storage-lens-configuration=file://./config.json

config.json
{
  "Id": "SampleS3StorageLensConfiguration",
  "AwsOrg": {
    "Arn": "arn:aws:organizations::123456789012:organization/o-abcdefgh"
  },
  "AccountLevel": {
    "ActivityMetrics": {
      "IsEnabled":true
    },
    "AdvancedCostOptimizationMetrics": {
      "IsEnabled":true
    },
    "AdvancedDataProtectionMetrics": {
      "IsEnabled":true
    },
    "DetailedStatusCodesMetrics": {
      "IsEnabled":true
    },
    "BucketLevel": {
      "ActivityMetrics": {
        "IsEnabled":true
      },
      "AdvancedCostOptimizationMetrics": {
        "IsEnabled":true
      },
      "DetailedStatusCodesMetrics": {
        "IsEnabled":true
      },
      "PrefixLevel":{
        "StorageMetrics":{
          "IsEnabled":true,
          "SelectionCriteria":{
            "MaxDepth":5,
            "MinStorageBytesPercentage":1.25,
            "Delimiter":"/"
          }
        }
      }
    }
  },
  "Exclude": {
    "Regions": [
      "eu-west-1"
    ],
    "Buckets": [
      "arn:aws:s3:::amzn-s3-demo-source-bucket "
    ]
  },
  "IsEnabled": true,
  "DataExport": {
    "S3BucketDestination": {
      "OutputSchemaVersion": "V_1",
      "Format": "CSV",
      "AccountId": "111122223333",
      "Arn": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
      "Prefix": "prefix-for-your-export-destination",
      "Encryption": {
        "SSES3": {}
      }
    },
    "CloudWatchMetrics": {
      "IsEnabled": true
    }
  }
}
```

## Using the AWS SDK for Java
<a name="storage-lens-cloudwatch-enable-publish-sdk"></a>

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3control.AWSS3Control;
import com.amazonaws.services.s3control.AWSS3ControlClient;
import com.amazonaws.services.s3control.model.AccountLevel;
import com.amazonaws.services.s3control.model.ActivityMetrics;
import com.amazonaws.services.s3control.model.BucketLevel;
import com.amazonaws.services.s3control.model.CloudWatchMetrics;
import com.amazonaws.services.s3control.model.Format;
import com.amazonaws.services.s3control.model.Include;
import com.amazonaws.services.s3control.model.OutputSchemaVersion;
import com.amazonaws.services.s3control.model.PrefixLevel;
import com.amazonaws.services.s3control.model.PrefixLevelStorageMetrics;
import com.amazonaws.services.s3control.model.PutStorageLensConfigurationRequest;
import com.amazonaws.services.s3control.model.S3BucketDestination;
import com.amazonaws.services.s3control.model.SSES3;
import com.amazonaws.services.s3control.model.SelectionCriteria;
import com.amazonaws.services.s3control.model.StorageLensAwsOrg;
import com.amazonaws.services.s3control.model.StorageLensConfiguration;
import com.amazonaws.services.s3control.model.StorageLensDataExport;
import com.amazonaws.services.s3control.model.StorageLensDataExportEncryption;
import com.amazonaws.services.s3control.model.StorageLensTag;

import java.util.Arrays;
import java.util.List;

import static com.amazonaws.regions.Regions.US_WEST_2;

public class CreateAndUpdateDashboard {

    public static void main(String[] args) {
        String configurationId = "ConfigurationId";
        String sourceAccountId = "Source Account ID";
        String exportAccountId = "Destination Account ID";
        String exportBucketArn = "arn:aws:s3:::amzn-s3-demo-destination-bucket"; // The destination bucket for your metrics export must be in the same Region as your S3 Storage Lens configuration.
        String awsOrgARN = "arn:aws:organizations::123456789012:organization/o-abcdefgh";
        Format exportFormat = Format.CSV;

        try {
            SelectionCriteria selectionCriteria = new SelectionCriteria()
                    .withDelimiter("/")
                    .withMaxDepth(5)
                    .withMinStorageBytesPercentage(10.0);
            PrefixLevelStorageMetrics prefixStorageMetrics = new PrefixLevelStorageMetrics()
                    .withIsEnabled(true)
                    .withSelectionCriteria(selectionCriteria);
            BucketLevel bucketLevel = new BucketLevel()
                    .withActivityMetrics(new ActivityMetrics().withIsEnabled(true))
                    .withAdvancedCostOptimizationMetrics(new AdvancedCostOptimizationMetrics().withIsEnabled(true))
                    .withAdvancedDataProtectionMetrics(new AdvancedDataProtectionMetrics().withIsEnabled(true))
                    .withDetailedStatusCodesMetrics(new DetailedStatusCodesMetrics().withIsEnabled(true))
                    .withPrefixLevel(new PrefixLevel().withStorageMetrics(prefixStorageMetrics));
            AccountLevel accountLevel = new AccountLevel()
                    .withActivityMetrics(new ActivityMetrics().withIsEnabled(true))
                    .withAdvancedCostOptimizationMetrics(new AdvancedCostOptimizationMetrics().withIsEnabled(true))
                    .withAdvancedDataProtectionMetrics(new AdvancedDataProtectionMetrics().withIsEnabled(true))
                    .withDetailedStatusCodesMetrics(new DetailedStatusCodesMetrics().withIsEnabled(true))
                    .withBucketLevel(bucketLevel);

            Include include = new Include()
                    .withBuckets(Arrays.asList("arn:aws:s3:::amzn-s3-demo-bucket"))
                    .withRegions(Arrays.asList("us-west-2"));

            StorageLensDataExportEncryption exportEncryption = new StorageLensDataExportEncryption()
                    .withSSES3(new SSES3());
            S3BucketDestination s3BucketDestination = new S3BucketDestination()
                    .withAccountId(exportAccountId)
                    .withArn(exportBucketArn)
                    .withEncryption(exportEncryption)
                    .withFormat(exportFormat)
                    .withOutputSchemaVersion(OutputSchemaVersion.V_1)
                    .withPrefix("Prefix");
            CloudWatchMetrics cloudWatchMetrics = new CloudWatchMetrics()
                    .withIsEnabled(true);
            StorageLensDataExport dataExport = new StorageLensDataExport()
                    .withCloudWatchMetrics(cloudWatchMetrics)
                    .withS3BucketDestination(s3BucketDestination);

            StorageLensAwsOrg awsOrg = new StorageLensAwsOrg()
                    .withArn(awsOrgARN);

            StorageLensConfiguration configuration = new StorageLensConfiguration()
                    .withId(configurationId)
                    .withAccountLevel(accountLevel)
                    .withInclude(include)
                    .withDataExport(dataExport)
                    .withAwsOrg(awsOrg)
                    .withIsEnabled(true);

            List<StorageLensTag> tags = Arrays.asList(
                    new StorageLensTag().withKey("key-1").withValue("value-1"),
                    new StorageLensTag().withKey("key-2").withValue("value-2")
            );

            AWSS3Control s3ControlClient = AWSS3ControlClient.builder()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(US_WEST_2)
                    .build();

            s3ControlClient.putStorageLensConfiguration(new PutStorageLensConfigurationRequest()
                    .withAccountId(sourceAccountId)
                    .withConfigId(configurationId)
                    .withStorageLensConfiguration(configuration)
                    .withTags(tags)
            );
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

## Using the REST API
<a name="storage-lens-cloudwatch-enable-publish-api"></a>

To enable the CloudWatch publishing option by using the Amazon S3 REST API, you can use [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutStorageLensConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutStorageLensConfiguration.html).

**Next steps**  
After you enable the CloudWatch publishing option, you can access your S3 Storage Lens metrics in CloudWatch. You also can leverage CloudWatch features to monitor and analyze your S3 Storage Lens data in CloudWatch. For more information, see the following topics:
+ [S3 Storage Lens metrics and dimensions](storage-lens-cloudwatch-metrics-dimensions.md)
+ [Working with S3 Storage Lens metrics in CloudWatch](storage-lens-cloudwatch-monitoring-cloudwatch.md)

# Working with S3 Storage Lens metrics in CloudWatch
<a name="storage-lens-cloudwatch-monitoring-cloudwatch"></a>

You can publish S3 Storage Lens metrics to Amazon CloudWatch to create a unified view of your operational health in [CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html). You can also use CloudWatch features, such as alarms and triggered actions, metric math, and anomaly detection, to monitor and take action on S3 Storage Lens metrics. In addition, CloudWatch API operations enable applications, including third-party providers, to access your S3 Storage Lens metrics. For more information about CloudWatch features, see the [Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html).

You can enable the CloudWatch publishing option for new or existing dashboard configurations by using the Amazon S3 console, Amazon S3 REST APIs, AWS CLI, and AWS SDKs. The CloudWatch publishing option is available for dashboards that are upgraded to S3 Storage Lens advanced metrics and recommendations. For S3 Storage Lens advanced metrics and recommendations pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). No additional CloudWatch metrics publishing charges apply; however, other CloudWatch charges, such as dashboards, alarms, and API calls, do apply. For more information, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/). 

S3 Storage Lens metrics are published to CloudWatch in the account that owns the S3 Storage Lens configuration. After you enable the CloudWatch publishing option within advanced metrics, you can access account-level and bucket-level metrics by configuration ID, account, bucket (for bucket-level metrics only), Region, and storage class in CloudWatch. Prefix-level metrics are not available in CloudWatch.

**Note**  
S3 Storage Lens metrics are daily metrics and are published to CloudWatch once per day. When you query S3 Storage Lens metrics in CloudWatch, the period for the query must be 1 day (86400 seconds). After your daily S3 Storage Lens metrics appear in your S3 Storage Lens dashboard in the Amazon S3 console, it can take a few hours for these same metrics to appear in CloudWatch. When you enable the CloudWatch publishing option for S3 Storage Lens metrics for the first time, it can take up to 24 hours for your metrics to publish to CloudWatch.   
Currently, S3 Storage Lens metrics cannot be consumed through CloudWatch streams. 

For more information about working with S3 Storage Lens metrics in CloudWatch, see the following topics.

**Topics**
+ [Working with CloudWatch dashboards](#storage-lens-cloudwatch-monitoring-cloudwatch-dashboards)
+ [Setting alarms, triggering actions, and using anomaly detection](#storage-lens-cloudwatch-monitoring-cloudwatch-alarms)
+ [Filtering metrics using dimensions](#storage-lens-cloudwatch-monitoring-cloudwatch-dimensions)
+ [Calculating new metrics with metric math](#storage-lens-cloudwatch-monitoring-cloudwatch-metric-math)
+ [Using search expressions in graphs](#storage-lens-cloudwatch-monitoring-cloudwatch-search-expressions)

## Working with CloudWatch dashboards
<a name="storage-lens-cloudwatch-monitoring-cloudwatch-dashboards"></a>

You can use CloudWatch dashboards to monitor S3 Storage Lens metrics alongside other application metrics and create a unified view of your operational health. Dashboards are customizable home pages in the CloudWatch console that you can use to monitor your resources in a single view. 

CloudWatch has broad permissions control that doesn't support limiting access to a specific set of metrics or dimensions. Users in your account or organization who have access to CloudWatch will have access to metrics for all S3 Storage Lens configurations where the CloudWatch support option is enabled. You can't manage permissions for specific dashboards as you can in S3 Storage Lens. For more information about CloudWatch permissions, see [Managing access permissions to your CloudWatch resources](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/iam-access-control-overview-cw.html) in the *Amazon CloudWatch User Guide*.

For more information about using CloudWatch dashboards and configuring permissions, see [Using Amazon CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html) and [Sharing CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-dashboard-sharing.html) in the *Amazon CloudWatch User Guide*.

## Setting alarms, triggering actions, and using anomaly detection
<a name="storage-lens-cloudwatch-monitoring-cloudwatch-alarms"></a>

You can configure CloudWatch alarms that watch S3 Storage Lens metrics in CloudWatch and take action when a threshold is breached. For example, you can configure an alarm that sends an Amazon SNS notification when the **Incomplete Multipart Upload Bytes** metric exceeds 1 GB for three consecutive days.

You can also enable anomaly detection to continuously analyze your S3 Storage Lens metrics, determine normal baselines, and surface anomalies. You can create an anomaly detection alarm based on a metric's expected value. For example, you can monitor anomalies for the **Object Lock Enabled Bytes** metric to detect unauthorized removal of Object Lock settings.

For more information and examples, see [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html) and [Creating an alarm from a metric on a graph](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create_alarm_metric_graph.html) in the *Amazon CloudWatch User Guide*.

## Filtering metrics using dimensions
<a name="storage-lens-cloudwatch-monitoring-cloudwatch-dimensions"></a>

You can use dimensions to filter S3 Storage Lens metrics in the CloudWatch console. For example, you can filter by `configuration_id`, `aws_account_number`, `aws_region`, `bucket_name`, and more.

S3 Storage Lens supports multiple dashboard configurations per account. This means that different configurations can include the same bucket. When these metrics are published to CloudWatch, the bucket will have duplicate metrics within CloudWatch. To view metrics only for a specific S3 Storage Lens configuration in CloudWatch, you can use the `configuration_id` dimension. When you filter by `configuration_id`, you see only the metrics that are associated with the configuration that you identify.

For more information about filtering by configuration ID, see [Searching for available metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/finding_metrics_with_cloudwatch.html) in the *Amazon CloudWatch User Guide*.

## Calculating new metrics with metric math
<a name="storage-lens-cloudwatch-monitoring-cloudwatch-metric-math"></a>

You can use metric math to query multiple S3 Storage Lens metrics and use math expressions to create new time series based on these metrics. For example, you can create a new metric for unencrypted objects by subtracting Encrypted Objects from Object Count. You can also create a metric to get the average object size by dividing `StorageBytes` by `ObjectCount` or the number bytes accessed on one day by dividing `BytesDownloaded` by `StorageBytes`.

For more information, see [Using metric math](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-metric-math.html) in the *Amazon CloudWatch User Guide*.

## Using search expressions in graphs
<a name="storage-lens-cloudwatch-monitoring-cloudwatch-search-expressions"></a>

With S3 Storage Lens metrics, you can create a search expression. For example, you can create a search expression for all metrics that are named **IncompleteMultipartUploadStorageBytes** and add `SUM` to the expression. With this search expression, you can see your total incomplete multipart upload bytes across all dimensions of your storage in a single metric.

This example shows the syntax that you would use to create a search expression for all metrics named **IncompleteMultipartUploadStorageBytes**.

```
SUM(SEARCH('{AWS/S3/Storage-Lens,aws_account_number,aws_region,configuration_id,metrics_version,record_type,storage_class} MetricName="IncompleteMultipartUploadStorageBytes"', 'Average',86400))
```

For more information about this syntax, see [CloudWatch search expression syntax](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/search-expression-syntax.html) in the *Amazon CloudWatch User Guide*. To create a CloudWatch graph with a search expression, see [Creating a CloudWatch graph with a search expression](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create-search-expression.html)in the *Amazon CloudWatch User Guide*.

# Amazon S3 Storage Lens metrics use cases
<a name="storage-lens-use-cases"></a>

You can use your Amazon S3 Storage Lens dashboard to visualize insights and trends, flag outliers, and receive recommendations. S3 Storage Lens metrics are organized into categories that align with key use cases. You can use these metrics to do the following: 
+ Identify cost-optimization opportunities
+ Apply data-protection best practices
+ Apply access-management best practices
+ Improve the performance of application workloads

For example, with cost-optimization metrics, you can identify opportunities to reduce your Amazon S3 storage costs. You can identify buckets with multipart uploads that are more than 7-days old or buckets that are accumulating noncurrent versions.

Similarly, you can use data-protection metrics to identify buckets that aren't following data-protection best practices within your organization. For example, you can identify buckets that don’t use AWS Key Management Service keys (SSE-KMS) for default encryption or don't have S3 Versioning enabled. 

With S3 Storage Lens access-management metrics, you can identify bucket settings for S3 Object Ownership so that you can migrate access control list (ACL) permissions to bucket policies and disable ACLs.

If you have [S3 Storage Lens advanced metrics](storage_lens_basics_metrics_recommendations.md) enabled, you can use detailed status-code metrics to get counts for successful or failed requests that you can use to troubleshoot access or performance issues. 

With advanced metrics, you can also access additional cost-optimization and data-protection metrics that you can use to identify opportunities to further reduce your overall S3 storage costs and better align with best practices for protecting your data. For example, advanced cost-optimization metrics include lifecycle rule counts that you can use to identify buckets that don't have lifecycle rules to expire incomplete multipart uploads that are more than 7 days old. Advanced data-protection metrics include replication rule counts.

For more information about metrics categories, see [Metrics categories](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_types). For a complete list of S3 Storage Lens metrics, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).

**Topics**
+ [Using Amazon S3 Storage Lens to optimize your storage costs](storage-lens-optimize-storage.md)
+ [Using S3 Storage Lens to protect your data](storage-lens-data-protection.md)
+ [Using S3 Storage Lens to audit Object Ownership settings](storage-lens-access-management.md)
+ [Using S3 Storage Lens metrics to improve performance](storage-lens-detailed-status-code.md)

# Using Amazon S3 Storage Lens to optimize your storage costs
<a name="storage-lens-optimize-storage"></a>

You can use S3 Storage Lens cost-optimization metrics to reduce the overall cost of your S3 storage. Cost-optimization metrics can help you confirm that you've configured Amazon S3 cost effectively and according to best practices. For example, you can identify the following cost-optimization opportunities: 
+ Buckets with incomplete multipart uploads older than 7 days
+ Buckets that are accumulating numerous noncurrent versions
+ Buckets that don't have lifecycle rules to abort incomplete multipart uploads
+ Buckets that don't have lifecycle rules to expire noncurrent versions objects
+ Buckets that don't have lifecycle rules to transition objects to a different storage class

You can then use this data to add additional lifecycle rules to your buckets. 

The following examples show how you can use cost- optimization metrics in your S3 Storage Lens dashboard to optimize your storage costs.

**Topics**
+ [Identify your largest S3 buckets](#identify-largest-s3-buckets)
+ [Uncover cold Amazon S3 buckets](#uncover-cold-buckets)
+ [Locate incomplete multipart uploads](#locate-incomplete-mpu)
+ [Reduce the number of noncurrent versions retained](#reduce-noncurrent-versions-retained)
+ [Identify buckets that don't have lifecycle rules and review lifecycle rule counts](#identify-missing-lifecycle-rules)

## Identify your largest S3 buckets
<a name="identify-largest-s3-buckets"></a>

You pay for storing objects in S3 buckets. The rate that you're charged depends on your objects' sizes, how long you store the objects, and their storage classes. With S3 Storage Lens, you get a centralized view of all the buckets in your account. To see all the buckets in all of your organization's accounts, you can configure an AWS Organizations-level S3 Storage Lens dashboard. From this dashboard view, you can identify your largest buckets.

### Step 1: Identify your largest buckets
<a name="optimize-storage-identify-largest-buckets"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to view.

   When the dashboard opens, you can see the latest date that S3 Storage Lens has collected metrics for. Your dashboard always loads to the latest date that has metrics available.

1. To see a ranking of your largest buckets by the **Total storage** metric for a selected date range, scroll down to the **Top N overview for *date*** section.

   You can toggle the sort order to show the smallest buckets. You can also adjust the **Metric** selection to rank your buckets by any of the available metrics. The **Top N overview for *date*** section also shows the percentage change from the prior day or week and a spark-line to visualize the trend. This trend is a 14-day trend for free metrics and a 30-day trend for advanced metrics and recommendations.
**Note**  
With S3 Storage Lens advanced metrics and recommendations, metrics are available for queries for 15 months. For more information, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).

1. For more detailed insights about your buckets, scroll up to the top of the page, and then choose the **Bucket** tab. 

   On the **Bucket** tab, you can see details such as the recent growth rate, the average object size, the largest prefixes, and the number of objects.

### Step 2: Navigate to your buckets and investigate
<a name="optimize-storage-investigate"></a>

After you've identified your largest S3 buckets, you can navigate to each bucket within the S3 console to view the objects in the bucket, understand its associated workload, and identify its internal owners. You can contact the bucket owners to find out whether the growth is expected or whether the growth needs further monitoring and control.

## Uncover cold Amazon S3 buckets
<a name="uncover-cold-buckets"></a>

If you have [S3 Storage Lens advanced metrics](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection) enabled, you can use [activity metrics](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_types) to understand how cold your S3 buckets are. A "cold" bucket is one whose storage is no longer accessed (or very rarely accessed). This lack of activity typically indicates that the bucket's objects aren't frequently accessed.

Activity metrics, such as **GET Requests** and **Download Bytes**, indicate how often your buckets are accessed each day. To understand the consistency of the access pattern and to spot buckets that are no longer being accessed at all, you can trend this data over several months. The **Retrieval rate** metric, which is computed as **Download bytes / Total storage**, indicates the proportion of storage in a bucket that is accessed daily.

**Note**  
Download bytes are duplicated in cases where the same object is downloaded multiple times during the day.

**Prerequisite**  
To see activity metrics in your S3 Storage Lens dashboard, you must enable S3 Storage Lens **Advanced metrics and recommendations** and then select **Activity metrics**. For more information, see [Using the S3 console](storage_lens_editing.md#storage_lens_console_editing).

### Step 1: Identify active buckets
<a name="storage-lens-identify-active-buckets"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to view.

1. Choose the **Bucket** tab, and then scroll down to the **Bubble analysis by buckets for *date*** section.

   In the **Bubble analysis by buckets for *date*** section, you can plot your buckets on multiple dimensions by using any three metrics to represent the **X-axis**, **Y-axis**, and **Size** of the bubble. 

1. To find buckets that have gone cold, for **X-axis**, **Y-axis**, and **Size**, choose the **Total storage**, **% retrieval rate**, and **Average object size** metrics.

1. In the **Bubble analysis by buckets for *date*** section, locate any buckets with retrieval rates of zero (or near zero) and a larger relative storage size, and choose the bubble that represents the bucket. 

   A box will appear with choices for more granular insights. Do one of the following:

   1. To update the **Bucket** tab to display metrics only for the selected bucket, choose **Drill down**, and then choose **Apply**. 

   1. To aggregate your bucket-level data to by account, AWS Region, storage class, or bucket, choose **Analyze by** and then make a choice for **Dimension**. For example, to aggregate by storage class, choose **Storage class** for **Dimension**.

   To find buckets that have gone cold, do a bubble analysis using the **Total storage**, **% retrieval rate**, and **Average object size** metrics. Look for any buckets with retrieval rates of zero (or near zero) and a larger relative storage size. 

   The **Bucket** tab of your dashboard updates to display data for your selected aggregation or filter. If you aggregated by storage class or another dimension, that new tab opens in your dashboard (for example, the **Storage class** tab). 

### Step 2: Investigate cold buckets
<a name="storage-lens-investigate-buckets"></a>

From here, you can identify the owners of cold buckets in your account or organization and find out if that storage is still needed. You can then optimize costs by configuring [lifecycle expiration configurations](object-lifecycle-mgmt.md) for these buckets or archiving the data in one of the [Amazon Glacier storage classes](https://docs.aws.amazon.com/amazonglacier/latest/dev/introduction.html). 

To avoid the problem of cold buckets going forward, you can [automatically transition your data by using S3 Lifecycle configurations](lifecycle-configuration-examples.md) for your buckets, or you can enable [auto-archiving with S3 Intelligent-Tiering](archived-objects.md).

You can also use step 1 to identify hot buckets. Then, you can ensure that these buckets use the correct [S3 storage class](storage-class-intro.md) to ensure that they serve their requests most effectively in terms of performance and cost.

## Locate incomplete multipart uploads
<a name="locate-incomplete-mpu"></a>

You can use multipart uploads to upload very large objects (up to 50 TB) as a set of parts for improved throughput and quicker recovery from network issues. In cases where the multipart upload process doesn't finish, the incomplete parts remain in the bucket (in an unusable state). These incomplete parts incur storage costs until the upload process is finished, or until the incomplete parts are removed. For more information, see [Uploading and copying objects using multipart upload in Amazon S3](mpuoverview.md).

With S3 Storage Lens, you can identify the number of incomplete multipart upload bytes in your account or across your entire organization, including incomplete multipart uploads that are more than 7 days old. For a complete list of incomplete multipart upload metrics, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md). 

As a best practice, we recommend configuring lifecycle rules to expire incomplete multipart uploads that are older than a specific number of days. When you create your lifecycle rule to expire incomplete multipart uploads, we recommend 7 days as a good starting point. 

### Step 1: Review overall trends for incomplete multipart uploads
<a name="locate-incomplete-mpu-step1"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to view.

1. In the **Snapshot for *date*** section, under **Metrics categories**, choose **Cost optimization**.

   The **Snapshot for *date*** section updates to display **Cost optimization** metrics, which include **Incomplete multipart upload bytes greater than 7 days old**. 

   In any chart in your S3 Storage Lens dashboard, you can see metrics for incomplete multipart uploads. You can use these metrics to further assess the impact of incomplete multipart upload bytes on your storage, including their contribution to overall growth trends. You can also drill down to deeper levels of aggregation, using the **Account**, **AWS Region**, **Bucket**, or **Storage class** tabs for a deeper analysis of your data. For an example, see [Uncover cold Amazon S3 buckets](#uncover-cold-buckets).

### Step 2: Identify buckets that have the most incomplete multipart upload bytes but don't have lifecycle rules to abort incomplete multipart uploads
<a name="locate-incomplete-mpu-step2"></a>

**Prerequisite**  
To see the **Abort incomplete multipart upload lifecycle rule count** metric in your S3 Storage Lens dashboard, you must enable S3 Storage Lens **Advanced metrics and recommendations**, and then select **Advanced cost optimization metrics**. For more information, see [Using the S3 console](storage_lens_editing.md#storage_lens_console_editing).

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to view.

1. To identify specific buckets that are accumulating incomplete multipart uploads greater than 7 days old, go to the **Top N overview for *date*** section. 

   By default, the **Top N overview for *date*** section displays metrics for the top 3 buckets. You can increase or decrease the number of buckets in the **Top N** field. The **Top N overview for *date*** section also shows the percentage change from the prior day or week and a spark-line to visualize the trend. (This trend is a 14-day trend for free metrics and a 30-day trend for advanced metrics and recommendations.) 
**Note**  
With S3 Storage Lens advanced metrics and recommendations, metrics are available for queries for 15 months. For more information, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).

1. For **Metric**, choose **Incomplete multipart upload bytes greater than 7 days old** in the **Cost optimization** category.

   Under **Top *number* buckets**, you can see the buckets with the most incomplete multipart upload storage bytes that are greater than 7 days old.

1. To view more detailed bucket-level metrics for incomplete multipart uploads, scroll to the top of the page, and then choose the **Bucket** tab.

1. Scroll down to the **Buckets** section. For **Metrics categories**, select **Cost optimization**. Then clear **Summary**.

   The **Buckets** list updates to display all the available **Cost optimization** metrics for the buckets shown. 

1. To filter the **Buckets** list to display only specific cost-optimization metrics, choose the preferences icon (![\[A screenshot that shows the preferences icon in the S3 Storage Lens dashboard.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/preferences.png)).

1. Clear the toggles for all cost-optimization metrics until only **Incomplete multipart upload bytes greater than 7 days old** and **Abort incomplete multipart upload lifecycle rule count** remain selected. 

1. (Optional) Under **Page size**, choose the number of buckets to display in the list.

1. Choose **Confirm**.

   The **Buckets** list updates to display bucket-level metrics for incomplete multipart uploads and lifecycle rule counts. You can use this data to identify buckets that have the most incomplete multipart upload bytes that are greater than 7 days old and are missing lifecycle rules to abort incomplete multipart uploads. Then, you can navigate to these buckets in the S3 console and add lifecycle rules to delete abandoned incomplete multipart uploads.

### Step 3: Add a lifecycle rule to delete incomplete multipart uploads after 7 days
<a name="locate-incomplete-mpu-step3"></a>

To automatically manage incomplete multipart uploads, you can use the S3 console to create a lifecycle configuration to expire incomplete multipart upload bytes from a bucket after a specified number of days. For more information, see [Configuring a bucket lifecycle configuration to delete incomplete multipart uploads](mpu-abort-incomplete-mpu-lifecycle-config.md).

## Reduce the number of noncurrent versions retained
<a name="reduce-noncurrent-versions-retained"></a>

When enabled, S3 Versioning retains multiple distinct copies of the same object that you can use to quickly recover data if an object is accidentally deleted or overwritten. If you've enabled S3 Versioning without configuring lifecycle rules to transition or expire noncurrent versions, a large number of previous noncurrent versions can accumulate, which can have storage-cost implications. For more information, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).

### Step 1: Identify buckets with the most noncurrent object versions
<a name="reduce-noncurrent-versions-retained-step1"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to view.

1. In the **Snapshot for *date*** section, under **Metric categories**, choose **Cost optimization**.

   The **Snapshot for *date*** section updates to display **Cost optimization** metrics, which include the metric for **% noncurrent version bytes**. The **% noncurrent version bytes** metric represents the proportion of your total storage bytes that is attributed to noncurrent versions, within the dashboard scope and for the selected date.
**Note**  
If your **% noncurrent version bytes** is greater than 10 percent of your storage at the account level, you might be storing too many object versions.

1. To identify specific buckets that are accumulating a large number of noncurrent versions:

   1. Scroll down to the **Top N overview for *date*** section. For **Top N**, enter the number of buckets that you would like to see data for. 

   1. For **Metric**, choose **% noncurrent version bytes**.

      Under **Top *number* buckets**, you can see the buckets (for the number that you specified) with the highest **% noncurrent version bytes**. The **Top N overview for *date*** section also shows the percentage change from the prior day or week and a spark-line to visualize the trend. This trend is a 14-day trend for free metrics and a 30-day trend for advanced metrics and recommendations. 
**Note**  
With S3 Storage Lens advanced metrics and recommendations, metrics are available for queries for 15 months. For more information, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).

   1. To view more detailed bucket-level metrics for noncurrent object versions, scroll to the top of the page, and then choose the **Bucket** tab.

      In any chart or visualization in your S3 Storage Lens dashboard, you can drill down to deeper levels of aggregation, using the **Account**, **AWS Region**, **Storage class**, or **Bucket** tabs. For an example, see [Uncover cold Amazon S3 buckets](#uncover-cold-buckets).

   1. In the **Buckets** section, for **Metric categories**, select **Cost optimization**. Then, clear **Summary**. 

      You can now see the **% noncurrent version bytes** metric, along with other metrics related to noncurrent versions.

### Step 2: Identify buckets that are missing transition and expiration lifecycle rules for managing noncurrent versions
<a name="reduce-noncurrent-versions-retained-step2"></a>

**Prerequisite**  
To see the **Noncurrent version transition lifecycle rule count** and **Noncurrent version expiration lifecycle rule count** metrics in your S3 Storage Lens dashboard, you must enable S3 Storage Lens **Advanced metrics and recommendations**, and then select **Advanced cost optimization metrics**. For more information, see [Using the S3 console](storage_lens_editing.md#storage_lens_console_editing).

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to view.

1. In your Storage Lens dashboard, choose the **Bucket ** tab.

1. Scroll down to the **Buckets** section. For **Metrics categories**, select **Cost optimization**. Then clear **Summary**.

   The **Buckets** list updates to display all the available **Cost optimization** metrics for the buckets shown. 

1. To filter the **Buckets** list to display only specific cost-optimization metrics, choose the preferences icon (![\[A screenshot that shows the preferences icon in the S3 Storage Lens dashboard.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/preferences.png)).

1. Clear the toggles for all cost-optimization metrics until only the following remain selected:
   + **% noncurrent version bytes**
   + **Noncurrent version transition lifecycle rule count**
   + **Noncurrent version expiration lifecycle rule count**

1. (Optional) Under **Page size**, choose the number of buckets to display in the list.

1. Choose **Confirm**.

   The **Buckets** list updates to display metrics for noncurrent version bytes and noncurrent version lifecycle rule counts. You can use this data to identify buckets that have a high percentage of noncurrent version bytes but are missing transition and expiration lifecycle rules. Then, you can navigate to these buckets in the S3 console and add lifecycle rules to these buckets.

### Step 3: Add lifecycle rules to transition or expire noncurrent object versions
<a name="reduce-noncurrent-versions-retained-step3"></a>

After you've determined which buckets require further investigation, you can navigate to the buckets within the S3 console and add a lifecycle rule to expire noncurrent versions after a specified number of days. Alternatively, to reduce costs while still retaining noncurrent versions, you can configure a lifecycle rule to transition noncurrent versions to one of the Amazon Glacier storage classes. For more information, see [Specifying a lifecycle rule for a versioning-enabled bucket](lifecycle-configuration-examples.md#lifecycle-config-conceptual-ex6). 

## Identify buckets that don't have lifecycle rules and review lifecycle rule counts
<a name="identify-missing-lifecycle-rules"></a>

S3 Storage Lens provides S3 Lifecycle rule count metrics that you can use to identify buckets that are missing lifecycle rules. To find buckets that don't have lifecycle rules, you can use the **Total buckets without lifecycle rules** metric. A bucket with no S3 Lifecycle configuration might have storage that you no longer need or can migrate to a lower-cost storage class. You can also use lifecycle rule count metrics to identify buckets that are missing specific types of lifecycle rules, such as expiration or transition rules.

**Prerequisite**  
To see lifecycle rule count metrics and the **Total buckets without lifecycle rules** metric in your S3 Storage Lens dashboard, you must enable S3 Storage Lens **Advanced metrics and recommendations**, and then select **Advanced cost optimization metrics**. For more information, see [Using the S3 console](storage_lens_editing.md#storage_lens_console_editing).

### Step 1: Identify buckets without lifecycle rules
<a name="identify-missing-lifecycle-rules-step1"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to view.

1. To identify specific buckets without lifecycle rules, scroll down to the **Top N overview for *date*** section.

   By default, the **Top N overview for *date*** section displays metrics for the top 3 buckets. In the **Top N** field, you can increase the number of buckets. The **Top N overview for *date*** section also shows the percentage change from the prior day or week and a spark-line to visualize the trend. This trend is a 14-day trend for free metrics and a 30-day trend for advanced metrics and recommendations. 
**Note**  
With S3 Storage Lens advanced metrics and recommendations, metrics are available for queries for 15 months. For more information, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).

1. For **Metric**, choose **Total buckets without lifecycle rules** from the **Cost optimization** category.

1. Review the following data for **Total buckets without lifecycle rules**:
   + **Top *number* accounts** ‐ See which accounts that have the most buckets without lifecycle rules.
   + **Top *number* Regions** ‐ View a breakdown of buckets without lifecycle rules by Region.
   + **Top *number* buckets** ‐ See which buckets don't have lifecycle rules. 

   In any chart or visualization in your S3 Storage Lens dashboard, you can drill down to deeper levels of aggregation, using the **Account**, **AWS Region**, **Storage class**, or **Bucket** tabs. For an example, see [Uncover cold Amazon S3 buckets](#uncover-cold-buckets).

   After you identify which buckets don't have lifecycle rules, you can also review specific lifecycle rule counts for your buckets. 

### Step 2: Review lifecycle rule counts for your buckets
<a name="identify-missing-lifecycle-rules-step2"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the dashboard that you want to view.

1. In your S3 Storage Lens dashboard, choose the **Bucket** tab.

1. Scroll down to the **Buckets** section. Under **Metrics categories**, select **Cost optimization**. Then clear **Summary**.

   The **Buckets** list updates to display all the available **Cost optimization** metrics for the buckets shown. 

1. To filter the **Buckets** list to display only specific cost-optimization metrics, choose the preferences icon (![\[A screenshot that shows the preferences icon in the S3 Storage Lens dashboard.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/preferences.png)).

1. Clear the toggles for all cost-optimization metrics until only the following remain selected:
   + **Transition lifecycle rule count**
   + **Expiration lifecycle rule count**
   + **Noncurrent version transition lifecycle rule count**
   + **Noncurrent version expiration lifecycle rule count**
   + **Abort incomplete multipart upload lifecycle rule count**
   + **Total lifecycle rule count**

1. (Optional) Under **Page size**, choose the number of buckets to display in the list.

1. Choose **Confirm**.

   The **Buckets** list updates to display lifecycle rule count metrics for your buckets. You can use this data to identify buckets without lifecycle rules or buckets that are missing specific kinds of lifecycle rules, for example, expiration or transition rules. Then, you can navigate to these buckets in the S3 console and add lifecycle rules to these buckets.

### Step 3: Add lifecycle rules
<a name="identify-missing-lifecycle-rules-step3"></a>

After you've identified buckets with no lifecycle rules, you can add lifecycle rules. For more information, see [Setting an S3 Lifecycle configuration on a bucket](how-to-set-lifecycle-configuration-intro.md) and [Examples of S3 Lifecycle configurations](lifecycle-configuration-examples.md).

# Using S3 Storage Lens to protect your data
<a name="storage-lens-data-protection"></a>

You can use Amazon S3 Storage Lens data-protection metrics to identify buckets where data-protection best practices haven't been applied. You can use these metrics to take action and apply standard settings that align with best practices for protecting your data across the buckets in your account or organization. For example, you can use data-protection metrics to identify buckets that don't use AWS Key Management Service (AWS KMS) keys (SSE-KMS) for default encryption or requests that use AWS Signature Version 2 (SigV2). 

The following use cases provide strategies for using your S3 Storage Lens dashboard to identify outliers and apply data-protection best practices across your S3 buckets.

**Topics**
+ [Identify buckets that don't use server-side encryption with AWS KMS for default encryption (SSE-KMS)](#storage-lens-sse-kms)
+ [Identify buckets that have S3 Versioning enabled](#storage-lens-data-protection-versioning)
+ [Identify requests that use AWS Signature Version 2 (SigV2)](#storage-lens-data-protection-sigv)
+ [Count the total number of replication rules for each bucket](#storage-lens-data-protection-replication-rule)
+ [Identify percentage of Object Lock bytes](#storage-lens-data-protection-object-lock)

## Identify buckets that don't use server-side encryption with AWS KMS for default encryption (SSE-KMS)
<a name="storage-lens-sse-kms"></a>

With Amazon S3 default encryption, you can set the default encryption behavior for an S3 bucket. For more information, see [Setting default server-side encryption behavior for Amazon S3 buckets](bucket-encryption.md).

You can use the **SSE-KMS enabled bucket count** and **% SSE-KMS enabled buckets** metrics to identify buckets that use server-side encryption with AWS KMS keys (SSE-KMS) for default encryption. S3 Storage Lens also provides metrics for unencrypted bytes, unencrypted objects, encrypted bytes, and encrypted objects. For a complete list of metrics, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md). 

You can analyze SSE-KMS encryption metrics in the context of general encryption metrics to identify buckets that don't use SSE-KMS. If you want to use SSE-KMS for all the buckets in your account or organization, you can then update the default encryption settings for these buckets to use SSE-KMS. In addition to SSE-KMS, you can use server-side encryption with Amazon S3 managed keys (SSE-S3) or customer-provided keys (SSE-C). For more information, see [Protecting data with encryption](UsingEncryption.md). 

### Step 1: Identify which buckets are using SSE-KMS for default encryption
<a name="storage-lens-sse-kms-step1"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. In the **Trends and distributions** section, choose **% SSE-KMS enabled bucket count** for the primary metric and **% encrypted bytes** for the secondary metric.

   The **Trend for *date*** chart updates to display trends for SSE-KMS and encrypted bytes. 

1. To view more granular, bucket-level insights for SSE-KMS:

   1. Choose a point on the chart. A box will appear with choices for more granular insights.

   1. Choose the **Buckets** dimension. Then choose **Apply**.

1. In the **Distribution by buckets for *date*** chart, choose the **SSE-KMS enabled bucket count** metric. 

1. You can now see which buckets have SSE-KMS enabled and which do not.

### Step 2: Update bucket default encryption settings
<a name="storage-lens-sse-kms-step2"></a>

Now that you've determined which buckets use SSE-KMS in the context of your **% encrypted bytes**, you can identify buckets that don't use SSE-KMS. You can then optionally navigate to these buckets within the S3 console and update their default encryption settings to use SSE-KMS or SSE-S3. For more information, see [Configuring default encryption](default-bucket-encryption.md).

## Identify buckets that have S3 Versioning enabled
<a name="storage-lens-data-protection-versioning"></a>

When enabled, the S3 Versioning feature retains multiple versions of the same object that can be used to quickly recover data if an object is accidentally deleted or overwritten. You can use the **Versioning-enabled bucket count** metric to see which buckets use S3 Versioning. Then, you can take action in the S3 console to enable S3 Versioning for other buckets.

### Step 1: Identify buckets that have S3 Versioning enabled
<a name="storage-lens-data-protection-versioning-step1"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. In the **Trends and distributions** section, choose **Versioning-enabled bucket count** for the primary metric and **Buckets** for the secondary metric.

   The **Trend for *date*** chart updates to display trends for S3 Versioning enabled buckets. Right below the trends line, you can see the **Storage class distribution** and **Region distribution** subsections.

1. To view more granular insights for any of the buckets that you see in the **Trend for *date*** chart so that you can perform a deeper analysis, do the following:

   1. Choose a point on the chart. A box will appear with choices for more granular insights.

   1. Choose a dimension to apply to your data for deeper analysis: **Account**, **AWS Region**, **Storage class**, or **Bucket**. Then choose **Apply**.

1. In the **Bubble analysis by buckets for *date*** section, choose the **Versioning-enabled bucket count**, **Buckets**, and **Active buckets** metrics.

   The **Bubble analysis by buckets for *date*** section updates to display data for the metrics that you selected. You can use this data to see which buckets have S3 Versioning enabled in the context of your total bucket count. In the **Bubble analysis by buckets for *date*** section, you can plot your buckets on multiple dimensions by using any three metrics to represent the **X-axis**, **Y-axis**, and **Size** of the bubble. 

### Step 2: Enable S3 Versioning
<a name="storage-lens-data-protection-versioning-step2"></a>

After you've identified buckets that have S3 Versioning enabled, you can identify buckets that have never had S3 Versioning enabled or are versioning suspended. Then, you can optionally enable versioning for these buckets in the S3 console. For more information, see [Enabling versioning on buckets](manage-versioning-examples.md).

## Identify requests that use AWS Signature Version 2 (SigV2)
<a name="storage-lens-data-protection-sigv"></a>

You can use the **All unsupported signature requests** metric to identify requests that use AWS Signature Version 2 (SigV2). This data can help you identify specific applications that are using SigV2. You can then migrate these applications to AWS Signature Version 4 (SigV4). 

SigV4 is the recommended signing method for all new S3 applications. SigV4 provides improved security and is supported in all AWS Regions. For more information, see [Amazon S3 update - SigV2 deprecation period extended & modified](https://aws.amazon.com/blogs/aws/amazon-s3-update-sigv2-deprecation-period-extended-modified/).

**Prerequisite**  
To see **All unsupported signature requests** in your S3 Storage Lens dashboard, you must enable S3 Storage Lens **Advanced metrics and recommendations** and then select **Advanced data protection metrics**. For more information, see [Using the S3 console](storage_lens_editing.md#storage_lens_console_editing).

### Step 1: Examine SigV2 signing trends by AWS account, Region, and bucket
<a name="storage-lens-data-protection-sigv-step1"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. To identify specific buckets, accounts, and Regions with requests that use SigV2:

   1. Under **Top N overview for *date***, in **Top N**, enter the number of buckets that you would like to see data for. 

   1. For **Metric**, choose **All unsupported signature requests** from the **Data protection** category.

      The **Top N overview for *date*** updates to display data for SigV2 requests by account, AWS Region, and bucket. The **Top N overview for *date*** section also shows the percentage change from the prior day or week and a spark-line to visualize the trend. This trend is a 14-day trend for free metrics and a 30-day trend for advanced metrics and recommendations. 
**Note**  
With S3 Storage Lens advanced metrics and recommendations, metrics are available for queries for 15 months. For more information, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).

### Step 2: Identify buckets that are accessed by applications through SigV2 requests
<a name="storage-lens-data-protection-sigv-step2"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. In your Storage Lens dashboard, choose the **Bucket** tab.

1. Scroll down to the **Buckets** section. Under **Metrics categories**, choose **Data protection**. Then clear **Summary**.

   The **Buckets** list updates to display all the available **Data protection** metrics for the buckets shown. 

1. To filter the **Buckets** list to display only specific data-protection metrics, choose the preferences icon (![\[A screenshot that shows the preferences icon in the S3 Storage Lens dashboard.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/preferences.png)).

1. Clear the toggles for all data-protection metrics until only the following metrics remain selected:
   + **All unsupported signature requests**
   + **% all unsupported signature requests**

1. (Optional) Under **Page size**, choose the number of buckets to display in the list.

1. Choose **Confirm**.

   The **Buckets** list updates to display bucket-level metrics for SigV2 requests. You can use this data to identify specific buckets that have SigV2 requests. Then, you can use this information to migrate your applications to SigV4. For more information, see [Authenticating Requests (AWS Signature Version 4)](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html) in the *Amazon Simple Storage Service API Reference*.

## Count the total number of replication rules for each bucket
<a name="storage-lens-data-protection-replication-rule"></a>

S3 Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts. For more information, see [Replicating objects within and across Regions](replication.md). 

You can use S3 Storage Lens replication rule count metrics to get detailed per-bucket information about your buckets that are configured for replication. This information includes replication rules within and across buckets and Regions.

**Prerequisite**  
To see replication rule count metrics in your S3 Storage Lens dashboard, you must enable S3 Storage Lens **Advanced metrics and recommendations** and then select **Advanced data protection metrics**. For more information, see [Using the S3 console](storage_lens_editing.md#storage_lens_console_editing).

### Step 1: Count the total number of replication rules for each bucket
<a name="storage-lens-data-protection-replication-rule-step1"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. In your Storage Lens dashboard, choose the **Bucket** tab.

1. Scroll down to the **Buckets** section. Under **Metrics categories**, choose **Data protection**. Then clear **Summary**.

1. To filter the **Buckets** list to display only replication rule count metrics, choose the preferences icon (![\[A screenshot that shows the preferences icon in the S3 Storage Lens dashboard.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/preferences.png)).

1. Clear the toggles for all data-protection metrics until only the replication rule count metrics remain selected:
   + **Same-Region Replication rule count**
   + **Cross-Region Replication rule count**
   + **Same-account replication rule count**
   + **Cross-account replication rule count**
   + **Total replication rule count**

1. (Optional) Under **Page size**, choose the number of buckets to display in the list.

1. Choose **Confirm**.

### Step 2: Add replication rules
<a name="storage-lens-data-protection-replication-rule-step2"></a>

After you have a per-bucket replication rule count, you can optionally create additional replication rules. For more information, see [Examples for configuring live replication](replication-example-walkthroughs.md).

## Identify percentage of Object Lock bytes
<a name="storage-lens-data-protection-object-lock"></a>

With S3 Object Lock, you can store objects by using a *write-once-read-many (WORM)* model. You can use Object Lock to help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely. You can enable Object Lock only when you create a bucket and also enable S3 Versioning. However, you can edit the retention period for individual object versions or apply legal holds for buckets that have Object Lock enabled. For more information, see [Locking objects with Object Lock](object-lock.md).

You can use Object Lock metrics in S3 Storage Lens to see the **% Object Lock bytes** metric for your account or organization. You can use this information to identify buckets in your account or organization that aren't following your data-protection best practices. 

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. In the **Snapshot** section, under **Metrics categories**, choose **Data protection**.

   The **Snapshot** section updates to display data-protection metrics, including the **% Object Lock bytes** metric. You can see the overall percentage of Object Lock bytes for your account or organization. 

1. To see the **% Object Lock bytes** per bucket, scroll down to the **Top N overview** section.

   To get object-level data for Object Lock, you can also use the **Object Lock object count** and **% Object Lock objects** metrics. 

1. For **Metric**, choose **% Object Lock bytes** from the **Data protection** category.

   By default, the **Top N overview for *date*** section displays metrics for the top 3 buckets. In the **Top N** field, you can increase the number of buckets. The **Top N overview for *date*** section also shows the percentage change from the prior day or week and a spark-line to visualize the trend. This trend is a 14-day trend for free metrics and a 30-day trend for advanced metrics and recommendations. 
**Note**  
With S3 Storage Lens advanced metrics and recommendations, metrics are available for queries for 15 months. For more information, see [Metrics selection](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection).

1. Review the following data for **% Object Lock bytes**:
   + **Top *number* accounts** ‐ See which accounts have the highest and lowest **% Object Lock bytes**.
   + **Top *number* Regions** ‐ View a breakdown of **% Object Lock bytes** by Region.
   + **Top *number* buckets** ‐ See which buckets have the highest and lowest **% Object Lock bytes**.

# Using S3 Storage Lens to audit Object Ownership settings
<a name="storage-lens-access-management"></a>

Amazon S3 Object Ownership is an S3 bucket-level setting that you can use to disable access control lists (ACLs) and control ownership of the objects in your bucket. If you set Object Ownership to bucket owner enforced, you can disable [access control lists (ACLs)](acl-overview.md) and take ownership of every object in your bucket. This approach simplifies access management for data stored in Amazon S3. 

By default, when another AWS account uploads an object to your S3 bucket, that account (the object writer) owns the object, has access to it, and can grant other users access to it through ACLs. You can use Object Ownership to change this default behavior. 

A majority of modern use cases in Amazon S3 no longer require the use of ACLs. Therefore, we recommend that you disable ACLs, except in circumstances where you must control access for each object individually. By setting Object Ownership to bucket owner enforced, you can disable ACLs and rely on policies for access control. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

With S3 Storage Lens access-management metrics, you can identify buckets that don't have disabled ACLs. After identifying these buckets, you can migrate ACL permissions to policies and disable ACLs for these buckets.

**Topics**
+ [Step 1: Identify general trends for Object Ownership settings](#storage-lens-access-management-step1)
+ [Step 2: Identify bucket-level trends for Object Ownership settings](#storage-lens-access-management-step2)
+ [Step 3: Update your Object Ownership setting to bucket owner enforced to disable ACLs](#storage-lens-access-management-step3)

## Step 1: Identify general trends for Object Ownership settings
<a name="storage-lens-access-management-step1"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. In the **Snapshot for *date*** section, under **Metrics categories**, choose **Access management**.

   The **Snapshot for *date*** section updates to display the **% Object Ownership bucket owner enforced** metric. You can see the overall percentage of buckets in your account or organization that use the bucket owner enforced setting for Object Ownership to disable ACLs.

## Step 2: Identify bucket-level trends for Object Ownership settings
<a name="storage-lens-access-management-step2"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. To view more detailed bucket-level metrics, choose the **Bucket** tab.

1. In the **Distribution by buckets for *date*** section, choose the **% Object Ownership bucket owner enforced** metric.

   The chart updates to show a per-bucket breakdown for **% Object Ownership bucket owner enforced**. You can see which buckets use the bucket owner enforced setting for Object Ownership to disable ACLs.

1. To view the bucket owner enforced settings in context, scroll down to the **Buckets** section. For **Metrics categories**, select **Access management**. Then clear **Summary**.

   The **Buckets** list displays data for all three Object Ownership settings: bucket owner enforced, bucket owner preferred, and object writer.

1. To filter the **Buckets** list to display metrics only for a specific Object Ownership setting, choose the preferences icon (![\[A screenshot that shows the preferences icon in the S3 Storage Lens dashboard.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/preferences.png)).

1. Clear the metrics that you don't want to see.

1. (Optional) Under **Page size**, choose the number of buckets to display in the list.

1. Choose **Confirm**.

## Step 3: Update your Object Ownership setting to bucket owner enforced to disable ACLs
<a name="storage-lens-access-management-step3"></a>

After you've identified buckets that use the object writer and bucket owner preferred setting for Object Ownership, you can migrate your ACL permissions to bucket policies. When you've finished migrating your ACL permissions, you can then update your Object Ownership settings to bucket owner enforced in order to disable ACLs. For more information, see [Prerequisites for disabling ACLs](object-ownership-migrating-acls-prerequisites.md).

# Using S3 Storage Lens metrics to improve performance
<a name="storage-lens-detailed-status-code"></a>

If you have [S3 Storage Lens advanced metrics](storage_lens_basics_metrics_recommendations.md#storage_lens_basics_metrics_selection) enabled, you can use detailed status-code metrics to get counts for successful or failed requests. You can use this information to troubleshoot access or performance issues. Detailed status-code metrics show counts for HTTP status codes, such as 403 Forbidden and 503 Service Unavailable. You can examine overall trends for detailed status-code metrics across S3 buckets, accounts, and organizations. Then, you can drill down into bucket-level metrics to identify workloads that are currently accessing these buckets and causing errors. 

For example, you can look at the **403 Forbidden error count** metric to identify workloads that are accessing buckets without the correct permissions applied. After you've identified these workloads, you can do a deep dive outside of S3 Storage Lens to troubleshoot your 403 Forbidden errors.

This example shows you how to do a trend analysis for the 403 Forbidden error by using the **403 Forbidden error count** and the **% 403 Forbidden errors** metrics. You can use these metrics to identify workloads that are accessing buckets without the correct permissions applied. You can do a similar trend analysis for any of the other **Detailed status code metrics**. For more information, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).

**Prerequisite**  
To see **Detailed status code metrics** in your S3 Storage Lens dashboard, you must enable S3 Storage Lens **Advanced metrics and recommendations**, and then select **Detailed status code metrics**. For more information, see [Using the S3 console](storage_lens_editing.md#storage_lens_console_editing).

**Topics**
+ [Step 1: Do a trend analysis for an individual HTTP status code](#storage-lens-detailed-status-code-step1)
+ [Step 2: Analyze error counts by bucket](#storage-lens-detailed-status-code-step2)
+ [Step 3: Troubleshoot errors](#storage-lens-detailed-status-code-step3)

## Step 1: Do a trend analysis for an individual HTTP status code
<a name="storage-lens-detailed-status-code-step1"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. In the **Trends and distributions** section, for **Primary metric**, choose **403 Forbidden error count** from the **Detailed status codes** category. For **Secondary metric**, choose **% 403 Forbidden errors**.

1. Scroll down to the **Top N overview for *date*** section. For **Metrics**, choose **403 Forbidden error count** or **% 403 Forbidden errors** from the **Detailed status codes** category.

   The **Top N overview for *date*** section updates to display the top 403 Forbidden error counts by account, AWS Region, and bucket. 

## Step 2: Analyze error counts by bucket
<a name="storage-lens-detailed-status-code-step2"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens**, **Dashboards**.

1. In the **Dashboards** list, choose the name of the dashboard that you want to view.

1. In your Storage Lens dashboard, choose the **Bucket** tab.

1. Scroll down to the **Buckets** section. For **Metrics categories**, select **Detailed status code** metrics. Then clear **Summary**.

   The **Buckets** list updates to display all the available detailed status code metrics. You can use this information to see which buckets have a large proportion of certain HTTP status codes and which status codes are common across buckets. 

1. To filter the **Buckets** list to display only specific detailed status-code metrics, choose the preferences icon (![\[A screenshot that shows the preferences icon in the S3 Storage Lens dashboard.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/preferences.png)).

1. Clear the toggles for any detailed status-code metrics that you don't want to view in the **Buckets** list.

1. (Optional) Under **Page size**, choose the number of buckets to display in the list.

1. Choose **Confirm**.

   The **Buckets** list displays error count metrics for the number of buckets that you specified. You can use this information to identify specific buckets that are experiencing many errors and troubleshoot errors by bucket.

## Step 3: Troubleshoot errors
<a name="storage-lens-detailed-status-code-step3"></a>

 After you identify buckets with a high proportion of specific HTTP status codes, you can troubleshoot these errors. For more information, see the following:
+ [Why am I getting a 403 Forbidden error when I try to upload files in Amazon S3? ](https://aws.amazon.com/premiumsupport/knowledge-center/s3-403-forbidden-error/)
+ [Why am I getting a 403 Forbidden error when I try to modify a bucket policy in Amazon S3?](https://aws.amazon.com/premiumsupport/knowledge-center/s3-access-denied-bucket-policy/)
+ [How do I troubleshoot 403 Forbidden errors from my Amazon S3 bucket where all the resources are from the same AWS account?](https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403-resource-same-account/)
+ [How do I troubleshoot an HTTP 500 or 503 error from Amazon S3?](https://aws.amazon.com/premiumsupport/knowledge-center/http-5xx-errors-s3/)

# Working with S3 Storage Lens data in S3 Tables
<a name="storage-lens-s3-tables"></a>

Amazon S3 Storage Lens can export your storage analytics and insights to S3 Tables, enabling you to query your Storage Lens metrics using SQL with AWS analytics services like Amazon Athena, Amazon EMR, Amazon SageMaker Studio (SMStudio), and other AWS analytics tools. When you configure S3 Storage Lens to export to S3 Tables, your metrics are automatically stored in read-only Apache Iceberg tables in the AWS-managed `aws-s3` table bucket.

This integration provides structured data access for querying Storage Lens metrics using standard SQL, analytics integration with AWS analytics services, historical analysis capabilities, and cost optimization with no additional charges for exporting to AWS-managed S3 Tables.

**Topics**
+ [Exporting S3 Storage Lens metrics to S3 Tables](storage-lens-s3-tables-export.md)
+ [Table naming for S3 Storage Lens export to S3 Tables](storage-lens-s3-tables-naming.md)
+ [Understanding S3 Storage Lens table schemas](storage-lens-s3-tables-schemas.md)
+ [Permissions for S3 Storage Lens tables](storage-lens-s3-tables-permissions.md)
+ [Querying S3 Storage Lens data with analytics tools](storage-lens-s3-tables-querying.md)
+ [Using AI assistants with S3 Storage Lens tables](storage-lens-s3-tables-ai-tools.md)

# Exporting S3 Storage Lens metrics to S3 Tables
<a name="storage-lens-s3-tables-export"></a>

You can configure Amazon S3 Storage Lens to export your storage analytics and insights to S3 Tables. When you enable S3 Tables export, your metrics are automatically stored in read-only Apache Iceberg tables in the AWS-managed `aws-s3` table bucket, making them queryable using SQL with AWS analytics services like Amazon Athena, Amazon Redshift, and Amazon EMR.

**Note**  
There is no additional charge for exporting S3 Storage Lens metrics to AWS-managed S3 Tables. Standard charges apply for table storage, table management, and requests on the tables. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing). 

## Enable S3 Tables export using the console
<a name="storage-lens-s3-tables-export-console"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/). 

1. In the left navigation pane, choose **Storage Lens**, and then choose **Storage Lens Dashboards**. 

1. In **Storage Lens Dashboards** list, choose the dashboard that you want to edit.

1. Choose **Edit**. 

1. On the **Dashboard** page, navigate to **Metrics export and publishing** section.

1. To enable Table Export for **Default metrics report**, select **Table bucket** in the Bucket type.

1. To enable Table Export for **Expanded prefixes metrics report**, select **Table bucket** in the Bucket type.

1. Review dashboard config and click **Submit**. 

**Note**  
After you enable S3 Tables export, it can take up to 48 hours for the first data to be available in the tables.

**Note**  
There is no additional charge for exporting S3 Storage Lens metrics to AWS-managed S3 Tables. Standard charges apply for table storage, table management, requests on the tables, and monitoring. You can enable or disable export to S3 Tables by using the Amazon S3 console, Amazon S3 API, the AWS CLI, or AWS SDKs.

**Note**  
By default, records in your S3 tables don't expire. To help minimize storage costs for your tables, you can enable and configure record expiration for the tables. With this option, Amazon S3 automatically removes records from a table when the records expire. See: [Record expiration for tables.](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-record-expiration.html) 

## Enable S3 Tables export using the AWS CLI
<a name="storage-lens-s3-tables-export-cli"></a>

**Note**  
Before running the following commands, make sure that you have an up to date CLI version. See [Installing or updating to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). 

The following example enables S3 Tables export for an S3 Storage Lens configuration "Default metrics report" using the AWS CLI. To use this example, replace the *user input placeholders* with your own information.

```
aws s3control put-storage-lens-configuration --account-id=555555555555 --config-id=your-configuration-id --storage-lens-configuration '{
   "Id":"your-configuration-id",
   "AccountLevel":{
      "ActivityMetrics":{
        "IsEnabled":true
      },
      "BucketLevel":{
         "ActivityMetrics":{
            "IsEnabled":true
         }
      }
   },
   "DataExport":{
      "S3BucketDestination":{
         "OutputSchemaVersion":"V_1",
         "Format":"CSV",
         "AccountId":"555555555555",
         "Arn":"arn:aws:s3:::my-export-bucket",
         "Prefix":"storage-lens-exports/"
      },
      "StorageLensTableDestination":{
         "IsEnabled":true
      }
   },
   "IsEnabled":true
}'
```

## Enable S3 Tables export using the AWS SDKs
<a name="storage-lens-s3-tables-export-sdk"></a>

The following example enables S3 Tables export for an S3 Storage Lens configuration "Default metrics report" using the AWS SDK for Python (Boto3). To use this example, replace the *user input placeholders* with your own information.

```
import boto3

s3control = boto3.client('s3control')

response = s3control.put_storage_lens_configuration( AccountId='555555555555', ConfigId='your-configuration-id', StorageLensConfiguration={
        'Id': 'your-configuration-id',
        'AccountLevel': {
            'ActivityMetrics': {
              'IsEnabled': True
            },
            'BucketLevel': {
                'ActivityMetrics': {
                    'IsEnabled': True
                }
            }
        },
        'DataExport': {
            'S3BucketDestination': {
                'OutputSchemaVersion': 'V_1',
                'Format': 'CSV',
                'AccountId': '555555555555',
                'Arn': 'arn:aws:s3:::my-export-bucket',
                'Prefix': 'storage-lens-exports/'
            },
            'StorageLensTableDestination': {
                'IsEnabled': True
            }
        },
        'IsEnabled': True
    }
)
```

For more information about using the AWS SDKs, see [AWS SDKs and tools](https://aws.amazon.com/developer/tools/). 

## Next steps
<a name="storage-lens-s3-tables-export-next-steps"></a>

After enabling S3 Tables export, you can:
+ Learn about [Table naming for S3 Storage Lens export to S3 Tables](storage-lens-s3-tables-naming.md) 
+ Learn about [Understanding S3 Storage Lens table schemas](storage-lens-s3-tables-schemas.md) 

# Table naming for S3 Storage Lens export to S3 Tables
<a name="storage-lens-s3-tables-naming"></a>

When you export S3 Storage Lens metrics to S3 Tables, the tables are organized using Apache Iceberg catalog conventions with specific naming patterns to ensure compatibility and organization.

## Table location structure
<a name="storage-lens-s3-tables-naming-location"></a>

The complete table location follows this pattern:

```
s3tablescatalog/aws-s3/<namespace>/<table-name>
```

### Table bucket name
<a name="storage-lens-s3-tables-naming-bucket"></a>

 **Table Bucket:** `aws-s3` 

The S3 Storage Lens export uses the `aws-s3` table bucket, which is the designated bucket for AWS S3-related system tables.

### Catalog name
<a name="storage-lens-s3-tables-naming-catalog"></a>

 **Catalog:** `s3tablescatalog/aws-s3` 

S3 Storage Lens tables are stored in the S3 catalog because Storage Lens provides insights about three types of S3 resources:
+ Storage metrics
+ Bucket properties
+ API usage metrics

## Namespace naming convention
<a name="storage-lens-s3-tables-naming-namespace"></a>

Namespaces organize tables within the catalog. For S3 Storage Lens, the namespace is derived from your Storage Lens configuration ID.

### Standard namespace format
<a name="storage-lens-s3-tables-naming-namespace-standard"></a>

For Storage Lens configuration IDs without dots (`.`): 

```
lens_<configuration-id>_exp
```

 **Example:** If your configuration ID is `my-lens-config`, the namespace will be:

```
lens_my-lens-config_exp
```

### Namespace format with dot character or uppercase letters handling
<a name="storage-lens-s3-tables-naming-namespace-dots"></a>

Storage Lens configuration IDs can contain dots (`.`) or uppercase letters (`A-Z`), but S3 Tables namespaces only support lowercase letters, numbers, hyphens (`-`), and underscores (`_`). When your configuration ID contains dots, they are converted to hyphens, uppercase letters are converted to lower case letters, and a hash suffix is added for uniqueness:

```
lens_<configuration-id-with-dots-or-uppercase-replaced>_exp_<7-char-hash>
```

 **Example:** If your configuration ID is `my.LENS.config`, the namespace will be:

```
lens_my-lens-config_exp_a1b2c3d
```

Where `a1b2c3d` is the first 7 characters of the SHA-1 hash of the original configuration ID.

## Complete examples
<a name="storage-lens-s3-tables-naming-examples"></a>

For a Storage Lens configuration with ID `production-metrics`: 
+  **Table Bucket:** `aws-s3` 
+  **Catalog:** `s3tablescatalog/aws-s3` 
+  **Namespace:** `lens_production-metrics_exp` 
+  **Full Path:** `s3tablescatalog/aws-s3/lens_production-metrics_exp/<table-name>` 

For a Storage Lens configuration with ID `prod.us.east.metrics`: 
+  **Table Bucket:** `aws-s3` 
+  **Catalog:** `s3tablescatalog/aws-s3` 
+  **Namespace:** `lens_prod-us-east-metrics_exp_f8e9a1b` (with hash)
+  **Full Path:** `s3tablescatalog/aws-s3/lens_prod-us-east-metrics_exp_f8e9a1b/<table-name>` 

## Table types
<a name="storage-lens-s3-tables-naming-types"></a>

The following table shows the different types of tables created for S3 Storage Lens exports:


| Catalog | Namespace | S3 table name | Description | 
| --- | --- | --- | --- | 
| s3tablescatalog/aws-s3 | lens\$1<conf\$1name>\$1exp[\$1<hash>] | default\$1storage\$1metrics | This table contains the storage metrics for your Storage Lens configuration. | 
| s3tablescatalog/aws-s3 | lens\$1<conf\$1name>\$1exp[\$1<hash>] | default\$1activity\$1metrics | This table contains the activity metrics for your Storage Lens configuration. | 
| s3tablescatalog/aws-s3 | lens\$1<conf\$1name>\$1exp[\$1<hash>] | expanded\$1prefixes\$1storage\$1metrics | This table contains the storage metrics for all the prefixes in your Storage Lens configuration. | 
| s3tablescatalog/aws-s3 | lens\$1<conf\$1name>\$1exp[\$1<hash>] | expanded\$1prefixes\$1activity\$1metrics | This table contains the activity metrics for all the prefixes in your Storage Lens configuration. | 
| s3tablescatalog/aws-s3 | lens\$1<conf\$1name>\$1exp[\$1<hash>] | bucket\$1property\$1metrics | This table contains the bucket property metrics for all the buckets in your Storage Lens configuration. | 

## Next steps
<a name="storage-lens-s3-tables-naming-next-steps"></a>
+ Learn about [Understanding S3 Storage Lens table schemas](storage-lens-s3-tables-schemas.md) 
+ Learn about [Permissions for S3 Storage Lens tables](storage-lens-s3-tables-permissions.md) 

# Understanding S3 Storage Lens table schemas
<a name="storage-lens-s3-tables-schemas"></a>

When exporting S3 Storage Lens metrics to S3 tables, the data is organized into three separate table schemas: storage metrics, bucket property metrics, and activity metrics.

## Storage metrics table schema
<a name="storage-lens-s3-tables-schemas-storage"></a>


| Name | Type | Description | 
| --- | --- | --- | 
|  version\$1number  | string | Version identifier of the schema of the table | 
|  configuration\$1id  | string | S3 Storage Lens configuration name | 
|  report\$1time  | timestamptz | Date the S3 Storage Lens report refers to | 
|  aws\$1account\$1id  | string | Account id the entry refers to | 
|  aws\$1region  | string | Region | 
|  storage\$1class  | string | Storage Class | 
|  record\$1type  | string | Type of record, related to what is the level of aggregation of data. Values: ACCOUNT, BUCKET, PREFIX, STORAGE\$1LENS\$1GROUP\$1BUCKET, STORAGE\$1LENS\$1GROUP\$1ACCOUNT.  | 
|  record\$1value  | string | Disambiguator for record types that have more than one record under them. It is used to reference the prefix | 
|  bucket\$1name  | string | Bucket name | 
|  object\$1count  | long | Number of objects stored for the current referenced item | 
|  storage\$1bytes  | DECIMAL(38,0) | Number of bytes stored for the current referenced item | 
|  bucket\$1key\$1sse\$1kms\$1object\$1count  | long | Number of objects encrypted with a customer managed key stored for the current referenced item | 
|  bucket\$1key\$1sse\$1kms\$1storage\$1bytes  | DECIMAL(38,0) | Number of bytes encrypted with a customer managed key stored for the current referenced item | 
|  current\$1version\$1object\$1count  | long | Number of current version objects stored for the current referenced item | 
|  current\$1version\$1storage\$1bytes  | DECIMAL(38,0) | Number of current version bytes stored for the current referenced item | 
|  delete\$1marker\$1object\$1count  | long | Number of delete marker objects stored for the current referenced item | 
|  delete\$1marker\$1storage\$1bytes  | DECIMAL(38,0) | Number of delete marker bytes stored for the current referenced item | 
|  encrypted\$1object\$1count  | long | Number of encrypted objects stored for the current referenced item | 
|  encrypted\$1storage\$1bytes  | DECIMAL(38,0) | Number of encrypted bytes stored for the current referenced item | 
|  incomplete\$1mpu\$1object\$1older\$1than\$17\$1days\$1count  | long | Number of incomplete multipart upload objects older than 7 days stored for the current referenced item | 
|  incomplete\$1mpu\$1storage\$1older\$1than\$17\$1days\$1bytes  | DECIMAL(38,0) | Number of incomplete multipart upload bytes stored older than 7 days for the current referenced item | 
|  incomplete\$1mpu\$1object\$1count  | long | Number of incomplete multipart upload objects stored for the current referenced item | 
|  incomplete\$1mpu\$1storage\$1bytes  | DECIMAL(38,0) | Number of incomplete multipart upload bytes stored for the current referenced item | 
|  non\$1current\$1version\$1object\$1count  | long | Number of non-current version objects stored for the current referenced item | 
|  non\$1current\$1version\$1storage\$1bytes  | DECIMAL(38,0) | Number of non-current version bytes stored for the current referenced item | 
|  object\$1lock\$1enabled\$1object\$1count  | long | Number of objects stored for for objects with lock enabled in the current referenced item | 
|  object\$1lock\$1enabled\$1storage\$1bytes  | DECIMAL(38,0) | Number of bytes stored for objects with lock enabled in the current referenced item | 
|  replicated\$1object\$1count  | long | Number of objects replicated for the current referenced item | 
|  replicated\$1storage\$1bytes  | DECIMAL(38,0) | Number of bytes replicated for the current referenced item | 
|  replicated\$1object\$1source\$1count  | long | Number of objects replicated as source stored for the current referenced item | 
|  replicated\$1storage\$1source\$1bytes  | DECIMAL(38,0) | Number of bytes replicated as source for the current referenced item | 
|  sse\$1kms\$1object\$1count  | long | Number of objects encrypted with SSE key stored for the current referenced item | 
|  sse\$1kms\$1storage\$1bytes  | DECIMAL(38,0) | Number of bytes encrypted with SSE key stored for the current referenced item | 
|  object\$10kb\$1count  | long | Number of objects with sizes equal to 0KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$10kb\$1to\$1128kb\$1count  | long | Number of objects with sizes greater than 0KB and less than equal to 128KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$1128kb\$1to\$1256kb\$1count  | long | Number of objects with sizes greater than 128KB and less than equal to 256KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$1256kb\$1to\$1512kb\$1count  | long | Number of objects with sizes greater than 256KB and less than equal to 512KB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$1512kb\$1to\$11mb\$1count  | long | Number of objects with sizes greater than 512KB and less than equal to 1MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$11mb\$1to\$12mb\$1count  | long | Number of objects with sizes greater than 1MB and less than equal to 2MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$12mb\$1to\$14mb\$1count  | long | Number of objects with sizes greater than 2MB and less than equal to 4MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$14mb\$1to\$18mb\$1count  | long | Number of objects with sizes greater than 4MB and less than equal to 8MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$18mb\$1to\$116mb\$1count  | long | Number of objects with sizes greater than 8MB and less than equal to 16MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$116mb\$1to\$132mb\$1count  | long | Number of objects with sizes greater than 16MB and less than equal to 32MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$132mb\$1to\$164mb\$1count  | long | Number of objects with sizes greater than 32MB and less than equal to 64MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$164mb\$1to\$1128mb\$1count  | long | Number of objects with sizes greater than 64MB and less than equal to 128MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$1128mb\$1to\$1256mb\$1count  | long | Number of objects sizes greater than 128MB and less than equal to 256MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$1256mb\$1to\$1512mb\$1count  | long | Number of objects sizes greater than 256MB and less than equal to 512MB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$1512mb\$1to\$11gb\$1count  | long | Number of objects sizes greater than 512MB and less than equal to 1GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$11gb\$1to\$12gb\$1count  | long | Number of objects sizes greater than 1GB and less than equal to 2GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$12gb\$1to\$14gb\$1count  | long | Number of objects sizes greater than 2GB and less than equal to 4GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 
|  object\$1larger\$1than\$14gb\$1count  | long | Number of objects sizes greater than 4GB, including current version, noncurrent versions, incomplete multipart uploads, and delete markers | 

## Bucket property metrics table schema
<a name="storage-lens-s3-tables-schemas-bucket-property"></a>


| Name | Type | Description | 
| --- | --- | --- | 
|  version\$1number  | string | Version identifier of the schema of the table | 
|  configuration\$1id  | string | S3 Storage Lens configuration name | 
|  report\$1time  | timestamptz | Date the S3 Storage Lens report refers to | 
|  aws\$1account\$1id  | string | Account id the entry refers to | 
|  record\$1type  | string | Type of record, related to what is the level of aggregation of data. Values: ACCOUNT, BUCKET, PREFIX, STORAGE\$1LENS\$1GROUP\$1BUCKET, STORAGE\$1LENS\$1GROUP\$1ACCOUNT.  | 
|  record\$1value  | string | Disambiguator for record types that have more than one record under them. It is used to reference the prefix. | 
|  aws\$1region  | string | Region | 
|  storage\$1class  | string | Storage Class | 
|  bucket\$1name  | string | Bucket name | 
|  versioning\$1enabled\$1bucket\$1count  | long | Number of buckets with versioning enabled for the current referenced item | 
|  mfa\$1delete\$1enabled\$1bucket\$1count  | long | Number of buckets with MFA delete enabled for the current referenced item | 
|  sse\$1kms\$1enabled\$1bucket\$1count  | long | Number of buckets with KMS enabled for the current referenced item | 
|  object\$1ownership\$1bucket\$1owner\$1enforced\$1bucket\$1count  | long | Number of buckets with Object Ownership bucket owner enforced for the current referenced item | 
|  object\$1ownership\$1bucket\$1owner\$1preferred\$1bucket\$1count  | long | Number of buckets with Object Ownership bucket owner preferred for the current referenced item | 
|  object\$1ownership\$1object\$1writer\$1bucket\$1count  | long | Number of buckets with Object Ownership object writer for the current referenced item | 
|  transfer\$1acceleration\$1enabled\$1bucket\$1count  | long | Number of buckets with transfer acceleration enabled for the current referenced item | 
|  event\$1notification\$1enabled\$1bucket\$1count  | long | Number of buckets with event notification enabled for the current referenced item | 
|  transition\$1lifecycle\$1rule\$1count  | long | Number of transition lifecycle rules for the current referenced item | 
|  expiration\$1lifecycle\$1rule\$1count  | long | Number of expiration lifecycle rules for the current referenced item | 
|  non\$1current\$1version\$1transition\$1lifecycle\$1rule\$1count  | long | Number of noncurrent version transition lifecycle rules for the current referenced item | 
|  non\$1current\$1version\$1expiration\$1lifecycle\$1rule\$1count  | long | Number of noncurrent version expiration lifecycle rules for the current referenced item | 
|  abort\$1incomplete\$1multipart\$1upload\$1lifecycle\$1rule\$1count  | long | Number of abort incomplete multipart upload lifecycle rules for the current referenced item | 
|  expired\$1object\$1delete\$1marker\$1lifecycle\$1rule\$1count  | long | Number of expire object delete marker lifecycle rules for the current referenced item | 
|  same\$1region\$1replication\$1rule\$1count  | long | Number of Same-Region Replication rule count for the current referenced item | 
|  cross\$1region\$1replication\$1rule\$1count  | long | Number of Cross-Region Replication rule count for the current referenced item | 
|  same\$1account\$1replication\$1rule\$1count  | long | Number of Same-account replication rule count for the current referenced item | 
|  cross\$1account\$1replication\$1rule\$1count  | long | Number of Cross-account replication rule count for the current referenced item | 
|  invalid\$1destination\$1replication\$1rule\$1count  | long | Number of buckets with Invalid destination replication for the current referenced item | 

## Activity metrics table schema
<a name="storage-lens-s3-tables-schemas-activity"></a>


| Name | Type | Description | 
| --- | --- | --- | 
|  version\$1number  | string | Version identifier of the schema of the table | 
|  configuration\$1id  | string | S3 Storage Lens configuration name | 
|  report\$1time  | timestamptz | Date the S3 Storage Lens report refers to | 
|  aws\$1account\$1id  | string | Account id the entry refers to | 
|  aws\$1region  | string | Region | 
|  storage\$1class  | string | Storage Class | 
|  record\$1type  | string | Type of record, related to what is the level of aggregation of data. Values: ACCOUNT, BUCKET, PREFIX, STORAGE\$1LENS\$1GROUP\$1BUCKET, STORAGE\$1LENS\$1GROUP\$1ACCOUNT.  | 
|  record\$1value  | string | Disambiguator for record types that have more than one record under them. It is used to reference the prefix | 
|  bucket\$1name  | string | Bucket name | 
|  all\$1request\$1count  | long | Number of \$1all\$1 requests for the current referenced item | 
|  all\$1sse\$1kms\$1encrypted\$1request\$1count  | long | Number of KMS encrypted requests for the current referenced item | 
|  all\$1unsupported\$1sig\$1request\$1count  | long | Number of unsupported sig requests for the current referenced item | 
|  all\$1unsupported\$1tls\$1request\$1count  | long | Number of unsupported TLS requests for the current referenced item | 
|  bad\$1request\$1error\$1400\$1count  | long | Number of 400 bad request errors for the current referenced item | 
|  delete\$1request\$1count  | long | Number of delete requests for the current referenced item | 
|  downloaded\$1bytes  | decimal(0,0) | Number of downloaded bytes for the current referenced item | 
|  error\$14xx\$1count  | long | Number of 4xx errors for the current referenced item | 
|  error\$15xx\$1count  | long | Number of 5xx errors for the current referenced item | 
|  forbidden\$1error\$1403\$1count  | long | Number of 403 forbidden errors for the current referenced item | 
|  get\$1request\$1count  | long | Number of get requests for the current referenced item | 
|  head\$1request\$1count  | long | Number of head requests for the current referenced item | 
|  internal\$1server\$1error\$1500\$1count  | long | Number of 500 internal server errors for the current referenced item | 
|  list\$1request\$1count  | long | Number of list requests for the current referenced item | 
|  not\$1found\$1error\$1404\$1count  | long | Number of 404 not found errors for the current referenced item | 
|  ok\$1status\$1200\$1count  | long | Number of 200 OK requests for the current referenced item | 
|  partial\$1content\$1status\$1206\$1count  | long | Number of 206 partial content requests for the current referenced item | 
|  post\$1request\$1count  | long | Number of post requests for the current referenced item | 
|  put\$1request\$1count  | long | Number of put requests for the current referenced item | 
|  select\$1request\$1count  | long | Number of select requests for the current referenced item | 
|  select\$1returned\$1bytes  | decimal(0,0) | Number of bytes returned by select requests for the current referenced item | 
|  select\$1scanned\$1bytes  | decimal(0,0) | Number of bytes scanned by select requests for the current referenced item | 
|  service\$1unavailable\$1error\$1503\$1count  | long | Number of 503 service unavailable errors for the current referenced item | 
|  uploaded\$1bytes  | decimal(0,0) | Number of uploaded bytes for the current referenced item | 
|  average\$1first\$1byte\$1latency  | long | Average per-request time between when an S3 bucket receives a complete request and when it starts returning the response, measured over the past 24 hours | 
|  average\$1total\$1request\$1latency  | long | Average elapsed per-request time between the first byte received and the last byte sent to an S3 bucket, measured over the past 24 hours | 
|  read\$10kb\$1request\$1count  | long | Number of GetObject requests with data sizes of 0KB, including both range-based requests and whole object requests | 
|  read\$10kb\$1to\$1128kb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 0KB and up to 128KB, including both range-based requests and whole object requests | 
|  read\$1128kb\$1to\$1256kb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 128KB and up to 256KB, including both range-based requests and whole object requests | 
|  read\$1256kb\$1to\$1512kb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 256KB and up to 512KB, including both range-based requests and whole object requests | 
|  read\$1512kb\$1to\$11mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 512KB and up to 1MB, including both range-based requests and whole object requests | 
|  read\$11mb\$1to\$12mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 1MB and up to 2MB, including both range-based requests and whole object requests | 
|  read\$12mb\$1to\$14mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 2MB and up to 4MB, including both range-based requests and whole object requests | 
|  read\$14mb\$1to\$18mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 4MB and up to 8MB, including both range-based requests and whole object requests | 
|  read\$18mb\$1to\$116mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 8MB and up to 16MB, including both range-based requests and whole object requests | 
|  read\$116mb\$1to\$132mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 16MB and up to 32MB, including both range-based requests and whole object requests | 
|  read\$132mb\$1to\$164mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 32MB and up to 64MB, including both range-based requests and whole object requests | 
|  read\$164mb\$1to\$1128mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 64MB and up to 128MB, including both range-based requests and whole object requests | 
|  read\$1128mb\$1to\$1256mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 128MB and up to 256MB, including both range-based requests and whole object requests | 
|  read\$1256mb\$1to\$1512mb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 256MB and up to 512MB, including both range-based requests and whole object requests | 
|  read\$1512mb\$1to\$11gb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 512MB and up to 1GB, including both range-based requests and whole object requests | 
|  read\$11gb\$1to\$12gb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 1GB and up to 2GB, including both range-based requests and whole object requests | 
|  read\$12gb\$1to\$14gb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 2GB and up to 4GB, including both range-based requests and whole object requests | 
|  read\$1larger\$1than\$14gb\$1request\$1count  | long | Number of GetObject requests with data sizes greater than 4GB, including both range-based requests and whole object requests | 
|  write\$10kb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes of 0KB | 
|  write\$10kb\$1to\$1128kb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 0KB and up to 128KB | 
|  write\$1128kb\$1to\$1256kb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 128KB and up to 256KB | 
|  write\$1256kb\$1to\$1512kb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 256KB and up to 512KB | 
|  write\$1512kb\$1to\$11mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 512KB and up to 1MB | 
|  write\$11mb\$1to\$12mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 1MB and up to 2MB | 
|  write\$12mb\$1to\$14mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 2MB and up to 4MB | 
|  write\$14mb\$1to\$18mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 4MB and up to 8MB | 
|  write\$18mb\$1to\$116mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 8MB and up to 16MB | 
|  write\$116mb\$1to\$132mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 16MB and up to 32MB | 
|  write\$132mb\$1to\$164mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 32MB and up to 64MB | 
|  write\$164mb\$1to\$1128mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 64MB and up to 128MB | 
|  write\$1128mb\$1to\$1256mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 128MB and up to 256MB | 
|  write\$1256mb\$1to\$1512mb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 256MB and up to 512MB | 
|  write\$1512mb\$1to\$11gb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 512MB and up to 1GB | 
|  write\$11gb\$1to\$12gb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 1GB and up to 2GB | 
|  write\$12gb\$1to\$14gb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 2GB and up to 4GB | 
|  write\$1larger\$1than\$14gb\$1request\$1count  | long | Number of PutObject, UploadPart, and CreateMultipartUpload requests with data sizes greater than 4GB | 
|  concurrent\$1put\$1503\$1error\$1count  | long | Number of 503 errors that are generated due to concurrent writes to the same object | 
|  cross\$1region\$1request\$1count  | long | Number of requests that originate from a client in different Region than bucket's home Region | 
|  cross\$1region\$1transferred\$1bytes  | decimal(0,0) | Number of bytes that are transferred from calls in different Region than bucket's home Region | 
|  cross\$1region\$1without\$1replication\$1request\$1count  | long | Number of requests that originate from a client in different Region than bucket's home Region, excluding cross-region replication requests | 
|  cross\$1region\$1without\$1replication\$1transferred\$1bytes  | decimal(0,0) | Number of bytes that are transferred from calls in different Region than bucket's home Region, excluding cross-region replication bytes | 
|  inregion\$1request\$1count  | long | Number of requests that originate from a client in same Region as bucket's home Region | 
|  inregion\$1transferred\$1bytes  | decimal(0,0) | Number of bytes that are transferred from calls from same Region as bucket's home Region | 
|  unique\$1objects\$1accessed\$1daily\$1count  | long | Number of objects that were accessed at least once in last 24 hrs | 

## Next steps
<a name="storage-lens-s3-tables-schemas-next-steps"></a>
+ Learn about [Permissions for S3 Storage Lens tables](storage-lens-s3-tables-permissions.md) 
+ Start [Querying S3 Storage Lens data with analytics tools](storage-lens-s3-tables-querying.md) 
+ Review the [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md) for detailed metric definitions

# Permissions for S3 Storage Lens tables
<a name="storage-lens-s3-tables-permissions"></a>

To work with S3 Storage Lens data exported to S3 Tables, you need appropriate AWS Identity and Access Management (IAM) permissions. This topic covers the permissions required for exporting metrics and managing encryption.

## Permissions for metrics export to S3 Tables
<a name="storage-lens-s3-tables-permissions-export"></a>

To create and work with S3 Storage Lens tables and table buckets, you must have certain `s3tables` permissions. At a minimum, to configure S3 Storage Lens to S3 Tables, you must have the following `s3tables` permissions:
+  `s3tables:CreateTableBucket` – This permission allows you to create an AWS-managed table bucket. All S3 Storage Lens metrics in your account are stored in a single AWS-managed table bucket named `aws-s3`. 
+  `s3tables:PutTableBucketPolicy` – S3 Storage Lens uses this permission to set a table bucket policy that allows `systemtables.s3.amazonaws.com` access to the bucket so that logs can be delivered.

**Important**  
If you remove permissions for the service principal `systemtables.s3.amazonaws.com`, S3 Storage Lens will not be able to update the S3 tables with data based on your configuration. We recommend adding other access control policies in addition to the policy already provided, instead of editing the canned policy that is added when your table bucket is set up.

**Note**  
A separate S3 table for each type of metric export is created for each Storage Lens configuration. If you have multiple Storage Lens configurations in the Region, separate tables are created for additional configurations. For example, there are three types of tables available for your S3 table bucket.

## Permissions for AWS KMS encrypted tables
<a name="storage-lens-s3-tables-permissions-kms"></a>

All data in S3 tables including S3 Storage Lens metrics are encrypted with SSE-S3 encryption by default. You can choose to encrypt your Storage Lens metrics report with AWS KMS keys (SSE-KMS). If you choose to encrypt your S3 Storage Lens metric reports with KMS keys, you must have additional permissions.

1. The user or IAM role needs the following permissions. You can grant these permissions by using the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).
   +  `kms:DescribeKey` on the AWS KMS key used

1. On the key policy for the AWS KMS key, you need the following permissions. You can grant these permissions by using the AWS KMS console at [https://console.aws.amazon.com/kms](https://console.aws.amazon.com/kms). To use this policy, replace the ` user input placeholders ` with your own information.

   ```
   {
       "Version": "2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "EnableSystemTablesKeyUsage",
               "Effect": "Allow",
               "Principal": {
                   "Service": "systemtables.s3.amazonaws.com"
               },
               "Action": [
                   "kms:DescribeKey",
                   "kms:GenerateDataKey",
                   "kms:Decrypt"
               ],
               "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-id",
               "Condition": {
                   "StringEquals": {
                       "aws:SourceAccount": "111122223333"
                   }
               }
           },
           {
               "Sid": "EnableKeyUsage",
               "Effect": "Allow",
               "Principal": {
                   "Service": "maintenance.s3tables.amazonaws.com"
               },
               "Action": [
                   "kms:GenerateDataKey",
                   "kms:Decrypt"
               ],
               "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-id",
               "Condition": {
                   "StringLike": {
                       "kms:EncryptionContext:aws:s3:arn": "<table-bucket-arn>/*"
                   }
               }
           }
       ]
   }
   ```

## Service-linked role for S3 Storage Lens
<a name="storage-lens-s3-tables-permissions-slr"></a>

S3 Storage Lens uses a service-linked role to write metrics to S3 Tables. This role is automatically created when you enable S3 Tables export for the first time in your account. The service-linked role has the following permissions:
+  `s3tables:CreateTable` - To create tables in the `aws-s3` table bucket
+  `s3tables:PutTableData` - To write metrics data to tables
+  `s3tables:GetTable` - To retrieve table metadata

You don't need to manually create or manage this service-linked role. For more information about service-linked roles, see [Using service-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html) in the *IAM User Guide*. 

## Best practices for permissions
<a name="storage-lens-s3-tables-permissions-best-practices"></a>

Follow these best practices when configuring permissions for S3 Storage Lens tables:
+  **Use least privilege** - Grant only the permissions required for specific tasks. For example, if users only need to query data, don't grant permissions to modify Storage Lens configurations.
+  **Use IAM roles** - Use IAM roles instead of long-term access keys for applications and services that access S3 Storage Lens tables.
+  **Enable AWS CloudTrail** - Enable CloudTrail logging to monitor access to S3 Storage Lens tables and track permission changes.
+  **Use resource-based policies** - When possible, use resource-based policies to control access to specific tables or namespaces.
+  **Regularly review permissions** - Periodically review and audit IAM policies and Lake Formation permissions to ensure they follow the principle of least privilege.

## Troubleshooting permissions
<a name="storage-lens-s3-tables-permissions-troubleshooting"></a>

### Access denied when enabling S3 Tables export
<a name="storage-lens-s3-tables-permissions-troubleshooting-export"></a>

 **Problem:** You receive an "access denied" error when trying to enable S3 Tables export.

 **Solution:** Verify that your IAM user or role has the `s3:PutStorageLensConfiguration` permission and the necessary S3 Tables permissions.

### Access denied when querying tables
<a name="storage-lens-s3-tables-permissions-troubleshooting-query"></a>

 **Problem:** You receive an "access denied" error when querying S3 Storage Lens tables in Amazon Athena.

 **Solution:** Verify that:
+ Analytics integration is enabled on the `aws-s3` table bucket
+ Lake Formation permissions are correctly configured
+ Your IAM user or role has the necessary Amazon Athena permissions

### KMS encryption errors
<a name="storage-lens-s3-tables-permissions-troubleshooting-kms"></a>

 **Problem:** You receive KMS-related errors when accessing encrypted tables.

 **Solution:** Verify that:
+ Your IAM policy includes the required KMS permissions
+ The KMS key policy grants permissions to the S3 Storage Lens service principal
+ The KMS key is in the same Region as your Storage Lens configuration

## Next steps
<a name="storage-lens-s3-tables-permissions-next-steps"></a>
+ Learn about [Setting Amazon S3 Storage Lens permissions](storage_lens_iam_permissions.md) 
+ Learn about [Querying S3 Storage Lens data with analytics tools](storage-lens-s3-tables-querying.md) 
+ Learn about [Using AI assistants with S3 Storage Lens tables](storage-lens-s3-tables-ai-tools.md) 

# Querying S3 Storage Lens data with analytics tools
<a name="storage-lens-s3-tables-querying"></a>

Before you can query S3 Storage Lens data exported to S3 Tables using AWS analytics services like Amazon Athena or Amazon EMR, you must enable analytics integration on the AWS-managed `aws-s3` table bucket and configure AWS Lake Formation permissions.

**Important**  
Enabling analytics integration on the "aws-s3" table bucket is a required step that is often missed. Without this configuration, you will not be able to query your S3 Storage Lens tables using AWS analytics services.

## Prerequisites
<a name="storage-lens-s3-tables-querying-prerequisites"></a>

Before you begin, ensure that you have:
+ An S3 Storage Lens configuration with S3 Tables export enabled. For more information, see [Exporting S3 Storage Lens metrics to S3 Tables](storage-lens-s3-tables-export.md) .
+ Access to Amazon Athena or another analytics service.
+ Waited 24-48 hours after enabling export for the first data to be available.

## Integration overview
<a name="storage-lens-s3-tables-querying-integration-overview"></a>

For detailed information about integrating S3 Tables with AWS analytics services, including prerequisites, IAM role configuration, and step-by-step procedures, see [Integrating Amazon S3 Tables with AWS analytics services.](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-integrating-aws.html) 

After you enable S3 Tables export and set up analytics integration, you can query your S3 Storage Lens data using AWS analytics services such as Amazon Athena, Amazon Redshift, and Amazon EMR. This enables you to perform custom analysis, create dashboards, and derive insights from your storage data using standard SQL.

## Querying with Amazon Athena
<a name="storage-lens-s3-tables-querying-athena"></a>

Amazon Athena is a serverless interactive query service that makes it easy to analyze data using standard SQL. Use the following steps to query S3 Storage Lens data in Athena.

**Note**  
In all query examples, replace `lens_my-config_exp` with your actual Storage Lens configuration namespace. For more information about namespace naming, see [Table naming for S3 Storage Lens export to S3 Tables](storage-lens-s3-tables-naming.md) .

### Example: Query top storage consumers
<a name="storage-lens-s3-tables-querying-athena-top-consumers"></a>

The following query identifies the top 10 buckets by storage consumption:

```
SELECT 
    bucket_name,
    storage_class,
    SUM(storage_bytes) / POWER(1024, 3) AS storage_gb,
    SUM(object_count) AS objects
FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."default_storage_metrics"
WHERE report_time = (
    SELECT MAX(report_time) 
    FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."default_storage_metrics"
)
    AND record_type = 'BUCKET'
    AND bucket_name != ''
GROUP BY bucket_name, storage_class
ORDER BY storage_gb DESC
LIMIT 10
```

### Example: Analyze storage growth over time
<a name="storage-lens-s3-tables-querying-athena-growth"></a>

The following query analyzes storage growth over the last 30 days:

```
SELECT 
    CAST(report_time AS date) AS report_date,
    SUM(storage_bytes) / POWER(1024, 3) AS total_storage_gb
FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."default_storage_metrics"
WHERE report_time >= current_date - interval '30' day
    AND record_type = 'ACCOUNT'
GROUP BY CAST(report_time AS date)
ORDER BY report_date DESC;
```

### Example: Identify incomplete multipart uploads
<a name="storage-lens-s3-tables-querying-athena-mpu"></a>

The following query finds buckets with incomplete multipart uploads older than 7 days:

```
SELECT 
    bucket_name,
    SUM(incomplete_mpu_storage_older_than_7_days_bytes) / POWER(1024, 3) AS wasted_storage_gb,
    SUM(incomplete_mpu_object_older_than_7_days_count) AS wasted_objects
FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."default_storage_metrics"
WHERE report_time = (
    SELECT MAX(report_time) 
    FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."default_storage_metrics"
)
    AND record_type = 'BUCKET'
    AND incomplete_mpu_storage_older_than_7_days_bytes > 0
GROUP BY bucket_name
ORDER BY wasted_storage_gb DESC;
```

### Example: Find cold data candidates
<a name="storage-lens-s3-tables-querying-athena-cold-data"></a>

The following query identifies prefixes with no activity in the last 100 days that are stored in hot storage tiers:

```
WITH recent_activity AS (
    SELECT DISTINCT 
        bucket_name,
        record_value AS prefix_path
    FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."expanded_prefixes_activity_metrics"
    WHERE report_time >= current_date - interval '100' day
        AND record_type = 'PREFIX'
        AND all_request_count > 0
)
SELECT 
    s.bucket_name,
    s.record_value AS prefix_path,
    s.storage_class,
    SUM(s.storage_bytes) / POWER(1024, 3) AS storage_gb
FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."expanded_prefixes_storage_metrics" s
LEFT JOIN recent_activity r 
    ON s.bucket_name = r.bucket_name 
    AND s.record_value = r.prefix_path
WHERE s.report_time = (
    SELECT MAX(report_time) 
    FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."expanded_prefixes_storage_metrics"
)
    AND s.record_type = 'PREFIX'
    AND s.storage_class IN ('STANDARD', 'REDUCED_REDUNDANCY')
    AND s.storage_bytes > 1073741824  -- > 1GB
    AND r.prefix_path IS NULL  -- No recent activity
GROUP BY s.bucket_name, s.record_value, s.storage_class
ORDER BY storage_gb DESC
LIMIT 20;
```

### Example: Analyze request patterns
<a name="storage-lens-s3-tables-querying-athena-requests"></a>

The following query analyzes request patterns to understand access frequency:

```
SELECT 
    bucket_name,
    SUM(all_request_count) AS total_requests,
    SUM(get_request_count) AS get_requests,
    SUM(put_request_count) AS put_requests,
    ROUND(100.0 * SUM(get_request_count) / NULLIF(SUM(all_request_count), 0), 2) AS get_percentage,
    SUM(downloaded_bytes) / POWER(1024, 3) AS downloaded_gb
FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."default_activity_metrics"
WHERE report_time >= current_date - interval '7' day
    AND record_type = 'BUCKET'
    AND bucket_name != ''
GROUP BY bucket_name
HAVING SUM(all_request_count) > 0
ORDER BY total_requests DESC
LIMIT 10;
```

## Querying with Apache Spark on Amazon EMR
<a name="storage-lens-s3-tables-querying-emr"></a>

Amazon EMR provides a managed Hadoop framework that makes it easy to process vast amounts of data using Apache Spark. You can use the Iceberg connector to read S3 Storage Lens tables directly.

### Read S3 Tables with Spark
<a name="storage-lens-s3-tables-querying-emr-spark"></a>

Use the following Python code to read S3 Storage Lens data with Spark:

```
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("S3StorageLensAnalysis") \
    .config("spark.sql.catalog.s3tablescatalog", "org.apache.iceberg.spark.SparkCatalog") \
    .config("spark.sql.catalog.s3tablescatalog.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog") \
    .getOrCreate()

# Read S3 Storage Lens data
df = spark.read \
    .format("iceberg") \
    .load("s3tablescatalog/aws-s3.lens_my-config_exp.default_storage_metrics")

# Analyze data
df.filter("record_type = 'BUCKET'") \
    .groupBy("bucket_name", "storage_class") \
    .sum("storage_bytes") \
    .orderBy("sum(storage_bytes)", ascending=False) \
    .show(10)
```

## Query optimization best practices
<a name="storage-lens-s3-tables-querying-optimization"></a>

Follow these best practices to optimize query performance and reduce costs:
+  **Filter by report\$1time** – Always include date filters to reduce the amount of data scanned. This is especially important for tables with long retention periods.

  ```
  WHERE report_time >= current_date - interval '7' day
  ```
+  **Use record\$1type filters** – Specify the appropriate aggregation level (ACCOUNT, BUCKET, PREFIX) to query only the data you need.

  ```
  WHERE record_type = 'BUCKET'
  ```
+  **Include LIMIT clauses** – Use LIMIT for exploratory queries to control result size and reduce query costs.

  ```
  LIMIT 100
  ```
+  **Filter empty records** – Use conditions to exclude empty or zero-value records.

  ```
  WHERE storage_bytes > 0
  ```
+  **Use the latest data** – When analyzing current state, filter for the most recent report\$1time to avoid scanning historical data.

  ```
  WHERE report_time = (SELECT MAX(report_time) FROM table_name)
  ```

### Example optimized query pattern
<a name="storage-lens-s3-tables-querying-optimization-example"></a>

The following query demonstrates best practices for optimization:

```
SELECT 
    bucket_name,
    SUM(storage_bytes) / POWER(1024, 3) AS storage_gb
FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."default_storage_metrics"
WHERE report_time >= current_date - interval '7' day  -- Date filter
    AND record_type = 'BUCKET'                         -- Record type filter
    AND storage_bytes > 0                              -- Non-empty filter
    AND bucket_name != ''                              -- Non-empty filter
GROUP BY bucket_name
ORDER BY storage_gb DESC
LIMIT 100;                                             -- Result limit
```

## Troubleshooting
<a name="storage-lens-s3-tables-querying-troubleshooting"></a>

### Query returns no results
<a name="storage-lens-s3-tables-querying-troubleshooting-no-results"></a>

 **Problem:** Your query completes successfully but returns no results.

 **Solution:** 
+ Verify that data is available by checking the latest report\$1time:

  ```
  SELECT MAX(report_time) AS latest_data
  FROM "s3tablescatalog/aws-s3"."lens_my-config_exp"."default_storage_metrics";
  ```
+ Ensure that you're using the correct namespace name. Use `SHOW TABLES IN `lens_my-config_exp`;` to list available tables.
+ Wait 24-48 hours after enabling S3 Tables export for the first data to be available.

### Access denied errors
<a name="storage-lens-s3-tables-querying-troubleshooting-access"></a>

 **Problem:** You receive access denied errors when running queries.

 **Solution:** Verify that AWS Lake Formation permissions are correctly configured. For more information, see [Integrating Amazon S3 Tables with AWS analytics services.](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-integrating-aws.html) 

## Next steps
<a name="storage-lens-s3-tables-querying-next-steps"></a>
+ Learn about [Using AI assistants with S3 Storage Lens tables](storage-lens-s3-tables-ai-tools.md)
+ Review the [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md) for metric definitions
+ Explore [Amazon S3 Storage Lens metrics use cases](storage-lens-use-cases.md) for more analysis ideas
+ Learn about [Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/what-is.html) for serverless querying

# Using AI assistants with S3 Storage Lens tables
<a name="storage-lens-s3-tables-ai-tools"></a>

You can use AI assistants and conversational AI tools to interact with your S3 Storage Lens data exported to S3 Tables using natural language. By leveraging the Model Context Protocol (MCP) and the MCP Server for Amazon S3 Tables, you can query, analyze, and gain insights from your storage data without writing SQL queries.

## Overview
<a name="storage-lens-s3-tables-ai-tools-overview"></a>

Model Context Protocol (MCP) is a standardized way for AI applications to access and utilize contextual information. The MCP Server for Amazon S3 Tables provides tools that enable AI assistants to interact with your S3 Tables data using natural language interfaces. This democratizes data access and enables individuals across technical skill levels to work with S3 Storage Lens metrics.

With the MCP Server for S3 Tables, you can use natural language to:
+ List S3 table buckets, namespaces, and tables
+ Query S3 Storage Lens metrics and get insights
+ Analyze storage trends and patterns
+ Identify cost optimization opportunities
+ Generate reports and visualizations

## Supported AI assistants
<a name="storage-lens-s3-tables-ai-tools-supported"></a>

The MCP Server for S3 Tables works with various AI assistants that support the Model Context Protocol, including:
+ **Kiro** - An AI coding assistant with built-in MCP support
+ **Amazon Q Developer** - AWS's AI-powered assistant for developers
+ **Cline** - An AI coding assistant with MCP integration
+ **Claude Desktop** - Anthropic's desktop application with MCP support
+ **Cursor** - An AI-powered code editor

**Important**  
AI-generated SQL queries and recommendations should be reviewed and validated before use. Verify that queries are appropriate for your data structure, use case, and performance requirements. Always test recommendations in a non-production environment before implementing them in production.

## Setting up Kiro with S3 Storage Lens tables
<a name="storage-lens-s3-tables-ai-tools-kiro-setup"></a>

Kiro is an AI coding assistant that provides seamless integration with S3 Tables through the MCP Server. Kiro can help you install and configure the MCP Server directly through its interface, simplifying the setup process.

For more information about Kiro, see [Kiro AI](https://kiro.ai/).

### Prerequisites
<a name="storage-lens-s3-tables-ai-tools-kiro-prerequisites"></a>

Before you begin, ensure that you have:
+ Kiro installed on your system. Download from [https://kiro.ai/](https://kiro.ai/)
+ AWS CLI configured with appropriate credentials
+ An S3 Storage Lens configuration with S3 Tables export enabled
+ Permissions to query S3 Tables. For more information, see [Permissions for S3 Storage Lens tables](storage-lens-s3-tables-permissions.md).

### Step 1: Install the S3 Tables MCP Server
<a name="storage-lens-s3-tables-ai-tools-kiro-step1"></a>

You can install the S3 Tables MCP Server in two ways:

**Option 1: Using Kiro's built-in MCP server management**  
Kiro can help you discover and install MCP servers directly through its interface:

1. Open Kiro

1. Access the MCP server management interface (typically through settings or command palette)

1. Search for "S3 Tables" or "awslabs.s3-tables-mcp-server"

1. Follow Kiro's prompts to install and configure the server

**Option 2: Manual installation using uvx**  
Alternatively, you can manually install the MCP Server using `uvx`, a Python package runner:

```
uvx awslabs.s3-tables-mcp-server@latest
```

For more information about installing the MCP Server, see the [AWS S3 Tables MCP Server documentation](https://awslabs.github.io/mcp/servers/s3-tables-mcp-server).

### Step 2: Configure Kiro MCP settings
<a name="storage-lens-s3-tables-ai-tools-kiro-step2"></a>

Create or update your Kiro MCP configuration file at `~/.kiro/settings/mcp.json` with the following content:

```
{
  "mcpServers": {
    "awslabs.s3-tables-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.s3-tables-mcp-server@latest"],
      "env": {
        "AWS_PROFILE": "your-aws-profile",
        "AWS_REGION": "us-east-1"
      }
    }
  }
}
```

Replace `your-aws-profile` with your AWS CLI profile name and `us-east-1` with your AWS Region.

### Step 3: Verify the configuration
<a name="storage-lens-s3-tables-ai-tools-kiro-step3"></a>

After configuring the MCP Server, restart Kiro and verify that the S3 Tables tools are available. You can check the available MCP servers in Kiro's settings or by asking Kiro to list available tools.

## Example use cases with AI assistants
<a name="storage-lens-s3-tables-ai-tools-examples"></a>

The following examples demonstrate how to use natural language prompts with AI assistants to interact with S3 Storage Lens data.

### Example 1: Query top storage consumers
<a name="storage-lens-s3-tables-ai-tools-examples-consumers"></a>

**Prompt:** "Show me the top 10 buckets by storage consumption from my S3 Storage Lens data."

The AI assistant will use the MCP Server to query your S3 Storage Lens tables and return the results, including bucket names, storage classes, and storage amounts.

### Example 2: Analyze storage growth
<a name="storage-lens-s3-tables-ai-tools-examples-growth"></a>

**Prompt:** "Analyze my storage growth over the last 30 days and show me the trend."

The AI assistant will query the storage metrics table, calculate daily storage totals, and present the growth trend.

### Example 3: Identify cost optimization opportunities
<a name="storage-lens-s3-tables-ai-tools-examples-optimization"></a>

**Prompt:** "Find buckets with incomplete multipart uploads older than 7 days that are wasting storage."

The AI assistant will query the storage metrics table for incomplete multipart uploads and provide a list of buckets with potential cost savings.

### Example 4: Find cold data candidates
<a name="storage-lens-s3-tables-ai-tools-examples-cold-data"></a>

**Prompt:** "Identify prefixes with no activity in the last 100 days that are stored in hot storage tiers."

The AI assistant will analyze both storage and activity metrics to identify data that could be moved to colder storage tiers for cost optimization.

### Example 5: Generate storage reports
<a name="storage-lens-s3-tables-ai-tools-examples-reports"></a>

**Prompt:** "Create a summary report of my S3 storage showing total storage, object counts, and request patterns for the last week."

The AI assistant will query multiple tables, aggregate the data, and generate a comprehensive report.

## Best practices for using AI assistants
<a name="storage-lens-s3-tables-ai-tools-best-practices"></a>

Follow these best practices when using AI assistants with S3 Storage Lens data:
+ **Be specific in your prompts** - Provide clear, specific instructions about what data you want to analyze and what insights you're looking for.
+ **Verify AI-generated queries** - Always review and validate the SQL queries and recommendations that the AI assistant generates before executing them or taking action. AI assistants may occasionally produce incorrect queries or recommendations that need to be verified against your specific use case and data.
+ **Use appropriate permissions** - Ensure that the IAM credentials used by the AI assistant have only the necessary permissions. For read-only analysis, grant only SELECT permissions.
+ **Monitor usage** - Track the queries executed by AI assistants using AWS CloudTrail to maintain audit trails.
+ **Start with simple queries** - Begin with straightforward queries to understand how the AI assistant interprets your prompts, then progress to more complex analysis.

## Logging and traceability
<a name="storage-lens-s3-tables-ai-tools-logging"></a>

When using the S3 Tables MCP Server with AI assistants, you have multiple ways to audit operations:
+ **Local logs** - The MCP Server logs requests and responses locally. You can specify a log directory using the `--log-dir` configuration option.
+ **AWS CloudTrail** - All S3 Tables operations via the MCP Server using PyIceberg will have `awslabs/mcp/s3-tables-mcp-server/<version>` as the user agent string. You can filter CloudTrail logs by this user agent to trace actions performed by AI assistants.
+ **AI assistant history** - AI assistants like Kiro and Cline maintain history logs that record natural language requests, LLM responses, and instructions provided to the MCP Server.

## Security considerations
<a name="storage-lens-s3-tables-ai-tools-security"></a>

When using AI assistants with S3 Storage Lens data, follow these security best practices:
+ **Use least privilege access** - Grant AI assistants only the minimum permissions required for their tasks.
+ **Enable MFA** - Use multi-factor authentication for AWS accounts that AI assistants access.
+ **Review permissions regularly** - Periodically audit the permissions granted to AI assistants and revoke unnecessary access.
+ **Use separate credentials** - Consider using separate AWS credentials for AI assistant access to facilitate tracking and auditing.
+ **Avoid sharing sensitive data** - Be cautious about sharing sensitive information in prompts to AI assistants, especially when using cloud-based AI services.

## Troubleshooting
<a name="storage-lens-s3-tables-ai-tools-troubleshooting"></a>

### AI assistant cannot connect to S3 Tables
<a name="storage-lens-s3-tables-ai-tools-troubleshooting-connection"></a>

**Problem:** The AI assistant reports that it cannot connect to S3 Tables or the MCP Server is not responding.

**Solution:**
+ Verify that the MCP Server is correctly installed using `uvx awslabs.s3-tables-mcp-server@latest --version`
+ Check that your AWS credentials are configured correctly
+ Ensure that the MCP configuration file has the correct AWS profile and region

### Access denied errors
<a name="storage-lens-s3-tables-ai-tools-troubleshooting-access"></a>

**Problem:** The AI assistant receives access denied errors when querying S3 Storage Lens tables.

**Solution:**
+ Verify that analytics integration is enabled on the `aws-s3` table bucket
+ Check that Lake Formation permissions are correctly configured
+ Ensure that the AWS credentials have the necessary IAM permissions

### Incorrect or unexpected results
<a name="storage-lens-s3-tables-ai-tools-troubleshooting-results"></a>

**Problem:** The AI assistant returns incorrect or unexpected results.

**Solution:**
+ Review the SQL query generated by the AI assistant
+ Verify that you're using the correct namespace name for your Storage Lens configuration
+ Check that data is available by querying the latest report\$1time
+ Refine your prompt to be more specific about what you want to analyze

## Additional resources
<a name="storage-lens-s3-tables-ai-tools-resources"></a>

For more information about using AI assistants with S3 Tables, see the following resources:
+ [Kiro AI](https://kiro.ai/) - AI coding assistant with built-in MCP support
+ [Implementing conversational AI for S3 Tables using Model Context Protocol (MCP)](https://aws.amazon.com/blogs/storage/implementing-conversational-ai-for-s3-tables-using-model-context-protocol-mcp/) - AWS Storage Blog
+ [AWS S3 Tables MCP Server documentation](https://awslabs.github.io/mcp/servers/s3-tables-mcp-server)
+ [Model Context Protocol specification](https://modelcontextprotocol.io/)

# Using Amazon S3 Storage Lens with AWS Organizations
<a name="storage_lens_with_organizations"></a>

Amazon S3 Storage Lens is a cloud-storage analytics feature that you can use to gain organization-wide visibility into object-storage usage and activity. You can use S3 Storage Lens metrics to generate summary insights, such as finding out how much storage you have across your entire organization or which are the fastest-growing buckets and prefixes. You can also use Amazon S3 Storage Lens to collect storage metrics and usage data for all AWS accounts that are part of your AWS Organizations hierarchy. To do this, you must be using AWS Organizations, and you must enable S3 Storage Lens trusted access by using your AWS Organizations management account.

After enabling trusted access, add delegated administrator access to accounts in your organization. The delegated administrator accounts are used to create S3 Storage Lens configurations and dashboards that collect organization-wide storage metrics and user data. For more information about enabling trusted access, see [Amazon S3 Storage Lens and AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/services-that-can-integrate-s3lens.html) in the *AWS Organizations User Guide*.

**Topics**
+ [Enabling trusted access for S3 Storage Lens](storage_lens_with_organizations_enabling_trusted_access.md)
+ [Disabling trusted access for S3 Storage Lens](storage_lens_with_organizations_disabling_trusted_access.md)
+ [Registering a delegated administrator for S3 Storage Lens](storage_lens_with_organizations_registering_delegated_admins.md)
+ [Deregistering a delegated administrator for S3 Storage Lens](storage_lens_with_organizations_deregistering_delegated_admins.md)

# Enabling trusted access for S3 Storage Lens
<a name="storage_lens_with_organizations_enabling_trusted_access"></a>

By enabling trusted access, you allow Amazon S3 Storage Lens to access your AWS Organizations hierarchy, membership, and structure through AWS Organizations API operations. S3 Storage Lens then becomes a trusted service for your entire organization's structure.

Whenever a dashboard configuration is created, S3 Storage Lens creates service-linked roles in your organization's management or delegated administrator accounts. The service-linked role grants S3 Storage Lens permission to perform the following actions: 
+ Describe organizations
+ List accounts
+ Verify a list of AWS service access for the organizations
+ Get delegated administrators for the organizations



S3 Storage Lens can then ensure that it has access to collect the cross-account metrics for the accounts in your organization. For more information, see [ Using service-linked roles for Amazon S3 Storage Lens](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-service-linked-roles.html). 

After enabling trusted access, you can assign delegated administrator access to accounts in your organization. When an account is marked as a delegated administrator for a service, the account receives authorization to access all read-only organization API operations. This access provides the delegated administrator visibility to the members and structures of your organization so that they too can create S3 Storage Lens dashboards.

**Note**  
Trusted access can only be enabled by the [management account](https://docs.aws.amazon.com/managedservices/latest/userguide/management-account.html).
 Only the management account and delegated administrators can create S3 Storage Lens dashboards or configurations for your organization.

# Using the S3 console
<a name="storage_lens_console_organizations_enabling_trusted_access"></a>

**To enable S3 Storage Lens to have AWS Organizations trusted access**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. On the left navigation pane, navigate to **Storage Lens**.

1. Choose **AWS Organizations settings**. The **AWS Organizations access for Storage Lens** page displays.

1. Under **AWS Organizations trusted access**, choose **Edit**.

   The **AWS Organizations access** page displays.

1. Choose **Enable** to enable trusted access for your S3 Storage Lens dashboard.

1. Choose **Save changes**.

# Using the AWS CLI
<a name="OrganizationsEnableTrustedAccessS3LensCLI"></a>

**Example**  
The following example shows you how to enable AWS Organizations trusted access for S3 Storage Lens in AWS CLI.  

```
aws organizations enable-aws-service-access --service-principal storage-lens.s3.amazonaws.com
```

# Using the AWS SDK for Java
<a name="OrganizationsEnableTrustedAccessS3LensJava"></a>

**Example – Enable AWS Organizations trusted access for S3 Storage Lens using SDK for Java**  
The following example shows you how to enable trusted access for S3 Storage Lens in SDK for Java. To use this example, replace the `user input placeholders` with your own information.  

```
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.organizations.AWSOrganizations;
import com.amazonaws.services.organizations.AWSOrganizationsClient;
import com.amazonaws.services.organizations.model.EnableAWSServiceAccessRequest;

public class EnableOrganizationsTrustedAccess {
	private static final String S3_STORAGE_LENS_SERVICE_PRINCIPAL = "storage-lens.s3.amazonaws.com";

	public static void main(String[] args) {
		try {
            AWSOrganizations organizationsClient = AWSOrganizationsClient.builder()
                .withCredentials(new ProfileCredentialsProvider())
                .withRegion(Regions.US_EAST_1)
                .build();

            organizationsClient.enableAWSServiceAccess(new EnableAWSServiceAccessRequest()
                .withServicePrincipal(S3_STORAGE_LENS_SERVICE_PRINCIPAL));
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but AWS Organizations couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // AWS Organizations couldn't be contacted for a response, or the client
            // couldn't parse the response from AWS Organizations.
            e.printStackTrace();
        }
	}
}
```

# Disabling trusted access for S3 Storage Lens
<a name="storage_lens_with_organizations_disabling_trusted_access"></a>

Removing an account as a delegated administrator or disabling trusted access limits the account owner's S3 Storage Lens dashboard metrics to work only on an account level. Each account holder is then only be able to see the benefits of S3 Storage Lens through the limited scope of their account, and not their entire organization.

When you disable trusted access in S3 Storage Lens, any dashboards requiring trusted access are no longer updated. Any organizational dashboards that are created are also no longer updated. Instead, you're only able to query [historic data for the S3 Storage Lens dashboard](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens_basics_metrics_recommendations.html#storage_lens_basics_data_queries), while the data is still available.

**Note**  
Disabling trusted access for S3 Storage Lens also automatically stops all organization-level dashboards from collecting and aggregating storage metrics. This is because S3 Storage Lens no longer has trusted access to the organization accounts.
Your management and delegate administrator accounts can still see the historic data for any disabled dashboards. They can also query this historic data while it is still available. 

# Using the S3 console
<a name="storage_lens_console_organizations_disabling_trusted_access"></a>

**To disable trusted access for S3 Storage Lens**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. On the left navigation pane, navigate to **Storage Lens**.

1. Choose **AWS Organizations settings**. The **AWS Organizations access for Storage Lens** page displays.

1. Under **AWS Organizations trusted access**, choose **Edit**.

   The **AWS Organizations access** page displays.

1. Choose **Disable** to disable trusted access for your S3 Storage Lens dashboard.

1. Choose **Save changes**.

# Using the AWS CLI
<a name="OrganizationsDisableTrustedAccessS3LensCLI"></a>

**Example**  
The following example disables trusted access for S3 Storage Lens using the AWS CLI.  

```
aws organizations disable-aws-service-access --service-principal storage-lens.s3.amazonaws.com
```

# Using the AWS SDK for Java
<a name="OrganizationsDisableTrustedAccessS3LensJava"></a>

**Example – Disable AWS Organizations trusted access for S3 Storage Lens**  
The following example shows you how to disable AWS Organizations trusted access for S3 Storage Lens in SDK for Java. To use this example, replace the `user input placeholders` with your own information.  

```
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.organizations.AWSOrganizations;
import com.amazonaws.services.organizations.AWSOrganizationsClient;
import com.amazonaws.services.organizations.model.DisableAWSServiceAccessRequest;

public class DisableOrganizationsTrustedAccess {
	private static final String S3_STORAGE_LENS_SERVICE_PRINCIPAL = "storage-lens.s3.amazonaws.com";

	public static void main(String[] args) {
		try {
            AWSOrganizations organizationsClient = AWSOrganizationsClient.builder()
                .withCredentials(new ProfileCredentialsProvider())
                .withRegion(Regions.US_EAST_1)
                .build();

            // Make sure to remove any existing delegated administrator for S3 Storage Lens 
            // before disabling access; otherwise, the request will fail.
            organizationsClient.disableAWSServiceAccess(new DisableAWSServiceAccessRequest()
                .withServicePrincipal(S3_STORAGE_LENS_SERVICE_PRINCIPAL));
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but AWS Organizations couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // AWS Organizations couldn't be contacted for a response, or the client
            // couldn't parse the response from AWS Organizations.
            e.printStackTrace();
        }
	}
}
```

# Registering a delegated administrator for S3 Storage Lens
<a name="storage_lens_with_organizations_registering_delegated_admins"></a>

You can create organization-level dashboards by using your organization’s management account or delegated administrator accounts. Delegated administrator accounts allow other accounts besides your management account to create organization-level dashboards. Only the management account of an organization can register and deregister other accounts as delegated administrators for the organization.

After enabling trusted access, you can register delegate administrator access to accounts in your organization by using the AWS Organizations REST API, AWS CLI, or SDKs from the [management account](https://docs.aws.amazon.com/managedservices/latest/userguide/management-account.html). (For more information, see [https://docs.aws.amazon.com/organizations/latest/APIReference/API_RegisterDelegatedAdministrator.html](https://docs.aws.amazon.com/organizations/latest/APIReference/API_RegisterDelegatedAdministrator.html) in the *AWS Organizations API Reference*.) When an account is registered as a delegated administrator, the account receives authorization to access all read-only AWS Organizations API operations. This provides visibility to the members and structures of your organization so that they can create S3 Storage Lens dashboards on your behalf.

**Note**  
Before you can designate a delegated administrator by using the AWS Organizations REST API, AWS CLI, or SDKs, you must call the [https://docs.aws.amazon.com/organizations/latest/APIReference/API_EnableAWSServiceAccess.html](https://docs.aws.amazon.com/organizations/latest/APIReference/API_EnableAWSServiceAccess.html) operation.

# Using the S3 console
<a name="storage_lens_console_organizations_registering_delegated_admins"></a>

**To register delegated administrators for S3 Storage Lens**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. On the left navigation pane, navigate to **Storage Lens**.

1.  Choose **AWS Organizations settings**.

1. Under **Delegated administrators**, choose **Register account**.

1. Add an AWS account ID to register the account as a delegated administrator. The delegated administrator is able to create organization-level dashboards for all accounts and storage in your organization.

1. Choose **Register account**.

# Using the AWS CLI
<a name="OrganizationsRegisterDelegatedAdministratorS3LensCLI"></a>

**Example**  
The following example shows you how to register Organizations delegated administrators for S3 Storage Lens using the AWS CLI. To use this example, replace the `user input placeholders` with your own information.  

```
aws organizations register-delegated-administrator --service-principal storage-lens.s3.amazonaws.com --account-id 111122223333
```

# Using the AWS SDK for Java
<a name="OrganizationsRegisterDelegatedAdministratorS3LensJava"></a>

**Example – Register Organizations delegated administrators for S3 Storage Lens**  
The following example shows you how to register AWS Organizations delegated administrators for S3 Storage Lens in SDK for Java. To use this example, replace the `user input placeholders` with your own information.  

```
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.organizations.AWSOrganizations;
import com.amazonaws.services.organizations.AWSOrganizationsClient;
import com.amazonaws.services.organizations.model.RegisterDelegatedAdministratorRequest;

public class RegisterOrganizationsDelegatedAdministrator {
	private static final String S3_STORAGE_LENS_SERVICE_PRINCIPAL = "storage-lens.s3.amazonaws.com";

	public static void main(String[] args) {
		try {
            String delegatedAdminAccountId = "111122223333"; // Account Id for the delegated administrator.
            AWSOrganizations organizationsClient = AWSOrganizationsClient.builder()
                .withCredentials(new ProfileCredentialsProvider())
                .withRegion(Regions.US_EAST_1)
                .build();

            organizationsClient.registerDelegatedAdministrator(new RegisterDelegatedAdministratorRequest()
                .withAccountId(delegatedAdminAccountId)
                .withServicePrincipal(S3_STORAGE_LENS_SERVICE_PRINCIPAL));
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but AWS Organizations couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // AWS Organizations couldn't be contacted for a response, or the client
            // couldn't parse the response from AWS Organizations.
            e.printStackTrace();
        }
	}
}
```

# Deregistering a delegated administrator for S3 Storage Lens
<a name="storage_lens_with_organizations_deregistering_delegated_admins"></a>

After enabling trusted access, you can also deregister delegate administrator access to accounts in your organization. Delegated administrator accounts allow other accounts besides your [management account](https://docs.aws.amazon.com/managedservices/latest/userguide/management-account.html) to create organization-level dashboards. Only the management account of an organization can deregister accounts as delegated administrators for the organization.

You can deregister a delegated administrator by using the AWS Organizations AWS Management Console, REST API, AWS CLI, or AWS SDKS from the management account. For more information, see [https://docs.aws.amazon.com/organizations/latest/APIReference/API_DeregisterDelegatedAdministrator.html](https://docs.aws.amazon.com/organizations/latest/APIReference/API_DeregisterDelegatedAdministrator.html) in the *AWS Organizations API Reference*.

When an account is deregistered as a delegated administrator, the account loses access to the following:
+ All read-only AWS Organizations API operations that provide visibility to the members and structures of your organization.
+ All organization-level dashboards created by the delegated administrator. Deregistering a delegated administrator also automatically stops all organization-level dashboards created by that delegated administrator from aggregating new storage metrics.
**Note**  
The deregistered delegated administrator will still be able to see the historic data for the disabled dashboards that they created if data is still available for querying.

# Using the S3 console
<a name="storage_lens_console_organizations_deregistering_delegated_admins"></a>

**To deregister delegated administrators for S3 Storage Lens**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. On the left navigation pane, navigate to **Storage Lens**.

1.  Choose **AWS Organizations settings**.

1. Under **Delegated administrators**, choose the account that you wish to deregister.

1. Choose **De-register account**. The deregistered account is no longer a delegated administrator and is now unable to create organization-level dashboards for all accounts and storage in your organization.

1. Choose **Register account**.

# Using the AWS CLI
<a name="OrganizationsDeregisterDelegatedAdministratorS3LensCLI"></a>

**Example**  
The following example shows you how to deregister Organizations delegated administrators for S3 Storage Lens using the AWS CLI. To use this example, replace `111122223333` with your own AWS account ID.  

```
aws organizations deregister-delegated-administrator --service-principal storage-lens.s3.amazonaws.com --account-id 111122223333
```

# Using the AWS SDK for Java
<a name="OrganizationsDeregisterDelegatedAdministratorS3LensJava"></a>

**Example – Deregister Organizations delegated administrators for S3 Storage Lens**  
The following example shows you how to deregister Organizations delegated administrators for S3 Storage Lens using SDK for Java. To use this example, replace the `user input placeholders` with your own information.  

```
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.organizations.AWSOrganizations;
import com.amazonaws.services.organizations.AWSOrganizationsClient;
import com.amazonaws.services.organizations.model.DeregisterDelegatedAdministratorRequest;

public class DeregisterOrganizationsDelegatedAdministrator {
	private static final String S3_STORAGE_LENS_SERVICE_PRINCIPAL = "storage-lens.s3.amazonaws.com";

	public static void main(String[] args) {
		try {
            String delegatedAdminAccountId = "111122223333"; // Account Id for the delegated administrator.
            AWSOrganizations organizationsClient = AWSOrganizationsClient.builder()
                .withCredentials(new ProfileCredentialsProvider())
                .withRegion(Regions.US_EAST_1)
                .build();

            organizationsClient.deregisterDelegatedAdministrator(new DeregisterDelegatedAdministratorRequest()
                .withAccountId(delegatedAdminAccountId)
                .withServicePrincipal(S3_STORAGE_LENS_SERVICE_PRINCIPAL));
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but AWS Organizations couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // AWS Organizations couldn't be contacted for a response, or the client
            // couldn't parse the response from AWS Organizations.
            e.printStackTrace();
        }
	}
}
```

# Working with S3 Storage Lens groups to filter and aggregate metrics
<a name="storage-lens-groups-overview"></a>

An Amazon S3 Storage Lens group aggregates metrics using custom filters based on object metadata. Storage Lens groups help you drill down into characteristics of your data, such as distribution of objects by age, your most common file types, and more. For example, you can filter metrics by object tag to identify your fastest-growing datasets, or visualize your storage based on object size and age to inform your storage archive strategy. As a result, Amazon S3 Storage Lens groups helps you to better understand and optimize your S3 storage.

When you use Storage Lens groups, you can analyze and filter S3 Storage Lens metrics using object metadata such as prefixes, suffixes, [object tags](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-tagging.html), object size, or object age. You can also apply a combination of these filters. After you attach your Storage Lens group to your S3 Storage Lens dashboard, you can view S3 Storage Lens metrics aggregated by Amazon S3 Storage Lens groups directly in your dashboard.

For example, you can also filter your metrics by object size or age bands to determine which portion of your storage consists of small objects. You can then use this information with S3 Intelligent-Tiering or S3 Lifecycle to transition small objects to different storage classes for cost and storage optimization.

**Topics**
+ [How S3 Storage Lens groups work](storage-lens-groups.md)
+ [Using Storage Lens groups](storage-lens-group-tasks.md)

# How S3 Storage Lens groups work
<a name="storage-lens-groups"></a>

You can use Storage Lens groups to aggregate metrics using custom filters based on object metadata. When you define a custom filter, you can use prefixes, suffixes, object tags, object sizes, object age, or a combination of these custom filters. During Storage Lens group creation, you can also include a single filter or multiple filter conditions. To specify multiple filter conditions, you use `And` or `Or` logical operators.

When you create and configure a Storage Lens group, the Storage Lens group itself acts as a custom filter in the dashboard that you attach the group to. In your dashboard, you can then use the Storage Lens group filter to obtain storage metrics based on the custom filter that you defined in the group. 

To view the data for your Storage Lens group in your S3 Storage Lens dashboard, you must attach the group to the dashboard after you've created the group. After your Storage Lens group is attached to your Storage Lens dashboard, your dashboard will collect storage usage metrics within 48 hours. You can then visualize this data in the Storage Lens dashboard or export it through a metrics export. If you forget to attach a Storage Lens group to a dashboard, your Storage Lens group data won’t be captured or displayed anywhere.

**Note**  
When you create a S3 Storage Lens group, you're creating an AWS resource. Therefore, each Storage Lens group has its own Amazon Resource Name (ARN), which you can specify when [attaching it to or excluding it from a S3 Storage Lens dashboard](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-lens-groups-dashboard-console.html). 
If your Storage Lens group isn't attached to a dashboard, you won't incur any additional charges for creating a Storage Lens group.
S3 Storage Lens aggregates usage metrics for an object under all matching Storage Lens groups. Therefore, if an object matches the filter conditions for two or more Storage Lens groups, you will see repeated counts for the same object across your storage usage.

You can create a Storage Lens group at the account level in a specified home Region (from the list of supported AWS Regions). Then, you can attach your Storage Lens group to multiple Storage Lens dashboards, as long as the dashboards are in the same AWS account and home Region. You can create up to 50 Storage Lens groups per home Region in each AWS account.

You can create and manage S3 Storage Lens groups by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), AWS SDKs, or the Amazon S3 REST API.

**Topics**
+ [Viewing Storage Lens group aggregated metrics](#storage-lens-group-aggregation)
+ [Storage Lens groups permissions](#storage-lens-group-permissions)
+ [Storage Lens groups configuration](#storage-lens-groups-configuration)
+ [AWS resource tags](#storage-lens-group-resource-tags)
+ [Storage Lens groups metrics export](#storage-lens-groups-metrics-export)

## Viewing Storage Lens group aggregated metrics
<a name="storage-lens-group-aggregation"></a>

You can view the aggregated metrics for your Storage Lens groups by attaching the groups to a dashboard. The Storage Lens groups that you want to attach must reside within the designated home Region in the dashboard account. 

To attach a Storage Lens group to a dashboard, you must specify the group in the **Storage Lens group aggregation** section of your dashboard configuration. If you have several Storage Lens groups, you can filter the **Storage Lens group aggregation** results to include or exclude only the groups that you want. For more information about attaching groups to your dashboards, see [Attaching or removing S3 Storage Lens groups to or from your dashboard](storage-lens-groups-dashboard-console.md). 

After you've attached your groups, you will see the additional Storage Lens group aggregation data in your dashboard within 48 hours. 

**Note**  
To view aggregated metrics for your Storage Lens group, you must attach the group to an S3 Storage Lens dashboard.

## Storage Lens groups permissions
<a name="storage-lens-group-permissions"></a>

Storage Lens groups require certain permissions in AWS Identity and Access Management (IAM) to authorize access to S3 Storage Lens group actions. To grant these permissions, you can use an identity-based IAM policy. You can attach this policy to IAM users, groups, or roles to grant them permissions. Such permissions can include the ability to create or delete Storage Lens groups, view their configurations, or manage their tags.

The IAM user or role that you grant permissions to must belong to the account that created or owns the Storage Lens group.

To use Storage Lens groups and to view your Storage Lens groups metrics, you must first have the appropriate permissions to use S3 Storage Lens. For more information, see [Setting Amazon S3 Storage Lens permissions](storage_lens_iam_permissions.md).

To create and manage S3 Storage Lens groups, you must have the following IAM permissions, depending on which actions you want to perform:


| Action | IAM permissions | 
| --- | --- | 
|  Create a new Storage Lens group  |  `s3:CreateStorageLensGroup`  | 
|  Create a new Storage Lens group with tags  |  `s3:CreateStorageLensGroup`, `s3:TagResource`  | 
|  Update an existing Storage Lens group  |  `s3:UpdateStorageLensGroup`  | 
|  Return the details of a Storage Lens group configuration  |  `s3:GetStorageLensGroup`  | 
|  List all Storage Lens groups in your home Region  |  `s3:ListStorageLensGroups`  | 
|  Delete a Storage Lens group  |  `s3:DeleteStorageLensGroup`  | 
|  List the tags that were added to your Storage Lens group  |  `s3:ListTagsForResource`  | 
|  Add or update a Storage Lens group tag for an existing Storage Lens group  |  `s3:TagResource`  | 
|  Delete a tag from a Storage Lens group  |  `s3:UntagResource`  | 

Here's an example of how to configure your IAM policy in the account that creates the Storage Lens group. To use this policy, replace `us-east-1` with the home Region that your Storage Lens group is located in. Replace `111122223333` with your AWS account ID, and replace `example-storage-lens-group` with the name of your Storage Lens group. To apply these permissions to all Storage Lens groups, replace `example-storage-lens-group` with an `*`.

```
{
    "Version": "2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "EXAMPLE-Statement-ID",
            "Effect": "Allow",
            "Action": [
                "s3:CreateStorageLensGroup",
                "s3:UpdateStorageLensGroup",
                "s3:GetStorageLensGroup",
                "s3:ListStorageLensGroups",
                "s3:DeleteStorageLensGroup,
                "s3:TagResource",
                "s3:UntagResource",
                "s3:ListTagsForResource"
                ],
            "Resource": "arn:aws:s3:us-east-1:111122223333:storage-lens-group/example-storage-lens-group"
        }
    ]
}
```

For more information about S3 Storage Lens permissions, see [Setting Amazon S3 Storage Lens permissions](storage_lens_iam_permissions.md). For more information about IAM policy language, see [Policies and permissions in Amazon S3](access-policy-language-overview.md).

## Storage Lens groups configuration
<a name="storage-lens-groups-configuration"></a>

### S3 Storage Lens group name
<a name="storage-lens-group-name"></a>

We recommend giving your Storage Lens groups names that indicate their purpose so that you can easily determine which groups you want to attach to your dashboards. To [attach a Storage Lens group to a dashboard](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-lens-groups-dashboard-console.html), you must specify the group in the **Storage Lens group aggregation** section of the dashboard configuration. 

Storage Lens group names must be unique within the account. They must not exceed 64 characters, and can contain only letters (a-z, A-Z), numbers (0-9), hyphens (`-`), and underscores (`_`).

### Home Region
<a name="storage-lens-group-home-region"></a>

The home Region is the AWS Region where your Storage Lens group is created and maintained. Your Storage Lens group is created in the same home Region as your Amazon S3 Storage Lens dashboard. The Storage Lens group configuration and metrics are also stored in this Region. You can create up to 50 Storage Lens groups in a home Region.

 After you create your Storage Lens group, you can’t edit the home Region.

### Scope
<a name="storage-lens-group-scope"></a>

To include objects in your Storage Lens group, they must be in scope for your Amazon S3 Storage Lens dashboard. The scope of your Storage Lens dashboard is determined by the buckets that you included in the **Dashboard scope** of your S3 Storage Lens dashboard configuration.

You can use different filters for your objects to define the scope of your Storage Lens group. To view these Storage Lens group metrics in your S3 Storage Lens dashboard, objects must match the filters that you include in your Storage Lens groups. For example, suppose that your Storage Lens group includes objects with the prefix `marketing` and the suffix `.png`, but no objects match those criteria. In this case, metrics for this Storage Lens group won't be generated in your daily metrics export, and no metrics for this group will be visible in your dashboard.

### Filters
<a name="storage-lens-group-filters"></a>

You can use the following filters in an S3 Storage Lens group:
+ **Prefixes** – Specifies the [prefix](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-prefixes.html) of included objects, which is a string of characters at the beginning of the object key name. For example, a value of `images` for the **Prefixes** filter includes objects with any of the following prefixes: `images/`, `images-marketing`, and `images/production`. The maximum length of a prefix is 1,024 bytes.
+ **Suffixes** – Specifies the suffix of included objects (for example, `.png`, `.jpeg`, or `.csv`). The maximum length of a suffix is 1,024 bytes.
+ **Object tags** – Specifies the list of [object tags](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-tagging.html) that you want to filter on. A tag key can't exceed 128 Unicode characters, and a tag value can’t exceed 256 Unicode characters. Note that if the object tag value field is left empty, S3 Storage Lens groups only matches the object to other objects that also have empty tag values.
+ **Age** – Specifies the object age range of included objects in days. Only integers are supported.
+ **Size** – Specifies the object size range of included objects in bytes. Only integers are supported. The maximum allowable value is 50 TB.

### Storage Lens group object tags
<a name="storage-lens-group-object-tags"></a>

You can [create a Storage Lens group](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-lens-groups-create.html) that includes up to 10 object tag filters. The following example includes two object tag key-value pairs as filters for a Storage Lens group that's named `Marketing-Department`. To use this example, replace `Marketing-Department` with the name of your group, and replace `object-tag-key-1`, `object-tag-value-1`, and so on with the object tag key-value pairs that you want to filter on.

```
{
    "Name": "Marketing-Department",
    "Filter": {
     "MatchAnyTag":[
                {
                    "Key": "object-tag-key-1",
                    "Value": "object-tag-value-1"
                },
                {
                    "Key": "object-tag-key-2",
                    "Value": "object-tag-value-2"
                }
            ]
    }
}
```

### Logical operators (`And` or `Or`)
<a name="storage-lens-group-logical-operators"></a>

To include multiple filter conditions in your Storage Lens group, you can use logical operators (either `And` or `Or`). In the following example, the Storage Lens group that's named `Marketing-Department` has an `And` operator that contains `Prefix`, `ObjectAge`, and `ObjectSize` filters. Because an `And` operator is used, only objects that match **all** of these filter conditions will be included the Storage Lens group's scope. 

To use this example, replace the `user input placeholders` with the values that you want to filter on.

```
{
    "Name": "Marketing-Department",
    "Filter": {
        "And": {
            "MatchAnyPrefix": [
                "prefix-1",
                "prefix-2",
                "prefix-3/sub-prefix-1" 
            ],
             "MatchObjectAge": {
                "DaysGreaterThan": 10,
                "DaysLessThan": 60
            },
            "MatchObjectSize": {
                "BytesGreaterThan": 10,
                "BytesLessThan": 60 
            }
        }
    }
}
```

**Note**  
If you want to include objects that match **any** of the conditions in the filters, replace the `And` logical operator with the `Or` logical operator in this example.

## AWS resource tags
<a name="storage-lens-group-resource-tags"></a>

Each S3 Storage Lens group is counted as an AWS resource with its own Amazon Resource Name (ARN). Therefore, when you configure your Storage Lens group, you can optionally add AWS resource tags to the group. You can add up to 50 tags for each Storage Lens group. To create a Storage Lens group with tags, you must have the `s3:CreateStorageLensGroup` and `s3:TagResource` permissions.

You can use AWS resource tags to categorize resources according to department, line of business, or project. Doing so is useful when you have many resources of the same type. By applying tags, you can quickly identify a specific Storage Lens group based on the tags that you've assigned to it. You can also use tags to track and allocate costs.

In addition, when you add an AWS resource tag to your Storage Lens group, you activate [attribute-based access control (ABAC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html). ABAC is an authorization strategy that defines permissions based on attributes, in this case tags. You can also use conditions that specify resource tags in your IAM policies to [control access to AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-resources).

You can edit tag keys and values, and you can remove tags from a resource at any time. Also, be aware of the following limitations:
+ Tag keys and tag values are case sensitive.
+ If you add a tag that has the same key as an existing tag on that resource, the new value overwrites the old value.
+ If you delete a resource, any tags for the resource are also deleted. 
+ Don't include private or sensitive data in your AWS resource tags.
+ System tags (or tags with tag keys that begin with `aws:`) aren't supported.
+ The length of each tag key can't exceed 128 characters. The length of each tag value can't exceed 256 characters.

## Storage Lens groups metrics export
<a name="storage-lens-groups-metrics-export"></a>

S3 Storage Lens group metrics are included in the [Amazon S3 Storage Lens metrics export](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens_understanding_metrics_export_schema.html) for the dashboard that the Storage Lens group is attached to. For general information about the Storage Lens metrics export feature, see [Viewing Amazon S3 Storage Lens metrics using a data export](storage_lens_view_metrics_export.md).

Your metrics export for Storage Lens groups includes any S3 Storage Lens metrics that are in scope for the dashboard that you attached the Storage Lens group to. The export also includes additional metrics data for Storage Lens groups.

After you create your Storage Lens group, your metrics export is sent daily to the bucket that you selected when you configured the metrics export for the dashboard that your group is attached to. It can take up to 48 hours for you to receive the first metrics export. 

To generate metrics in the daily export, objects must match the filters that you include in your Storage Lens groups. If no objects match the filters that you included in your Storage Lens group, then no metrics will be generated. However, if an object matches two or more Storage Lens groups, the object is listed separately for each group when it appears in the metrics export.

You can identify metrics for Storage Lens groups by looking for one of the following values in the `record_type` column of the metrics export for your dashboard:
+ `STORAGE_LENS_GROUP_BUCKET`
+ `STORAGE_LENS_GROUP_ACCOUNT`

The `record_value` column displays the resource ARN for the Storage Lens group (for example, `arn:aws:s3:us-east-1:111122223333:storage-lens-group/Marketing-Department`).

# Using Storage Lens groups
<a name="storage-lens-group-tasks"></a>

Amazon S3 Storage Lens groups aggregates metrics using custom filters based on object metadata. You can analyze and filter S3 Storage Lens metrics using prefixes, suffixes, object tags, object size, or object age. With Amazon S3 Storage Lens groups, you can also categorize your usage within and across Amazon S3 buckets. As a result, you'll be able to better understand and optimize your S3 storage.

To start visualizing the data for a Storage Lens group, you must first [attach your Storage Lens group to an S3 Storage Lens dashboard](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-lens-groups-dashboard-console.html#storage-lens-groups-attach-dashboard-console). If you need to manage Storage Lens groups in the dashboard, you can edit the dashboard configuration. To check which Storage Lens groups are under your account, you can list them. To check which Storage Lens groups are attached to your dashboard, you can always check the **Storage Lens groups** tab in the dashboard. To review or update the scope of an existing Storage Lens group, you can view its details. You can also permanently delete a Storage Lens group.

To manage permissions, you can create and add user-defined AWS resource tags to your Storage Lens groups. You can use AWS resource tags to categorize resources according to department, line of business, or project. Doing so is useful when you have many resources of the same type. By applying tags, you can quickly identify a specific Storage Lens group based on the tags that you've assigned to it. 

In addition, when you add an AWS resource tag to your Storage Lens group, you activate [attribute-based access control (ABAC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html). ABAC is an authorization strategy that defines permissions based on attributes, in this case tags. You can also use conditions that specify resource tags in your IAM policies to [control access to AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-resources).

**Topics**
+ [Creating a Storage Lens group](storage-lens-groups-create.md)
+ [Attaching or removing S3 Storage Lens groups to or from your dashboard](storage-lens-groups-dashboard-console.md)
+ [Visualizing your Storage Lens groups data](storage-lens-groups-visualize.md)
+ [Updating a Storage Lens group](storage-lens-groups-update.md)
+ [Managing AWS resource tags with Storage Lens groups](storage-lens-groups-manage-tags.md)
+ [Listing all Storage Lens groups](storage-lens-groups-list.md)
+ [Viewing Storage Lens group details](storage-lens-groups-view.md)
+ [Deleting a Storage Lens group](storage-lens-groups-delete.md)

# Creating a Storage Lens group
<a name="storage-lens-groups-create"></a>

The following examples demonstrate how to create an Amazon S3 Storage Lens group by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), and AWS SDK for Java.

## Using the S3 console
<a name="create-storage-lens-group-console"></a>

**To create a Storage Lens group**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to switch to. 

1. In the left navigation pane, choose **Storage Lens groups**.

1. Choose **Create Storage Lens group**.

1. Under **General**, view your **Home Region** and enter your **Storage Lens group name**.

1. Under **Scope**, choose the filter that you want to apply to your Storage Lens group. To apply multiple filters, choose your filters, and then choose the **AND** or **OR** logical operator.
   + For the **Prefixes** filter, choose **Prefixes**, and enter a prefix string. To add multiple prefixes, choose **Add prefix**. To remove a prefix, choose **Remove** next to the prefix that you want to remove.
   + For the **Object tags** filter, choose **Object tags**, and enter the key-value pair for your object. Then, choose **Add tag**. To remove a tag, choose **Remove** next to the tag that you want to remove.
   + For the **Suffixes** filter, choose **Suffixes**, and enter a suffix string. To add multiple suffixes, choose **Add suffix**. To remove a suffix, choose **Remove** next to the suffix that you want to remove.
   + For the **Age** filter, specify the object age range in days. Choose **Specify minimum object age**, and enter the minimum object age. Then, choose **Specify maximum object age**, and enter the maximum object age.
   + For the **Size** filter, specify the object size range and unit of measurement. Choose **Specify minimum object size**, and enter the minimum object size. Choose **Specify maximum object size**, and enter the maximum object size.

1. (Optional) For AWS resource tags, add the key-value pair, and then choose **Add tag**.

1. Choose **Create Storage Lens group**.

## Using the AWS CLI
<a name="create-storage-lens-group-cli"></a>

The following example AWS CLI command creates a Storage Lens group. To use this example command, replace the `user input placeholders` with your own information.

```
aws s3control create-storage-lens-group --account-id 111122223333 \ 
--region us-east-1 --storage-lens-group=file://./marketing-department.json
```

The following example AWS CLI command creates a Storage Lens group with two AWS resource tags. To use this example command, replace the `user input placeholders` with your own information.

```
aws s3control create-storage-lens-group --account-id 111122223333 \ 
--region us-east-1 --storage-lens-group=file://./marketing-department.json \
--tags Key=k1,Value=v1 Key=k2,Value=v2
```

For example JSON configurations, see [Storage Lens groups configuration](storage-lens-groups.md#storage-lens-groups-configuration).

## Using the AWS SDK for Java
<a name="create-storage-lens-group-sdk-java"></a>

The following AWS SDK for Java example creates a Storage Lens group. To use this example, replace the `user input placeholders` with your own information.

**Example – Create a Storage Lens group with a single filter**  
The following example creates a Storage Lens group named `Marketing-Department`. This group has an object age filter that specifies the age range as `30` to `90` days. To use this example, replace the `user input placeholders` with your own information.  

```
package aws.example.s3control;
 
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.CreateStorageLensGroupRequest;
import software.amazon.awssdk.services.s3control.model.MatchObjectAge;
import software.amazon.awssdk.services.s3control.model.StorageLensGroup;
import software.amazon.awssdk.services.s3control.model.StorageLensGroupFilter;
 
public class CreateStorageLensGroupWithObjectAge {
    public static void main(String[] args) {
        String storageLensGroupName = "Marketing-Department";
        String accountId = "111122223333";
        
        try {
            StorageLensGroupFilter objectAgeFilter = StorageLensGroupFilter.builder()
                    .matchObjectAge(MatchObjectAge.builder()
                            .daysGreaterThan(30)
                            .daysLessThan(90)
                            .build())
                    .build();

            StorageLensGroup storageLensGroup = StorageLensGroup.builder()
                    .name(storageLensGroupName)
                    .filter(objectAgeFilter)
                    .build();

            CreateStorageLensGroupRequest createStorageLensGroupRequest = CreateStorageLensGroupRequest.builder()
                    .storageLensGroup(storageLensGroup)
                    .accountId(accountId).build();

            S3ControlClient s3ControlClient = S3ControlClient.builder()
                    .region(Region.US_WEST_2)
                    .credentialsProvider(ProfileCredentialsProvider.create())
                    .build();
            s3ControlClient.createStorageLensGroup(createStorageLensGroupRequest);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

**Example – Create a Storage Lens group with an `AND` operator that includes multiple filters**  
The following example creates a Storage Lens group named `Marketing-Department`. This group uses the `AND` operator to indicate that objects must match **all** of the filter conditions. To use this example, replace the `user input placeholders` with your own information.   

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.CreateStorageLensGroupRequest;
import software.amazon.awssdk.services.s3control.model.MatchObjectAge;
import software.amazon.awssdk.services.s3control.model.MatchObjectSize;
import software.amazon.awssdk.services.s3control.model.S3Tag;
import software.amazon.awssdk.services.s3control.model.StorageLensGroup;
import software.amazon.awssdk.services.s3control.model.StorageLensGroupAndOperator;
import software.amazon.awssdk.services.s3control.model.StorageLensGroupFilter;


public class CreateStorageLensGroupWithAndFilter {
    public static void main(String[] args) {
        String storageLensGroupName = "Marketing-Department";
        String accountId = "111122223333";

        try {
            // Create object tags.
            S3Tag tag1 = S3Tag.builder()
                    .key("object-tag-key-1")
                    .value("object-tag-value-1")
                    .build();
            S3Tag tag2 = S3Tag.builder()
                    .key("object-tag-key-2")
                    .value("object-tag-value-2")
                    .build();

            StorageLensGroupAndOperator andOperator = StorageLensGroupAndOperator.builder()
                    .matchAnyPrefix("prefix-1", "prefix-2", "prefix-3/sub-prefix-1")
                    .matchAnySuffix(".png", ".gif", ".jpg")
                    .matchAnyTag(tag1, tag2)
                    .matchObjectAge(MatchObjectAge.builder()
                            .daysGreaterThan(30)
                            .daysLessThan(90).build())
                    .matchObjectSize(MatchObjectSize.builder()
                            .bytesGreaterThan(1000L)
                            .bytesLessThan(6000L).build())
                    .build();

            StorageLensGroupFilter andFilter = StorageLensGroupFilter.builder()
                    .and(andOperator)
                    .build();

            StorageLensGroup storageLensGroup = StorageLensGroup.builder()
                    .name(storageLensGroupName)
                    .filter(andFilter)
                    .build();

            CreateStorageLensGroupRequest createStorageLensGroupRequest = CreateStorageLensGroupRequest.builder()
                    .storageLensGroup(storageLensGroup)
                    .accountId(accountId).build();

            S3ControlClient s3ControlClient = S3ControlClient.builder()
                    .region(Region.US_WEST_2)
                    .credentialsProvider(ProfileCredentialsProvider.create())
                    .build();
            s3ControlClient.createStorageLensGroup(createStorageLensGroupRequest);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

**Example – Create a Storage Lens group with an `OR` operator that includes multiple filters**  
The following example creates a Storage Lens group named `Marketing-Department`. This group uses an `OR` operator to apply a prefix filter (`prefix-1`, `prefix-2`, `prefix3/sub-prefix-1`) or an object size filter with a size range between `1000` bytes and `6000` bytes. To use this example, replace the `user input placeholders` with your own information.  

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.CreateStorageLensGroupRequest;
import software.amazon.awssdk.services.s3control.model.MatchObjectSize;
import software.amazon.awssdk.services.s3control.model.StorageLensGroup;
import software.amazon.awssdk.services.s3control.model.StorageLensGroupFilter;
import software.amazon.awssdk.services.s3control.model.StorageLensGroupOrOperator;

public class CreateStorageLensGroupWithOrFilter {
    public static void main(String[] args) {
        String storageLensGroupName = "Marketing-Department";
        String accountId = "111122223333";

        try {
            StorageLensGroupOrOperator orOperator = StorageLensGroupOrOperator.builder()
                    .matchAnyPrefix("prefix-1", "prefix-2", "prefix-3/sub-prefix-1")
                    .matchObjectSize(MatchObjectSize.builder()
                            .bytesGreaterThan(1000L)
                            .bytesLessThan(6000L)
                            .build())
                    .build();

            StorageLensGroupFilter orFilter = StorageLensGroupFilter.builder()
                    .or(orOperator)
                    .build();

            StorageLensGroup storageLensGroup = StorageLensGroup.builder()
                    .name(storageLensGroupName)
                    .filter(orFilter)
                    .build();

            CreateStorageLensGroupRequest createStorageLensGroupRequest = CreateStorageLensGroupRequest.builder()
                    .storageLensGroup(storageLensGroup)
                    .accountId(accountId).build();

            S3ControlClient s3ControlClient = S3ControlClient.builder()
                    .region(Region.US_WEST_2)
                    .credentialsProvider(ProfileCredentialsProvider.create())
                    .build();
            s3ControlClient.createStorageLensGroup(createStorageLensGroupRequest);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

**Example – Create a Storage Lens group with a single filter and two AWS resource tags**  
The following example creates a Storage Lens group named `Marketing-Department` that has a suffix filter. This example also adds two AWS resource tags to the Storage Lens group. To use this example, replace the `user input placeholders` with your own information.  

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.CreateStorageLensGroupRequest;
import software.amazon.awssdk.services.s3control.model.StorageLensGroup;
import software.amazon.awssdk.services.s3control.model.StorageLensGroupFilter;
import software.amazon.awssdk.services.s3control.model.Tag;

public class CreateStorageLensGroupWithResourceTags {
    public static void main(String[] args) {
        String storageLensGroupName = "Marketing-Department";
        String accountId = "111122223333";

        try {
            // Create AWS resource tags.
            Tag resourceTag1 = Tag.builder()
                    .key("resource-tag-key-1")
                    .value("resource-tag-value-1")
                    .build();
            Tag resourceTag2 = Tag.builder()
                    .key("resource-tag-key-2")
                    .value("resource-tag-value-2")
                    .build();

            StorageLensGroupFilter suffixFilter = StorageLensGroupFilter.builder()
                    .matchAnySuffix(".png", ".gif", ".jpg")
                    .build();

            StorageLensGroup storageLensGroup = StorageLensGroup.builder()
                    .name(storageLensGroupName)
                    .filter(suffixFilter)
                    .build();

            CreateStorageLensGroupRequest createStorageLensGroupRequest = CreateStorageLensGroupRequest.builder()
                    .storageLensGroup(storageLensGroup)
                    .tags(resourceTag1, resourceTag2)
                    .accountId(accountId).build();

            S3ControlClient s3ControlClient = S3ControlClient.builder()
                    .region(Region.US_WEST_2)
                    .credentialsProvider(ProfileCredentialsProvider.create())
                    .build();
            s3ControlClient.createStorageLensGroup(createStorageLensGroupRequest);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

For example JSON configurations, see [Storage Lens groups configuration](storage-lens-groups.md#storage-lens-groups-configuration).

# Attaching or removing S3 Storage Lens groups to or from your dashboard
<a name="storage-lens-groups-dashboard-console"></a>

After you've upgraded to the advanced tier in Amazon S3 Storage Lens, you can attach a [Storage Lens group](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-lens-groups-overview.html) to your dashboard. If you have several Storage Lens groups, you can include or exclude the groups that you want. 

Your Storage Lens groups must reside within the designated home Region in the dashboard account. After you attach a Storage Lens group to your dashboard, you'll receive the additional Storage Lens group aggregation data in your metrics export within 48 hours.

**Note**  
If you want to view aggregated metrics for your Storage Lens group, you must attach it to your Storage Lens dashboard. For examples of Storage Lens group JSON configuration files, see [S3 Storage Lens example configuration with Storage Lens groups in JSON](S3LensHelperFilesCLI.md#StorageLensGroupsHelperFilesCLI). 

## Using the S3 console
<a name="storage-lens-groups-attach-dashboard-console"></a>

**To attach a Storage Lens group to a Storage Lens dashboard**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, under **Storage Lens**, choose **Dashboards**.

1. Choose the option button for the Storage Lens dashboard that you want to attach a Storage Lens group to.

1. Choose **Edit**.

1. Under **Metrics selection**, choose **Advanced metrics and recommendations**.

1. Select **Storage Lens group aggregation**.
**Note**  
By default, **Advanced metrics** is also selected. However, you can also deselect this setting as it's not required to aggregate Storage Lens groups data.

1. Scroll down to **Storage Lens group aggregation** and specify the Storage Lens group or groups that you either want to include or exclude in the data aggregation. You can use the following filtering options:
   + If you want to include certain Storage Lens groups, choose **Include Storage Lens groups**. Under **Storage Lens groups to include**, select your Storage Lens groups.
   + If you want to include all Storage Lens groups, select **Include all Storage Lens groups in home Region in this account**.
   + If you want to exclude certain Storage Lens groups, choose **Exclude Storage Lens groups**. Under **Storage Lens groups to exclude**, select the Storage Lens groups that you want to exclude.

1. Choose **Save changes**. If you've configured your Storage Lens groups correctly, you will see the additional Storage Lens group aggregation data in your dashboard within 48 hours.

**To remove a Storage Lens group from an S3 Storage Lens dashboard**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, under **Storage Lens**, choose **Dashboards**.

1. Choose the option button for the Storage Lens dashboard that you want to remove a Storage Lens group from.

1. Choose **View dashboard configuration**.

1. Choose **Edit**.

1. Scroll down to the **Metrics selection** section.

1. Under **Storage Lens group aggregation**, choose the **X** next to the Storage Lens group that you want to remove. This removes your Storage Lens group.

   If you included all of your Storage Lens groups in your dashboard, clear the check box next to **Include all Storage Lens groups in home Region in this account**. 

1. Choose **Save changes**.
**Note**  
It will take up to 48 hours for your dashboard to reflect the configuration updates.

## Using the AWS SDK for Java
<a name="StorageLensGroupsConfigurationJava"></a>

**Example – Attach all Storage Lens groups to a dashboard**  
The following SDK for Java example attaches all Storage Lens groups in the account *111122223333* to the *DashBoardConfigurationId* dashboard:  

```
package aws.example.s3control;


import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.services.s3control.AWSS3Control;
import com.amazonaws.services.s3control.AWSS3ControlClient;
import com.amazonaws.services.s3control.model.BucketLevel;
import com.amazonaws.services.s3control.model.PutStorageLensConfigurationRequest;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3control.model.AccountLevel;
import com.amazonaws.services.s3control.model.StorageLensConfiguration;
import com.amazonaws.services.s3control.model.StorageLensGroupLevel;

import static com.amazonaws.regions.Regions.US_WEST_2;

public class CreateDashboardWithStorageLensGroups {
    public static void main(String[] args) {
        String configurationId = "ExampleDashboardConfigurationId";
        String sourceAccountId = "111122223333";

        try {
            StorageLensGroupLevel storageLensGroupLevel = new StorageLensGroupLevel();

            AccountLevel accountLevel = new AccountLevel()
                    .withBucketLevel(new BucketLevel())
                    .withStorageLensGroupLevel(storageLensGroupLevel);

            StorageLensConfiguration configuration = new StorageLensConfiguration()
                    .withId(configurationId)
                    .withAccountLevel(accountLevel)
                    .withIsEnabled(true);

            AWSS3Control s3ControlClient = AWSS3ControlClient.builder()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(US_WEST_2)
                    .build();

            s3ControlClient.putStorageLensConfiguration(new PutStorageLensConfigurationRequest()
                    .withAccountId(sourceAccountId)
                    .withConfigId(configurationId)
                    .withStorageLensConfiguration(configuration)
            );
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

**Example – Attach two Storage Lens groups to a dashboard**  
The following AWS SDK for Java example attaches two Storage Lens groups (*StorageLensGroupName1* and *StorageLensGroupName2*) to the *ExampleDashboardConfigurationId* dashboard.  

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3control.AWSS3Control;
import com.amazonaws.services.s3control.AWSS3ControlClient;
import com.amazonaws.services.s3control.model.AccountLevel;
import com.amazonaws.services.s3control.model.BucketLevel;
import com.amazonaws.services.s3control.model.PutStorageLensConfigurationRequest;
import com.amazonaws.services.s3control.model.StorageLensConfiguration;
import com.amazonaws.services.s3control.model.StorageLensGroupLevel;
import com.amazonaws.services.s3control.model.StorageLensGroupLevelSelectionCriteria;

import static com.amazonaws.regions.Regions.US_WEST_2;

public class CreateDashboardWith2StorageLensGroups {
    public static void main(String[] args) {
        String configurationId = "ExampleDashboardConfigurationId";
        String storageLensGroupName1 = "StorageLensGroupName1";
        String storageLensGroupName2 = "StorageLensGroupName2";
        String sourceAccountId = "111122223333";

        try {
            StorageLensGroupLevelSelectionCriteria selectionCriteria = new StorageLensGroupLevelSelectionCriteria()
                    .withInclude(
                            "arn:aws:s3:" + US_WEST_2.getName() + ":" + sourceAccountId + ":storage-lens-group/" + storageLensGroupName1,
                            "arn:aws:s3:" + US_WEST_2.getName() + ":" + sourceAccountId + ":storage-lens-group/" + storageLensGroupName2);

            System.out.println(selectionCriteria);
            StorageLensGroupLevel storageLensGroupLevel = new StorageLensGroupLevel()
                    .withSelectionCriteria(selectionCriteria);

            AccountLevel accountLevel = new AccountLevel()
                    .withBucketLevel(new BucketLevel())
                    .withStorageLensGroupLevel(storageLensGroupLevel);

            StorageLensConfiguration configuration = new StorageLensConfiguration()
                    .withId(configurationId)
                    .withAccountLevel(accountLevel)
                    .withIsEnabled(true);

            AWSS3Control s3ControlClient = AWSS3ControlClient.builder()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(US_WEST_2)
                    .build();

            s3ControlClient.putStorageLensConfiguration(new PutStorageLensConfigurationRequest()
                    .withAccountId(sourceAccountId)
                    .withConfigId(configurationId)
                    .withStorageLensConfiguration(configuration)
            );
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

**Example – Attach all Storage Lens groups with exclusions**  
The following SDK for Java example attaches all Storage Lens groups to the *ExampleDashboardConfigurationId* dashboard, excluding the two specified (*StorageLensGroupName1* and *StorageLensGroupName2*):  

```
package aws.example.s3control;


import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3control.AWSS3Control;
import com.amazonaws.services.s3control.AWSS3ControlClient;
import com.amazonaws.services.s3control.model.AccountLevel;
import com.amazonaws.services.s3control.model.BucketLevel;
import com.amazonaws.services.s3control.model.PutStorageLensConfigurationRequest;
import com.amazonaws.services.s3control.model.StorageLensConfiguration;
import com.amazonaws.services.s3control.model.StorageLensGroupLevel;
import com.amazonaws.services.s3control.model.StorageLensGroupLevelSelectionCriteria;

import static com.amazonaws.regions.Regions.US_WEST_2;

public class CreateDashboardWith2StorageLensGroupsExcluded {
    public static void main(String[] args) {
        String configurationId = "ExampleDashboardConfigurationId";
        String storageLensGroupName1 = "StorageLensGroupName1";
        String storageLensGroupName2 = "StorageLensGroupName2";
        String sourceAccountId = "111122223333";

        try {
            StorageLensGroupLevelSelectionCriteria selectionCriteria = new StorageLensGroupLevelSelectionCriteria()
                    .withInclude(
                            "arn:aws:s3:" + US_WEST_2.getName() + ":" + sourceAccountId + ":storage-lens-group/" + storageLensGroupName1,
                            "arn:aws:s3:" + US_WEST_2.getName() + ":" + sourceAccountId + ":storage-lens-group/" + storageLensGroupName2);

            System.out.println(selectionCriteria);
            StorageLensGroupLevel storageLensGroupLevel = new StorageLensGroupLevel()
                    .withSelectionCriteria(selectionCriteria);

            AccountLevel accountLevel = new AccountLevel()
                    .withBucketLevel(new BucketLevel())
                    .withStorageLensGroupLevel(storageLensGroupLevel);

            StorageLensConfiguration configuration = new StorageLensConfiguration()
                    .withId(configurationId)
                    .withAccountLevel(accountLevel)
                    .withIsEnabled(true);

            AWSS3Control s3ControlClient = AWSS3ControlClient.builder()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(US_WEST_2)
                    .build();

            s3ControlClient.putStorageLensConfiguration(new PutStorageLensConfigurationRequest()
                    .withAccountId(sourceAccountId)
                    .withConfigId(configurationId)
                    .withStorageLensConfiguration(configuration)
            );
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

# Visualizing your Storage Lens groups data
<a name="storage-lens-groups-visualize"></a>

You can visualize your Storage Lens groups data by [attaching the group to your Amazon S3 Storage Lens dashboard](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-lens-groups-dashboard-console.html#storage-lens-groups-attach-dashboard-console). After you've included the Storage Lens group in the Storage Lens group aggregation in your dashboard configuration, it can take up to 48 hours for the Storage Lens group data to display in your dashboard.

After the dashboard configuration has been updated, any newly attached Storage Lens groups appear in the list of available resources under the **Storage Lens groups** tab. You can also further analyze storage usage in your **Overview** tab by slicing the data by another dimension. For example, you can choose one of the items listed under the **Top 3** categories and choose **Analyze by** to slice the data by another dimension. You can't apply the same dimension as the filter itself.

**Note**  
You can't apply a Storage Lens group filter along with a prefix filter, or the reverse. You also can't further analyze a Storage Lens group by using a prefix filter.

You can use the **Storage Lens groups** tab in the Amazon S3 Storage Lens dashboard to customize the data visualization for the Storage Lens groups that are attached to your dashboard. You can either visualize the data for some Storage Lens groups that are attached to your dashboard, or all of them.

When visualizing Storage Lens group data in your S3 Storage Lens dashboard, be aware of the following:
+ S3 Storage Lens aggregates usage metrics for an object under all matching Storage Lens groups. Therefore, if an object matches the filter conditions for two or more Storage Lens groups, you will see repeated counts for the same object across your storage usage.
+ Objects must match the filters that you include in your Storage Lens groups. If no objects match the filters that you include in your Storage Lens group, then no metrics are generated. To determine if there are any unassigned objects, check your total object count in the dashboard at the account level and bucket level.

# Updating a Storage Lens group
<a name="storage-lens-groups-update"></a>

The following examples demonstrate how to update an Amazon S3 Storage Lens group. You can update a Storage Lens group by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), and AWS SDK for Java.

## Using the S3 console
<a name="update-storage-lens-group-console"></a>

**To update a Storage Lens group**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens groups**.

1. Under **Storage Lens groups**, choose the Storage Lens group that you want to update.

1. Under **Scope**, choose **Edit**.

1. On the **Scope** page, select the filter that you want to apply to your Storage Lens group. To apply multiple filters, select your filters, and choose the **AND** or **OR** logical operator.
   + For the **Prefixes** filter, select **Prefixes**, and enter a prefix string. To add multiple prefixes, choose **Add prefix**. To remove a prefix, choose **Remove** next to the prefix that you want to remove.
   + For the **Object tags** filter, enter the key-value pair for your object. Then, choose **Add tag**. To remove a tag, choose **Remove** next to the tag that you want to remove.
   + For the **Suffixes** filter, select **Suffixes**, and enter a suffix string. To add multiple suffixes, choose **Add suffix**. To remove a suffix, choose **Remove** next to the suffix that you want to remove.
   + For the **Age** filter, specify the object age range in days. Choose **Specify minimum object age**, and enter the minimum object age. For **Specify maximum object age**, enter the maximum object age.
   + For the **Size** filter, specify the object size range and unit of measurement. Choose **Specify minimum object size**, and enter the minimum object size. For **Specify maximum object size**, enter the maximum object size.

1. Choose **Save changes**. The details page for the Storage Lens group appears. 

1. (Optional) If you want to add a new AWS resource tag, scroll to the **AWS resource tags** section, then choose **Add tags**. The **Add tags** page appears. 

   Add the new key-value pair, then choose **Save changes**. The details page for the Storage Lens group appears.

1. (Optional) If you want to remove an existing AWS resource tag, scroll to the **AWS resource tags** section, and select the resource tag. Then, choose **Delete**. The **Delete AWS tags** dialog box appears. 

   Choose **Delete** again to permanently delete the AWS resource tag.
**Note**  
After you permanently delete an AWS resource tag, it can’t be restored.

## Using the AWS CLI
<a name="update-storage-lens-group-cli"></a>

The following AWS CLI example command returns the configuration details for a Storage Lens group named `marketing-department`. To use this example command, replace the `user input placeholders` with your own information.



```
aws s3control get-storage-lens-group --account-id 111122223333 \ 
--region us-east-1 --name marketing-department
```

The following AWS CLI example updates a Storage Lens group. To use this example command, replace the `user input placeholders` with your own information. 

```
aws s3control update-storage-lens-group --account-id 111122223333 \ 
--region us-east-1 --storage-lens-group=file://./marketing-department.json
```

For example JSON configurations, see [Storage Lens groups configuration](storage-lens-groups.md#storage-lens-groups-configuration).

## Using the AWS SDK for Java
<a name="update-storage-lens-group-sdk-java"></a>

The following AWS SDK for Java example returns the configuration details for the `Marketing-Department` Storage Lens group in account `111122223333`. To use this example, replace the `user input placeholders` with your own information.

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.GetStorageLensGroupRequest;
import software.amazon.awssdk.services.s3control.model.GetStorageLensGroupResponse;

public class GetStorageLensGroup {
    public static void main(String[] args) {
        String storageLensGroupName = "Marketing-Department";
        String accountId = "111122223333";

        try {
            GetStorageLensGroupRequest getRequest = GetStorageLensGroupRequest.builder()
                    .name(storageLensGroupName)
                    .accountId(accountId).build();
            S3ControlClient s3ControlClient = S3ControlClient.builder()
                    .region(Region.US_WEST_2)
                    .credentialsProvider(ProfileCredentialsProvider.create())
                    .build();
            GetStorageLensGroupResponse response = s3ControlClient.getStorageLensGroup(getRequest);
            System.out.println(response);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

The following example updates the Storage Lens group named `Marketing-Department` in account `111122223333`. This example updates the dashboard scope to include objects that match any of the following suffixes: `.png`, `.gif`, `.jpg`, or `.jpeg`. To use this example, replace the `user input placeholders` with your own information.

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.StorageLensGroup;
import software.amazon.awssdk.services.s3control.model.StorageLensGroupFilter;
import software.amazon.awssdk.services.s3control.model.UpdateStorageLensGroupRequest;

public class UpdateStorageLensGroup {
    public static void main(String[] args) {
        String storageLensGroupName = "Marketing-Department";
        String accountId = "111122223333";

        try {
            // Create updated filter.
            StorageLensGroupFilter suffixFilter = StorageLensGroupFilter.builder()
                    .matchAnySuffix(".png", ".gif", ".jpg", ".jpeg")
                    .build();

            StorageLensGroup storageLensGroup = StorageLensGroup.builder()
                    .name(storageLensGroupName)
                    .filter(suffixFilter)
                    .build();

            UpdateStorageLensGroupRequest updateStorageLensGroupRequest = UpdateStorageLensGroupRequest.builder()
                    .name(storageLensGroupName)
                    .storageLensGroup(storageLensGroup)
                    .accountId(accountId)
                    .build();

            S3ControlClient s3ControlClient = S3ControlClient.builder()
                    .region(Region.US_WEST_2)
                    .credentialsProvider(ProfileCredentialsProvider.create())
                    .build();
            s3ControlClient.updateStorageLensGroup(updateStorageLensGroupRequest);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

For example JSON configurations, see [Storage Lens groups configuration](storage-lens-groups.md#storage-lens-groups-configuration).

# Managing AWS resource tags with Storage Lens groups
<a name="storage-lens-groups-manage-tags"></a>

Each Amazon S3 Storage Lens group is counted as an AWS resource with its own Amazon Resource Name (ARN). Therefore, when you configure your Storage Lens group, you can optionally add AWS resource tags to the group. You can add up to 50 tags for each Storage Lens group. To create a Storage Lens group with tags, you must have the `s3:CreateStorageLensGroup` and `s3:TagResource` permissions.

You can use AWS resource tags to categorize resources according to department, line of business, or project. Doing so is useful when you have many resources of the same type. By applying tags, you can quickly identify a specific Storage Lens group based on the tags that you've assigned to it. You can also use tags to track and allocate costs.

In addition, when you add an AWS resource tag to your Storage Lens group, you activate [attribute-based access control (ABAC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html). ABAC is an authorization strategy that defines permissions based on attributes, in this case tags. You can also use conditions that specify resource tags in your IAM policies to [control access to AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-resources).

You can edit tag keys and values, and you can remove tags from a resource at any time. Also, be aware of the following limitations:
+ Tag keys and tag values are case sensitive.
+ If you add a tag that has the same key as an existing tag on that resource, the new value overwrites the old value.
+ If you delete a resource, any tags for the resource are also deleted. 
+ Don't include private or sensitive data in your AWS resource tags.
+ System tags (with tag keys that begin with `aws:`) aren't supported.
+ The length of each tag key can't exceed 128 characters. The length of each tag value can't exceed 256 characters.

The following examples demonstrate how to use AWS resource tags with Storage Lens groups.

**Topics**
+ [Adding an AWS resource tag to a Storage Lens group](storage-lens-groups-add-tags.md)
+ [Updating Storage Lens group tag values](storage-lens-groups-update-tags.md)
+ [Deleting an AWS resource tag from a Storage Lens group](storage-lens-groups-delete-tags.md)
+ [Listing Storage Lens group tags](storage-lens-groups-list-tags.md)

# Adding an AWS resource tag to a Storage Lens group
<a name="storage-lens-groups-add-tags"></a>

The following examples demonstrate how to add AWS resource tags to an Amazon S3 Storage Lens group. You can add resource tags by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), and AWS SDK for Java.

## Using the S3 console
<a name="storage-lens-groups-add-tags-console"></a>

**To add an AWS resource tag to a Storage Lens group**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens groups**.

1. Under **Storage Lens groups**, choose the Storage Lens group that you want to update.

1. Under **AWS resource tags**, choose **Add tags**.

1. On the **Add tags** page, add the new key-value pair.
**Note**  
Adding a new tag with the same key as an existing tag overwrites the previous tag value.

1. (Optional) To add more than one new tag, choose **Add tag** again to continue adding new entries. You can add up to 50 AWS resource tags to your Storage Lens group.

1. (Optional) If you want to remove a newly added entry, choose **Remove** next to the tag that you want to remove.

1. Choose **Save changes**.

## Using the AWS CLI
<a name="storage-lens-groups-add-tags-cli"></a>

The following example AWS CLI command adds two resource tags to an existing Storage Lens group named `marketing-department`. To use this example command, replace the `user input placeholders` with your own information.

```
aws s3control tag-resource --account-id 111122223333 \
--resource-arn arn:aws:s3:us-east-1:111122223333:storage-lens-group/marketing-department \
--region us-east-1 --tags Key=k1,Value=v1 Key=k2,Value=v2
```

## Using the AWS SDK for Java
<a name="storage-lens-groups-add-tags-sdk-java"></a>

The following AWS SDK for Java example adds two AWS resource tags to an existing Storage Lens group. To use this example, replace the `user input placeholders` with your own information.

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.Tag;
import software.amazon.awssdk.services.s3control.model.TagResourceRequest;

public class TagResource {
    public static void main(String[] args) {
        String resourceARN = "Resource_ARN";
        String accountId = "111122223333";

        try {
            Tag resourceTag1 = Tag.builder()
                .key("resource-tag-key-1")
                .value("resource-tag-value-1")
                .build();
            Tag resourceTag2 = Tag.builder()
                    .key("resource-tag-key-2")
                    .value("resource-tag-value-2")
                    .build();
            TagResourceRequest tagResourceRequest = TagResourceRequest.builder()
                    .resourceArn(resourceARN)
                    .tags(resourceTag1, resourceTag2)
                    .accountId(accountId)
                    .build();
            S3ControlClient s3ControlClient = S3ControlClient.builder()
                    .region(Region.US_WEST_2)
                    .credentialsProvider(ProfileCredentialsProvider.create())
                    .build();
            s3ControlClient.tagResource(tagResourceRequest);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

# Updating Storage Lens group tag values
<a name="storage-lens-groups-update-tags"></a>

The following examples demonstrate how to update Storage Lens group tag values by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), and AWS SDK for Java.

## Using the S3 console
<a name="storage-lens-groups-update-tags-console"></a>

**To update an AWS resource tag for a Storage Lens group**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens groups**.

1. Under **Storage Lens groups**, choose the Storage Lens group that you want to update.

1. Under **AWS resource tags**, select the tag that you want to update.

1. Add the new tag value, using the same key of the key-value pair that you want to update. Choose the checkmark icon to update the tag value.
**Note**  
Adding a new tag with the same key as an existing tag overwrites the previous tag value.

1. (Optional) If you want to add new tags, choose **Add tag** to add new entries. The **Add tags** page appears. 

   You can add up to 50 AWS resource tags for your Storage Lens group. When you're finished adding new tags, choose **Save changes**.

1. (Optional) If you want to remove a newly added entry, choose **Remove** next to the tag that you want to remove. When you're finished removing tags, choose **Save changes**. 

## Using the AWS CLI
<a name="storage-lens-groups-update-tags-cli"></a>

The following example AWS CLI command updates two tag values for the Storage Lens group named `marketing-department`. To use this example command, replace the `user input placeholders` with your own information.

```
aws s3control tag-resource --account-id 111122223333 \
--resource-arn arn:aws:s3:us-east-1:111122223333:storage-lens-group/marketing-department \
--region us-east-1 --tags Key=k1,Value=v3 Key=k2,Value=v4
```

## Using the AWS SDK for Java
<a name="storage-lens-groups-update-tags-sdk-java"></a>

The following AWS SDK for Java example updates two Storage Lens group tag values. To use this example, replace the `user input placeholders` with your own information.

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.Tag;
import software.amazon.awssdk.services.s3control.model.TagResourceRequest;

public class UpdateTagsForResource {
    public static void main(String[] args) {
        String resourceARN = "Resource_ARN";
        String accountId = "111122223333";

        try {
            Tag updatedResourceTag1 = Tag.builder()
                .key("resource-tag-key-1")
                .value("resource-tag-updated-value-1")
                .build();
            Tag updatedResourceTag2 = Tag.builder()
                    .key("resource-tag-key-2")
                    .value("resource-tag-updated-value-2")
                    .build();
            TagResourceRequest tagResourceRequest = TagResourceRequest.builder()
                    .resourceArn(resourceARN)
                    .tags(updatedResourceTag1, updatedResourceTag2)
                    .accountId(accountId)
                    .build();
            S3ControlClient s3ControlClient = S3ControlClient.builder()
                    .region(Region.US_WEST_2)
                    .credentialsProvider(ProfileCredentialsProvider.create())
                    .build();
            s3ControlClient.tagResource(tagResourceRequest);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

# Deleting an AWS resource tag from a Storage Lens group
<a name="storage-lens-groups-delete-tags"></a>

The following examples demonstrate how to delete an AWS resource tag from a Storage Lens group. You can delete tags by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), and AWS SDK for Java.

## Using the S3 console
<a name="storage-lens-groups-delete-tags-console"></a>

**To delete an AWS resource tag from a Storage Lens group**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens groups**.

1. Under **Storage Lens groups**, choose the Storage Lens group that you want to update.

1. Under **AWS resource tags**, select the key-value pair that you want to delete.

1. Choose **Delete**. The **Delete AWS resource tags** dialog box appears.
**Note**  
If tags are used to control access, proceeding with this action can affect related resources. After you permanently delete a tag, it can't be restored.

1. Choose **Delete** to permanently delete the key-value pair.

## Using the AWS CLI
<a name="storage-lens-groups-delete-tags-cli"></a>

The following AWS CLI command deletes two AWS resource tags from an existing Storage Lens group: To use this example command, replace the `user input placeholders` with your own information.

```
aws s3control untag-resource --account-id 111122223333 \
--resource-arn arn:aws:s3:us-east-1:111122223333:storage-lens-group/Marketing-Department \
--region us-east-1 --tag-keys k1 k2
```

## Using the AWS SDK for Java
<a name="storage-lens-groups-delete-tags-sdk-java"></a>

The following AWS SDK for Java example deletes two AWS resource tags from the Storage Lens group Amazon Resource Name (ARN) that you specify in account `111122223333`. To use this example, replace the `user input placeholders` with your own information.

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.UntagResourceRequest;

public class UntagResource {
    public static void main(String[] args) {
        String resourceARN = "Resource_ARN";
        String accountId = "111122223333";

        try {
            String tagKey1 = "resource-tag-key-1";
            String tagKey2 = "resource-tag-key-2";
            UntagResourceRequest untagResourceRequest = UntagResourceRequest.builder()
                    .resourceArn(resourceARN)
                    .tagKeys(tagKey1, tagKey2)
                    .accountId(accountId)
                    .build();
            S3ControlClient s3ControlClient = S3ControlClient.builder()
                    .region(Region.US_WEST_2)
                    .credentialsProvider(ProfileCredentialsProvider.create())
                    .build();
            s3ControlClient.untagResource(untagResourceRequest);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

# Listing Storage Lens group tags
<a name="storage-lens-groups-list-tags"></a>

The following examples demonstrate how to list the AWS resource tags associated with a Storage Lens group. You can list tags by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), and AWS SDK for Java.

## Using the S3 console
<a name="storage-lens-groups-list-tags-console"></a>

**To review the list of tags and tag values for a Storage Lens group**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens groups**.

1. Under **Storage Lens groups**, choose the Storage Lens group that you're interested in.

1. Scroll down to the **AWS resource tags** section. All of the user-defined AWS resource tags that are added to your Storage Lens group are listed along with their tag values.

## Using the AWS CLI
<a name="storage-lens-group-list-tags-cli"></a>

The following AWS CLI example command lists all the Storage Lens group tag values for the Storage Lens group named `marketing-department`. To use this example command, replace the `user input placeholders` with your own information.

```
aws s3control list-tags-for-resource --account-id 111122223333 \
--resource-arn arn:aws:s3:us-east-1:111122223333:storage-lens-group/marketing-department \
--region us-east-1
```

## Using the AWS SDK for Java
<a name="storage-lens-group-list-tags-sdk-java"></a>

The following AWS SDK for Java example lists the Storage Lens group tag values for the Storage Lens group Amazon Resource Name (ARN) that you specify. To use this example, replace the `user input placeholders` with your own information.

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.ListTagsForResourceRequest;
import software.amazon.awssdk.services.s3control.model.ListTagsForResourceResponse;

public class ListTagsForResource {
    public static void main(String[] args) {
        String resourceARN = "Resource_ARN";
        String accountId = "111122223333";

        try {
            ListTagsForResourceRequest listTagsForResourceRequest = ListTagsForResourceRequest.builder()
                    .resourceArn(resourceARN)
                    .accountId(accountId)
                    .build();
            S3ControlClient s3ControlClient = S3ControlClient.builder()
                    .region(Region.US_WEST_2)
                    .credentialsProvider(ProfileCredentialsProvider.create())
                    .build();
            ListTagsForResourceResponse response = s3ControlClient.listTagsForResource(listTagsForResourceRequest);
            System.out.println(response);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

# Listing all Storage Lens groups
<a name="storage-lens-groups-list"></a>

The following examples demonstrate how to list all Amazon S3 Storage Lens groups in an AWS account and home Region. These examples show how list all Storage Lens groups by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), and AWS SDK for Java.

## Using the S3 console
<a name="storage-lens-group-list-console"></a>

**To list all Storage Lens groups in an account and home Region**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens groups**.

1. Under **Storage Lens groups**, the list of Storage Lens groups in your account is displayed.

## Using the AWS CLI
<a name="storage-lens-groups-list-cli"></a>

The following AWS CLI example lists all of the Storage Lens groups for your account. To use this example command, replace the `user input placeholders` with your own information.

```
aws s3control list-storage-lens-groups --account-id 111122223333 \
--region us-east-1
```

## Using the AWS SDK for Java
<a name="storage-lens-groups-list-sdk-java"></a>

The following AWS SDK for Java example lists the Storage Lens groups for account `111122223333`. To use this example, replace the `user input placeholders` with your own information.

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.ListStorageLensGroupsRequest;
import software.amazon.awssdk.services.s3control.model.ListStorageLensGroupsResponse;

public class ListStorageLensGroups {
    public static void main(String[] args) {
        String accountId = "111122223333";

        try {
            ListStorageLensGroupsRequest listStorageLensGroupsRequest = ListStorageLensGroupsRequest.builder()
                    .accountId(accountId)
                    .build();
            S3ControlClient s3ControlClient = S3ControlClient.builder()
                    .region(Region.US_WEST_2)
                    .credentialsProvider(ProfileCredentialsProvider.create())
                    .build();
            ListStorageLensGroupsResponse response = s3ControlClient.listStorageLensGroups(listStorageLensGroupsRequest);
            System.out.println(response);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

# Viewing Storage Lens group details
<a name="storage-lens-groups-view"></a>

The following examples demonstrate how to view Amazon S3 Storage Lens group configuration details. You can view these details by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), and AWS SDK for Java.

## Using the S3 console
<a name="view-storage-lens-group-console"></a>



**To view Storage Lens group configuration details**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens groups**.

1. Under **Storage Lens groups**, choose the option button next to the Storage Lens group that you're interested in.

1. Choose **View details**. You can now review the details of your Storage Lens group.

## Using the AWS CLI
<a name="view-storage-lens-group-cli"></a>

The following AWS CLI example returns the configuration details for a Storage Lens group. To use this example command, replace the `user input placeholders` with your own information.

```
aws s3control get-storage-lens-group --account-id 111122223333 \ 
--region us-east-1 --name marketing-department
```

## Using the AWS SDK for Java
<a name="view-storage-lens-group-sdk-java"></a>

The following AWS SDK for Java example returns the configuration details for the Storage Lens group named `Marketing-Department` in account `111122223333`. To use this example, replace the `user input placeholders` with your own information.

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.GetStorageLensGroupRequest;
import software.amazon.awssdk.services.s3control.model.GetStorageLensGroupResponse;

public class GetStorageLensGroup {
    public static void main(String[] args) {
        String storageLensGroupName = "Marketing-Department";
        String accountId = "111122223333";

        try {
            GetStorageLensGroupRequest getRequest = GetStorageLensGroupRequest.builder()
                    .name(storageLensGroupName)
                    .accountId(accountId).build();
            S3ControlClient s3ControlClient = S3ControlClient.builder()
                    .region(Region.US_WEST_2)
                    .credentialsProvider(ProfileCredentialsProvider.create())
                    .build();
            GetStorageLensGroupResponse response = s3ControlClient.getStorageLensGroup(getRequest);
            System.out.println(response);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

# Deleting a Storage Lens group
<a name="storage-lens-groups-delete"></a>

The following examples demonstrate how to delete an Amazon S3 Storage Lens group by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), and AWS SDK for Java.

## Using the S3 console
<a name="delete-storage-lens-group-console"></a>

**To delete a Storage Lens group**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Storage Lens groups**.

1. Under **Storage Lens groups**, choose the option button next to the Storage Lens group that you want to delete.

1. Choose **Delete**. A **Delete Storage Lens group** dialog box displays.

1. Choose **Delete** again to permanently delete your Storage Lens group.
**Note**  
After you delete a Storage Lens group, it can't be restored.

## Using the AWS CLI
<a name="delete-storage-lens-group-cli"></a>

The following AWS CLI example deletes the Storage Lens group named `marketing-department`. To use this example command, replace the `user input placeholders` with your own information.

```
aws s3control delete-storage-lens-group --account-id 111122223333 \ 
--region us-east-1 --name marketing-department
```

## Using the AWS SDK for Java
<a name="delete-storage-lens-group-sdk-java"></a>

The following AWS SDK for Java example deletes the Storage Lens group named `Marketing-Department` in account `111122223333`. To use this example, replace the `user input placeholders` with your own information.

```
package aws.example.s3control;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.DeleteStorageLensGroupRequest;

public class DeleteStorageLensGroup {
    public static void main(String[] args) {
        String storageLensGroupName = "Marketing-Department";
        String accountId = "111122223333";

        try {
            DeleteStorageLensGroupRequest deleteStorageLensGroupRequest = DeleteStorageLensGroupRequest.builder()
                    .name(storageLensGroupName)
                    .accountId(accountId).build();
            S3ControlClient s3ControlClient = S3ControlClient.builder()
                    .region(Region.US_WEST_2)
                    .credentialsProvider(ProfileCredentialsProvider.create())
                    .build();
            s3ControlClient.deleteStorageLensGroup(deleteStorageLensGroupRequest);
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it and returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

# Cataloging and analyzing your data with S3 Inventory
<a name="storage-inventory"></a>

You can use Amazon S3 Inventory to help manage your storage. For example, you can use it to audit and report on the replication and encryption status of your objects for business, compliance, and regulatory needs. You can also simplify and speed up business workflows and big data jobs by using Amazon S3 Inventory, which provides a scheduled alternative to the Amazon S3 synchronous `List` API operations. Amazon S3 Inventory does not use the `List` API operations to audit your objects and does not affect the request rate of your bucket.

Amazon S3 Inventory provides comma-separated values (CSV), [Apache optimized row columnar (ORC)](https://orc.apache.org/) or [https://parquet.apache.org/](https://parquet.apache.org/) output files that list your objects and their corresponding metadata on a daily or weekly basis for an S3 bucket or objects with a shared prefix (that is, objects that have names that begin with a common string). If you set up a weekly inventory, a report is generated every Sunday (UTC time zone) after the initial report. For information about Amazon S3 Inventory pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

You can configure multiple inventory lists for a bucket. When you're configuring an inventory list, you can specify the following: 
+ What object metadata to include in the inventory
+ Whether to list all object versions or only current versions
+ Where to store the inventory list file output
+ Whether to generate the inventory on a daily or weekly basis
+ Whether to encrypt the inventory list file

You can query Amazon S3 Inventory with standard SQL queries by using [Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/what-is.html), [Amazon Redshift Spectrum](https://docs.aws.amazon.com/redshift/latest/dg/c-getting-started-using-spectrum.html), and other tools, such as [https://prestodb.io/](https://prestodb.io/), [https://hive.apache.org/](https://hive.apache.org/), and [https://databricks.com/spark/about/](https://databricks.com/spark/about/). For more information about using Athena to query your inventory files, see [Querying Amazon S3 Inventory with Amazon Athena](storage-inventory-athena-query.md). 

**Note**  
It might take up to 48 hours for Amazon S3 to deliver the first inventory report.

**Note**  
After deleting an inventory configuration, Amazon S3 might still deliver one additional inventory report during a brief transition period while the system processes the deletion.

## Source and destination buckets
<a name="storage-inventory-buckets"></a>

The bucket that the inventory lists objects for is called the *source bucket*. The bucket where the inventory list file is stored is called the *destination bucket*. 

**Source bucket**

The inventory lists the objects that are stored in the source bucket. You can get an inventory list for an entire bucket, or you can filter the list by object key name prefix.

The source bucket:
+ Contains the objects that are listed in the inventory
+ Contains the configuration for the inventory

**Destination bucket**

Amazon S3 Inventory list files are written to the destination bucket. To group all the inventory list files in a common location in the destination bucket, you can specify a destination prefix in the inventory configuration.

The destination bucket:
+ Contains the inventory file lists. 
+ Contains the manifest files that list all the inventory list files that are stored in the destination bucket. For more information, see [Inventory manifest](storage-inventory-location.md#storage-inventory-location-manifest).
+ Must have a bucket policy to give Amazon S3 permission to verify ownership of the bucket and permission to write files to the bucket. 
+ Must be in the same AWS Region as the source bucket.
+ Can be the same as the source bucket.
+ Can be owned by a different AWS account than the account that owns the source bucket.

## Amazon S3 Inventory list
<a name="storage-inventory-contents"></a>

An inventory list file contains a list of the objects in the source bucket and metadata for each object. An inventory list file is stored in the destination bucket with one of the following formats:
+ As a CSV file compressed with GZIP
+ As an Apache optimized row columnar (ORC) file compressed with ZLIB
+ As an Apache Parquet file compressed with Snappy

**Note**  
Objects in Amazon S3 Inventory reports aren't guaranteed to be sorted in any order.

An inventory list file contains a list of the objects in the source bucket and metadata for each listed object. These default fields are always included:
+ **Bucket name** – The name of the bucket that the inventory is for.
+ **ETag** – The entity tag (ETag) is a hash of the object. The ETag reflects changes only to the contents of an object, not to its metadata. The ETag can be an MD5 digest of the object data. Whether it is depends on how the object was created and how it is encrypted. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_Object.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Object.html) in the *Amazon Simple Storage Service API Reference*.
+ **Key name** – The object key name (or key) that uniquely identifies the object in the bucket. When you're using the CSV file format, the key name is URL-encoded and must be decoded before you can use it.
+ **Last modified date** – The object creation date or the last modified date, whichever is the latest.
+ **Size** – The object size in bytes, not including the size of incomplete multipart uploads, object metadata, and delete markers.
+ **Storage class** – The storage class that's used for storing the object. Set to `STANDARD`, `REDUCED_REDUNDANCY`, `STANDARD_IA`, `ONEZONE_IA`, `INTELLIGENT_TIERING`, `GLACIER`, `DEEP_ARCHIVE`, `OUTPOSTS`, `GLACIER_IR`, or `SNOW`. For more information, see [Understanding and managing Amazon S3 storage classes](storage-class-intro.md).
**Note**  
S3 Inventory does not support S3 Express One Zone.

You can choose to include the following additional metadata fields in the report:
+ **Checksum algorithm** – Indicates the algorithm that's used to create the checksum for the object. For more information, see [Using supported checksum algorithms](checking-object-integrity-upload.md#using-additional-checksums).
+ **Encryption status** – The server-side encryption status, depending on what kind of encryption key is used— server-side encryption with Amazon S3 managed keys (SSE-S3), server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), dual-layer server-side encryption with AWS KMS keys (DSSE-KMS), or server-side encryption with customer-provided keys (SSE-C). Set to `SSE-S3`, `SSE-KMS`, `DSSE-KMS`, `SSE-C`, or `NOT-SSE`. A status of `NOT-SSE` means that the object is not encrypted with server-side encryption. For more information, see [Protecting data with encryption](UsingEncryption.md).
+ **S3 Intelligent-Tiering access tier** – Access tier (frequent or infrequent) of the object if it is stored in the S3 Intelligent-Tiering storage class. Set to `FREQUENT`, `INFREQUENT`, `ARCHIVE_INSTANT_ACCESS`, `ARCHIVE`, or `DEEP_ARCHIVE`. For more information, see [Storage class for automatically optimizing data with changing or unknown access patterns](storage-class-intro.md#sc-dynamic-data-access).
+ **S3 Object Lock retain until date** – The date until which the locked object cannot be deleted. For more information, see [Locking objects with Object Lock](object-lock.md).
+ **S3 Object Lock retention mode** – Set to `Governance` or `Compliance` for objects that are locked. For more information, see [Locking objects with Object Lock](object-lock.md).
+ **S3 Object Lock legal hold status ** – Set to `On` if a legal hold has been applied to an object. Otherwise, it is set to `Off`. For more information, see [Locking objects with Object Lock](object-lock.md).
+ **Version ID** – The object version ID. When you enable versioning on a bucket, Amazon S3 assigns a version number to objects that are added to the bucket. For more information, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md). (This field is not included if the list is configured only for the current version of the objects.)
+ **IsLatest** – Set to `True` if the object is the current version of the object. (This field is not included if the list is configured only for the current version of the objects.)
+ **Delete marker** – Set to `True` if the object is a delete marker. For more information, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md). (This field is automatically added to your report if you've configured the report to include all versions of your objects).
+ **Multipart upload flag** – Set to `True` if the object was uploaded as a multipart upload. For more information, see [Uploading and copying objects using multipart upload in Amazon S3](mpuoverview.md).
+ **Object owner** – The canonical user ID of the owner of the object. For more information, see [Find the canonical user ID for your AWS account ](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-identifiers.html#FindCanonicalId) in the *AWS Account Management Reference Guide*.
+ **Replication status** – Set to `PENDING`, `COMPLETED`, `FAILED`, or `REPLICA`. For more information, see [Getting replication status information](replication-status.md).
+ **S3 Bucket Key status** – Set to `ENABLED` or `DISABLED`. Indicates whether the object uses an S3 Bucket Key for SSE-KMS. For more information, see [Using Amazon S3 Bucket Keys](bucket-key.md).
+ **Object access control list** – An access control list (ACL) for each object that defines which AWS accounts or groups are granted access to this object and the type of access that is granted. The Object ACL field is defined in JSON format. An S3 Inventory report includes ACLs that are associated with objects in your source bucket, even when ACLs are disabled for the bucket. For more information, see [Working with the Object ACL field](objectacl.md) and [Access control list (ACL) overview](acl-overview.md).
**Note**  
The Object ACL field is defined in JSON format. An inventory report displays the value for the Object ACL field as a base64-encoded string.  
For example, suppose that you have the following Object ACL field in JSON format:  

  ```
  {
          "version": "2022-11-10",
          "status": "AVAILABLE",
          "grants": [{
              "canonicalId": "example-canonical-user-ID",
              "type": "CanonicalUser",
              "permission": "READ"
          }]
  }
  ```
The Object ACL field is encoded and shown as the following base64-encoded string:  

  ```
  eyJ2ZXJzaW9uIjoiMjAyMi0xMS0xMCIsInN0YXR1cyI6IkFWQUlMQUJMRSIsImdyYW50cyI6W3siY2Fub25pY2FsSWQiOiJleGFtcGxlLWNhbm9uaWNhbC11c2VyLUlEIiwidHlwZSI6IkNhbm9uaWNhbFVzZXIiLCJwZXJtaXNzaW9uIjoiUkVBRCJ9XX0=
  ```
To get the decoded value in JSON for the Object ACL field, you can query this field in Amazon Athena. For query examples, see [Querying Amazon S3 Inventory with Amazon Athena](storage-inventory-athena-query.md).
+ **Lifecycle Expiration Date** – Set to the lifecycle expiration timestamp of the object. This field will only be populated, if the object is to be expired by an applicable lifecycle rule. In other cases, the field will be empty. Objects with `FAILED` replication status will not have an expiration date populated, as S3 Lifecycle prevents expiration and transition actions on these objects until replication has succeeded. For more information, see [Expiring objects](lifecycle-expire-general-considerations.md).

**Note**  
When an object reaches the end of its lifetime based on its lifecycle configuration, Amazon S3 queues the object for removal and removes it asynchronously. Therefore, there might be a delay between the expiration date and the date when Amazon S3 removes an object. The inventory report includes the objects that have expired but haven't been removed yet. For more information about expiration actions in S3 Lifecycle, see [Expiring objects](lifecycle-expire-general-considerations.md).

The following is an example inventory report with additional metadata fields consisting of four records.

```
amzn-s3-demo-bucket1    example-object-1    EXAMPLEDC8l.XJCENlF7LePaNIIvs001    TRUE        1500    2024-08-15T15:28:26.0004    EXAMPLE21e1518b92f3d92773570f600    STANDARD    FALSE    COMPLETED    SSE-KMS    2025-01-25T15:28:26.000Z    COMPLIANCE    Off        ENABLED        eyJ2ZXJzaW9uIjoiMjAyMi0xMS0xMCIsInN0YXR1cyI6IkFWQUlMQUJMRSIsImdyYW50cyI6W3sicGVybWlzc2lvbiI6IkZVTExfQ09OVFJPTCIsInR5cGUiOiJDYW5vbmljYWxVc2VyIiwiY2Fub25pY2FsSWQiOiJFWEFNUExFNzY2ZThmNmIxMTVkOTNkNDFkZjJlYWM0MjBhYTRhNDY1ZDE3N2MxMzk4YmM2YTA4OGM3NmI3MDAwIn1dfQ==    EXAMPLE766e8f6b115d93d41df2eac420aa4a465d177c1398bc6a088c76b7000
amzn-s3-demo-bucket1    example-object-2    EXAMPLEDC8l.XJCENlF7LePaNIIvs002    TRUE        200    2024-08-21T15:28:26.000Z    EXAMPLE21e1518b92f3d92773570f601    INTELLIGENT_TIERING    FALSE    COMPLETED    SSE-KMS    2025-01-25T15:28:26.000Z    COMPLIANCE    Off    INFREQUENT    ENABLED    SHA-256    eyJ2ZXJzaW9uIjoiMjAyMi0xMS0xMCIsInN0YXR1cyI6IkFWQUlMQUJMRSIsImdyYW50cyI6W3sicGVybWlzc2lvbiI6IkZVTExfQ09OVFJPTCIsInR5cGUiOiJDYW5vbmljYWxVc2VyIiwiY2Fub25pY2FsSWQiOiJFWEFNUExFNzY2ZThmNmIxMTVkOTNkNDFkZjJlYWM0MjBhYTRhNDY1ZDE3N2MxMzk4YmM2YTA4OGM3NmI3MDAwIn1dfQ==    EXAMPLE766e8f6b115d93d41df2eac420aa4a465d177c1398bc6a088c76b7001
amzn-s3-demo-bucket1    example-object-3    EXAMPLEDC8l.XJCENlF7LePaNIIvs003    TRUE        12500    2023-01-15T15:28:30.000Z    EXAMPLE21e1518b92f3d92773570f602    STANDARD    FALSE    REPLICA    SSE-KMS    2025-01-25T15:28:26.000Z    GOVERNANCE    On        ENABLED        eyJ2ZXJzaW9uIjoiMjAyMi0xMS0xMCIsInN0YXR1cyI6IkFWQUlMQUJMRSIsImdyYW50cyI6W3sicGVybWlzc2lvbiI6IkZVTExfQ09OVFJPTCIsInR5cGUiOiJDYW5vbmljYWxVc2VyIiwiY2Fub25pY2FsSWQiOiJFWEFNUExFNzY2ZThmNmIxMTVkOTNkNDFkZjJlYWM0MjBhYTRhNDY1ZDE3N2MxMzk4YmM2YTA4OGM3NmI3MDAwIn1dfQ==    EXAMPLE766e8f6b115d93d41df2eac420aa4a465d177c1398bc6a088c76b7002
amzn-s3-demo-bucket1    example-object-4    EXAMPLEDC8l.XJCENlF7LePaNIIvs004    TRUE        100    2021-02-15T15:28:27.000Z    EXAMPLE21e1518b92f3d92773570f603    STANDARD    FALSE    COMPLETED    SSE-KMS    2025-01-25T15:28:26.000Z    COMPLIANCE    Off        ENABLED        eyJ2ZXJzaW9uIjoiMjAyMi0xMS0xMCIsInN0YXR1cyI6IkFWQUlMQUJMRSIsImdyYW50cyI6W3sicGVybWlzc2lvbiI6IkZVTExfQ09OVFJPTCIsInR5cGUiOiJDYW5vbmljYWxVc2VyIiwiY2Fub25pY2FsSWQiOiJFWEFNUExFNzY2ZThmNmIxMTVkOTNkNDFkZjJlYWM0MjBhYTRhNDY1ZDE3N2MxMzk4YmM2YTA4OGM3NmI3MDAwIn1dfQ==    EXAMPLE766e8f6b115d93d41df2eac420aa4a465d177c1398bc6a088c76b7003
```

We recommend that you create a lifecycle policy that deletes old inventory lists. For more information, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md).

The `s3:PutInventoryConfiguration` permission allows a user to both select all the metadata fields that are listed earlier for each object when configuring an inventory list and to specify the destination bucket to store the inventory. A user with read access to objects in the destination bucket can access all object metadata fields that are available in the inventory list. To restrict access to an inventory report, see [Grant permissions for S3 Inventory and S3 analytics](example-bucket-policies.md#example-bucket-policies-s3-inventory-1).

### Inventory consistency
<a name="storage-inventory-contents-consistency"></a>

All of your objects might not appear in each inventory list. The inventory list provides eventual consistency for `PUT` requests (of both new objects and overwrites) and for `DELETE` requests. Each inventory list for a bucket is a snapshot of bucket items. These lists are eventually consistent (that is, a list might not include recently added or deleted objects). 

To validate the state of an object before you take action on the object, we recommend that you perform a `HeadObject` REST API request to retrieve metadata for the object, or to check the object's properties in the Amazon S3 console. You can also check object metadata with the AWS CLI or the AWS SDKS. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html) in the *Amazon Simple Storage Service API Reference*.

For more information about working with Amazon S3 Inventory, see the following topics.

**Topics**
+ [Source and destination buckets](#storage-inventory-buckets)
+ [Amazon S3 Inventory list](#storage-inventory-contents)
+ [Configuring Amazon S3 Inventory](configure-inventory.md)
+ [Locating your inventory list](storage-inventory-location.md)
+ [Setting up Amazon S3 Event Notifications for inventory completion](storage-inventory-notification.md)
+ [Querying Amazon S3 Inventory with Amazon Athena](storage-inventory-athena-query.md)
+ [Converting empty version ID strings in Amazon S3 Inventory reports to null strings](inventory-configure-bops.md)
+ [Working with the Object ACL field](objectacl.md)

# Configuring Amazon S3 Inventory
<a name="configure-inventory"></a>

Amazon S3 Inventory provides a flat file list of your objects and metadata, on a schedule that you define. You can use S3 Inventory as a scheduled alternative to the Amazon S3 synchronous `List` API operation. S3 Inventory provides comma-separated values (CSV), [Apache optimized row columnar (ORC)](https://orc.apache.org/), or [https://parquet.apache.org/](https://parquet.apache.org/) output files that list your objects and their corresponding metadata. 

You can configure S3 Inventory to create inventory lists on a daily or weekly basis for an S3 bucket or for objects that share a prefix (objects that have names that begin with the same string). For more information, see [Cataloging and analyzing your data with S3 Inventory](storage-inventory.md).

This section describes how to configure an inventory, including details about the inventory source and destination buckets.

**Topics**
+ [Overview](#storage-inventory-setting-up)
+ [Creating a destination bucket policy](#configure-inventory-destination-bucket-policy)
+ [Granting Amazon S3 permission to use your customer managed key for encryption](#configure-inventory-kms-key-policy)
+ [Configuring inventory by using the S3 console](#configure-inventory-console)
+ [Using the REST API to work with S3 Inventory](#rest-api-inventory)

## Overview
<a name="storage-inventory-setting-up"></a>

Amazon S3 Inventory helps you manage your storage by creating lists of the objects in an S3 bucket on a defined schedule. You can configure multiple inventory lists for a bucket. The inventory lists are published to CSV, ORC, or Parquet files in a destination bucket. 

The easiest way to set up an inventory is by using the Amazon S3 console, but you can also use the Amazon S3 REST API, AWS Command Line Interface (AWS CLI), or AWS SDKs. The console performs the first step of the following procedure for you: adding a bucket policy to the destination bucket.

**To set up Amazon S3 Inventory for an S3 bucket**

1. **Add a bucket policy for the destination bucket.**

   You must create a bucket policy on the destination bucket that grants permissions to Amazon S3 to write objects to the bucket in the defined location. For an example policy, see [Grant permissions for S3 Inventory and S3 analytics](example-bucket-policies.md#example-bucket-policies-s3-inventory-1). 

1. **Configure an inventory to list the objects in a source bucket and publish the list to a destination bucket.**

   When you configure an inventory list for a source bucket, you specify the destination bucket where you want the list to be stored, and whether you want to generate the list daily or weekly. You can also configure whether to list all object versions or only current versions and what object metadata to include. 

   Some object metadata fields in S3 Inventory report configurations are optional, meaning that they're available by default but they can be restricted when you grant a user the `s3:PutInventoryConfiguration` permission. You can control whether users can include these optional metadata fields in their reports by using the `s3:InventoryAccessibleOptionalFields` condition key.

   For more information about the optional metadata fields available in S3 Inventory, see [https://docs.aws.amazon.com//AmazonS3/latest/API/API_PutBucketInventoryConfiguration.html#API_PutBucketInventoryConfiguration_RequestBody](https://docs.aws.amazon.com//AmazonS3/latest/API/API_PutBucketInventoryConfiguration.html#API_PutBucketInventoryConfiguration_RequestBody) in the *Amazon Simple Storage Service API Reference*. For more information about restricting access to certain optional metadata fields in an inventory configuration, see [Control S3 Inventory report configuration creation](example-bucket-policies.md#example-bucket-policies-s3-inventory-2).

   You can specify that the inventory list file be encrypted by using server-side encryption with an Amazon S3 managed key (SSE-S3) or an AWS Key Management Service (AWS KMS) customer managed key (SSE-KMS). 
**Note**  
The AWS managed key (`aws/s3`) is not supported for SSE-KMS encryption with S3 Inventory. 

   For more information about SSE-S3 and SSE-KMS, see [Protecting data with server-side encryption](serv-side-encryption.md). If you plan to use SSE-KMS encryption, see Step 3.
   + For information about how to use the console to configure an inventory list, see [Configuring inventory by using the S3 console](#configure-inventory-console).
   + To use the Amazon S3 API to configure an inventory list, use the [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTInventoryConfig.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTInventoryConfig.html) REST API operation or the equivalent from the AWS CLI or AWS SDKs. 

1. **To encrypt the inventory list file with SSE-KMS, grant Amazon S3 permission to use the AWS KMS key.**

   You can configure encryption for the inventory list file by using the Amazon S3 console, Amazon S3 REST API, AWS CLI, or AWS SDKs. Whichever way you choose, you must grant Amazon S3 permission to use the customer managed key to encrypt the inventory file. You [grant Amazon S3 permission by modifying the key policy for the customer managed key](https://docs.aws.amazon.com/AmazonS3/latest/userguide/configure-inventory.html#configure-inventory-kms-key-policy) that you want to use to encrypt the inventory file. Make sure that you've provided a KMS key ARN in the S3 Inventory configuration or the destination bucket’s encryption settings. If no KMS key ARN has been specified and the default encryption settings are being used, you won’t be able to access your S3 Inventory report.

   The destination bucket that stores the inventory list file can be owned by a different AWS account than the account that owns the source bucket. If you use SSE-KMS encryption for the cross-account operations of Amazon S3 Inventory, we recommend that you use a fully qualified KMS key ARN when you configure S3 inventory. For more information, see [Using SSE-KMS encryption for cross-account operations](bucket-encryption.md#bucket-encryption-update-bucket-policy) and [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ServerSideEncryptionByDefault.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ServerSideEncryptionByDefault.html) in the *Amazon Simple Storage Service API Reference*.
**Note**  
If you can’t access your S3 Inventory report, use the [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketEncryption.html) API, and check whether the destination bucket has the default SSE-KMS encryption enabled. If no KMS key ARN has been specified and the default encryption settings are being used, you won’t be able to access your S3 Inventory report. To access S3 Inventory reports again, either provide a KMS key ARN in the S3 Inventory configuration or in the destination bucket’s encryption settings.

**Directory buckets**  
S3 Inventory is supported for directory buckets. When configuring S3 Inventory for a directory bucket, note the following differences:  
**Permissions** – For directory buckets, you must use the `s3express:PutInventoryConfiguration` and `s3express:GetInventoryConfiguration` permissions in an IAM identity-based policy instead of a bucket policy. These permissions use the `s3express:` namespace rather than the `s3:` namespace used for general purpose buckets. For more information, see [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md).
**Supported optional fields** – The following optional fields are supported for directory buckets: `Size`, `LastModifiedDate`, `StorageClass`, `ETag`, `IsMultipartUploaded`, `EncryptionStatus`, `BucketKeyStatus`, `ChecksumAlgorithm`, and `LifecycleExpirationDate`.
**Condition key** – For directory buckets, use the `s3express:InventoryAccessibleOptionalFields` condition key to control access to optional metadata fields in inventory reports.

## Creating a destination bucket policy
<a name="configure-inventory-destination-bucket-policy"></a>

If you create your inventory configuration through the Amazon S3 console, Amazon S3 automatically creates a bucket policy on the destination bucket that grants Amazon S3 write permission to the bucket. However, if you create your inventory configuration through the AWS CLI, AWS SDKs, or the Amazon S3 REST API, you must manually add a bucket policy on the destination bucket. The S3 Inventory destination bucket policy allows Amazon S3 to write data for the inventory reports to the bucket. 

The following is the example bucket policy. 

------
#### [ JSON ]

****  

```
{  
      "Version":"2012-10-17",		 	 	 
      "Statement": [
        {
            "Sid": "InventoryExamplePolicy",
            "Effect": "Allow",
            "Principal": {
                "Service": "s3.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::DOC-EXAMPLE-DESTINATION-BUCKET/*"
            ],
            "Condition": {
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:s3:::DOC-EXAMPLE-SOURCE-BUCKET"
                },
                "StringEquals": {
                    "aws:SourceAccount": "source-123456789012",
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
```

------

For more information, see [Grant permissions for S3 Inventory and S3 analytics](example-bucket-policies.md#example-bucket-policies-s3-inventory-1).

**Directory buckets**  
For directory buckets, you must manually add a destination bucket policy. The destination bucket policy uses a different service principal and ARN format than general purpose buckets. Specify `s3express.amazonaws.com` as the service principal, and use the directory bucket ARN format for the source bucket. The following example shows a destination bucket policy for directory buckets.  

```
{
    "Version": "2012-10-17", 		 	 	 
    "Statement": [
        {
            "Sid": "InventoryExamplePolicy",
            "Effect": "Allow",
            "Principal": {
                "Service": "s3express.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::DOC-EXAMPLE-DESTINATION-BUCKET/*"
            ],
            "Condition": {
                "ArnLike": {
                    "aws:SourceARN": "arn:aws:s3express:region:source-account-id:bucket/DOC-EXAMPLE-SOURCE-BUCKET--zone-id--x-s3"
                },
                "StringEquals": {
                    "aws:SourceAccount": "source-account-id",
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
```

If an error occurs when you try to create the bucket policy, you are given instructions on how to fix it. For example, if you choose a destination bucket in another AWS account and don't have permissions to read and write to the bucket policy, you see an error message.

In this case, the destination bucket owner must add the bucket policy to the destination bucket. If the policy is not added to the destination bucket, you won't get an inventory report because Amazon S3 doesn't have permission to write to the destination bucket. If the source bucket is owned by a different account than that of the current user, the correct account ID of the source bucket owner must be substituted in the policy.

**Note**  
Ensure that there are no Deny statements added to the destination bucket policy that would prevent the delivery of inventory reports into this bucket. For more information, see [Why can't I generate an Amazon S3 Inventory Report? ](https://repost.aws/knowledge-center/s3-inventory-report).

## Granting Amazon S3 permission to use your customer managed key for encryption
<a name="configure-inventory-kms-key-policy"></a>

To grant Amazon S3 permission to use your AWS Key Management Service (AWS KMS) customer managed key for server-side encryption, you must use a key policy. To update your key policy so that you can use your customer managed key, use the following procedure.

**To grant Amazon S3 permissions to encrypt by using your customer managed key**

1. Using the AWS account that owns the customer managed key, sign into the AWS Management Console.

1. Open the AWS KMS console at [https://console.aws.amazon.com/kms](https://console.aws.amazon.com/kms).

1. To change the AWS Region, use the Region selector in the upper-right corner of the page.

1. In the left navigation pane, choose **Customer managed keys**.

1. Under **Customer managed keys**, choose the customer managed key that you want to use to encrypt your inventory files.

1. In the **Key policy** section, choose **Switch to policy view**.

1. To update the key policy, choose **Edit**.

1. On the **Edit key policy** page, add the following lines to the existing key policy. For `source-account-id` and `amzn-s3-demo-source-bucket`, supply the appropriate values for your use case.

   ```
   {
       "Sid": "Allow Amazon S3 use of the customer managed key",
       "Effect": "Allow",
       "Principal": {
           "Service": "s3.amazonaws.com"
       },
       "Action": [
           "kms:GenerateDataKey"
       ],
       "Resource": "*",
       "Condition":{
         "StringEquals":{
            "aws:SourceAccount":"source-account-id"
        },
         "ArnLike":{
           "aws:SourceARN": "arn:aws:s3:::amzn-s3-demo-source-bucket"
        }
      }
   }
   ```

1. Choose **Save changes**.

For more information about creating customer managed keys and using key policies, see the following links in the *AWS Key Management Service Developer Guide*:
+ [Managing keys](https://docs.aws.amazon.com/kms/latest/developerguide/getting-started.html)
+ [Key policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html)

**Directory buckets**  
For directory buckets, the KMS key policy uses a different service principal and source ARN format than general purpose buckets. Specify `s3express.amazonaws.com` as the service principal, and use the directory bucket ARN format for the source ARN. The following example shows the key policy statement for directory buckets.  

```
{
    "Sid": "Allow S3 Express use of the KMS key",
    "Effect": "Allow",
    "Principal": {
        "Service": "s3express.amazonaws.com"
    },
    "Action": [
        "kms:GenerateDataKey"
    ],
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "aws:SourceAccount": "source-account-id"
        },
        "ArnLike": {
            "aws:SourceARN": "arn:aws:s3express:region:source-account-id:bucket/DOC-EXAMPLE-SOURCE-BUCKET--zone-id--x-s3"
        }
    }
}
```

**Note**  
Ensure that there are no Deny statements added to the destination bucket policy that would prevent the delivery of inventory reports into this bucket. For more information, see [Why can't I generate an Amazon S3 Inventory Report? ](https://repost.aws/knowledge-center/s3-inventory-report).

## Configuring inventory by using the S3 console
<a name="configure-inventory-console"></a>

Use these instructions to configure inventory by using the S3 console.
**Note**  
It might take up to 48 hours for Amazon S3 to deliver the first inventory report.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.
**Note**  
Configuring S3 Inventory for directory buckets is not supported in the Amazon Simple Storage Service console. To configure S3 Inventory for directory buckets, use the Amazon S3 REST API, AWS Command Line Interface (AWS CLI), or AWS SDKs.

1. In the buckets list, choose the name of the bucket that you want to configure Amazon S3 Inventory for.

1. Choose the **Management** tab.

1. Under **Inventory configurations**, choose **Create inventory configuration**.

1. For **Inventory configuration name**, enter a name.

1. For **Inventory scope**, do the following:
   + Enter an optional prefix.
   + Choose which object versions to include, either **Current versions only** or **Include all versions**.

1. Under **Report details**, choose the location of the AWS account that you want to save the reports to: **This account** or **A different account**.

1. Under **Destination**, choose the destination bucket where you want the inventory reports to be saved.

   The destination bucket must be in the same AWS Region as the bucket for which you are setting up the inventory. The destination bucket can be in a different AWS account. When specifying the destination bucket, you can also include an optional prefix to group your inventory reports together. 

   Under the **Destination** bucket field, you see the **Destination bucket permission** statement that is added to the destination bucket policy to allow Amazon S3 to place data in that bucket. For more information, see [Creating a destination bucket policy](#configure-inventory-destination-bucket-policy).

1. Under **Frequency**, choose how often the report will be generated, **Daily** or **Weekly**.

1. For **Output format**, choose one of the following formats for the report:
   + **CSV** – If you plan to use this inventory report with S3 Batch Operations or if you want to analyze this report in another tool, such as Microsoft Excel, choose **CSV**.
   + **Apache ORC**
   + **Apache Parquet**

1. Under **Status**, choose **Enable** or **Disable**.

1. To configure server-side encryption, under **Inventory report encryption**, follow these steps:

   1. Under **Server-side encryption**, choose either **Do not specify an encryption key** or **Specify an encryption key** to encrypt data.
      + To keep the bucket settings for default server-side encryption of objects when storing them in Amazon S3, choose **Do not specify an encryption key**. As long as the bucket destination has S3 Bucket Keys enabled, the copy operation applies an S3 Bucket Key at the destination bucket.
**Note**  
If the bucket policy for the specified destination requires objects to be encrypted before storing them in Amazon S3, you must choose **Specify an encryption key**. Otherwise, copying objects to the destination will fail.
      + To encrypt objects before storing them in Amazon S3, choose **Specify an encryption key**.

   1. If you chose **Specify an encryption key**, under **Encryption type**, you must choose either **Amazon S3 managed key (SSE-S3)** or **AWS Key Management Service key (SSE-KMS)**.

      SSE-S3 uses one of the strongest block ciphers—256-bit Advanced Encryption Standard (AES-256) to encrypt each object. SSE-KMS provides you with more control over your key. For more information about SSE-S3, see [Using server-side encryption with Amazon S3 managed keys (SSE-S3)](UsingServerSideEncryption.md). For more information about SSE-KMS, see [Using server-side encryption with AWS KMS keys (SSE-KMS)](UsingKMSEncryption.md).
**Note**  
To encrypt the inventory list file with SSE-KMS, you must grant Amazon S3 permission to use the customer managed key. For instructions, see [Grant Amazon S3 Permission to Encrypt Using Your KMS Keys](#configure-inventory-kms-key-policy).

   1. If you chose **AWS Key Management Service key (SSE-KMS)**, under **AWS KMS key**, you can specify your AWS KMS key through one of the following options.
**Note**  
If the destination bucket that stores the inventory list file is owned by a different AWS account, make sure that you use a fully qualified KMS key ARN to specify your KMS key.
      + To choose from a list of available KMS keys, choose **Choose from your AWS KMS keys**, and choose a symmetric encryption KMS key from the list of available keys. Make sure the KMS key is in the same Region as your bucket. 
**Note**  
Both the AWS managed key (`aws/s3`) and your customer managed keys appear in the list. However, the AWS managed key (`aws/s3`) is not supported for SSE-KMS encryption with S3 Inventory. 
      + To enter the KMS key ARN, choose **Enter AWS KMS key ARN**, and enter your KMS key ARN in the field that appears.
      + To create a new customer managed key in the AWS KMS console, choose **Create a KMS key**.

1. For **Additional metadata fields**, select one or more of the following to add to the inventory report:
   + **Size** – The object size in bytes, not including the size of incomplete multipart uploads, object metadata, and delete markers.
   + **Last modified date** – The object creation date or the last modified date, whichever is the latest.
   +  **Multipart upload** – Specifies that the object was uploaded as a multipart upload. For more information, see [Uploading and copying objects using multipart upload in Amazon S3](mpuoverview.md).
   + **Replication status** – The replication status of the object. For more information, see [Getting replication status information](replication-status.md).
   + **Encryption status** – The server-side encryption type that's used to encrypt the object. For more information, see [Protecting data with server-side encryption](serv-side-encryption.md).
   + **Bucket Key status** – Indicates whether a bucket-level key generated by AWS KMS applies to the object. For more information, see [Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys](bucket-key.md).
   + **Object access control list** – An access control list (ACL) for each object that defines which AWS accounts or groups are granted access to this object and the type of access that is granted. For more information about this field, see [Working with the Object ACL field](objectacl.md). For more information about ACLs, see [Access control list (ACL) overview](acl-overview.md). 
   + **Object owner** – The owner of the object.
   + **Storage class** – The storage class that's used for storing the object. 
   + **Intelligent-Tiering: Access tier** – Indicates the access tier (frequent or infrequent) of the object if it was stored in the S3 Intelligent-Tiering storage class. For more information, see [Storage class for automatically optimizing data with changing or unknown access patterns](storage-class-intro.md#sc-dynamic-data-access).
   + **ETag** – The entity tag (ETag) is a hash of the object. The ETag reflects changes only to the contents of an object, not to its metadata. The ETag might or might not be an MD5 digest of the object data. Whether it is depends on how the object was created and how it is encrypted. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_Object.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Object.html) in the *Amazon Simple Storage Service API Reference*.
   + **Checksum algorithm** – Indicates the algorithm that is used to create the checksum for the object. For more information, see [Using supported checksum algorithms](checking-object-integrity-upload.md#using-additional-checksums).
   + **All Object Lock configurations** – The Object Lock status of the object, including the following settings: 
     + **Object Lock: Retention mode** – The level of protection applied to the object, either *Governance* or *Compliance*.
     + **Object Lock: Retain until date** – The date until which the locked object cannot be deleted.
     + **Object Lock: Legal hold status** – The legal hold status of the locked object. 

     For information about S3 Object Lock, see [How S3 Object Lock works](object-lock.md#object-lock-overview).
   + **Lifecycle Expiration Date** – The lifecycle expiration timestamp for objects in your Inventory report. This field will only be populated, if the object is to be expired by an applicable lifecycle rule. In other cases, the field will be empty. For more information, see [Expiring objects](lifecycle-expire-general-considerations.md).

   For more information about the contents of an inventory report, see [Amazon S3 Inventory list](storage-inventory.md#storage-inventory-contents). 

   For more information about restricting access to certain optional metadata fields in an inventory configuration, see [Control S3 Inventory report configuration creation](example-bucket-policies.md#example-bucket-policies-s3-inventory-2).

1. Choose **Create**.

## Using the REST API to work with S3 Inventory
<a name="rest-api-inventory"></a>

The following are the REST operations that you can use to work with Amazon S3 Inventory.
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETEInventoryConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETEInventoryConfiguration.html) 
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETInventoryConfig.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETInventoryConfig.html) 
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketListInventoryConfigs.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketListInventoryConfigs.html) 
+  [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTInventoryConfig.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTInventoryConfig.html) 

# Locating your inventory list
<a name="storage-inventory-location"></a>

When an inventory list is published, the manifest files are published to the following location in the destination bucket.

```
destination-prefix/amzn-s3-demo-source-bucket/config-ID/YYYY-MM-DDTHH-MMZ/manifest.json
 destination-prefix/amzn-s3-demo-source-bucket/config-ID/YYYY-MM-DDTHH-MMZ/manifest.checksum
 destination-prefix/amzn-s3-demo-source-bucket/config-ID/hive/dt=YYYY-MM-DD-HH-MM/symlink.txt
```
+ `destination-prefix` is the object key name prefix that is optionally specified in the inventory configuration. You can use this prefix to group all the inventory list files in a common location within the destination bucket.
+ `amzn-s3-demo-source-bucket` is the source bucket that the inventory list is for. The source bucket name is added to prevent collisions when multiple inventory reports from different source buckets are sent to the same destination bucket.
+ `config-ID` is added to prevent collisions with multiple inventory reports from the same source bucket that are sent to the same destination bucket. The `config-ID` comes from the inventory report configuration, and is the name for the report that is defined during setup.
+ `YYYY-MM-DDTHH-MMZ` is the timestamp that consists of the start time and the date when the inventory report generation process begins scanning the bucket; for example, `2016-11-06T21-32Z`.
+ `manifest.json` is the manifest file. 
+ `manifest.checksum` is the MD5 hash of the content of the `manifest.json` file. 
+ `symlink.txt` is the Apache Hive-compatible manifest file. 

The inventory lists are published daily or weekly to the following location in the destination bucket.

```
destination-prefix/amzn-s3-demo-source-bucket/config-ID/data/example-file-name.csv.gz
...
destination-prefix/amzn-s3-demo-source-bucket/config-ID/data/example-file-name-1.csv.gz
```
+ `destination-prefix` is the object key name prefix that is optionally specified in the inventory configuration. You can use this prefix to group all the inventory list files in a common location in the destination bucket.
+ `amzn-s3-demo-source-bucket` is the source bucket that the inventory list is for. The source bucket name is added to prevent collisions when multiple inventory reports from different source buckets are sent to the same destination bucket.
+ `example-file-name``.csv.gz` is one of the CSV inventory files. ORC inventory names end with the file name extension `.orc`, and Parquet inventory names end with the file name extension `.parquet`.

## Inventory manifest
<a name="storage-inventory-location-manifest"></a>

The manifest files `manifest.json` and `symlink.txt` describe where the inventory files are located. Whenever a new inventory list is delivered, it is accompanied by a new set of manifest files. These files might overwrite each other. In versioning-enabled buckets, Amazon S3 creates new versions of the manifest files. 

Each manifest contained in the `manifest.json` file provides metadata and other basic information about an inventory. This information includes the following:
+ The source bucket name
+ The destination bucket name
+ The version of the inventory
+ The creation timestamp in the epoch date format that consists of the start time and the date when the inventory report generation process begins scanning the bucket
+ The format and schema of the inventory files
+ A list of the inventory files that are in the destination bucket

Whenever a `manifest.json` file is written, it is accompanied by a `manifest.checksum` file that is the MD5 hash of the content of the `manifest.json` file.

**Example Inventory manifest in a `manifest.json` file**  
The following examples show an inventory manifest in a `manifest.json` file for CSV, ORC, and Parquet-formatted inventories.  
The following is an example of a manifest in a `manifest.json` file for a CSV-formatted inventory.  

```
{
    "sourceBucket": "amzn-s3-demo-source-bucket",
    "destinationBucket": "arn:aws:s3:::example-inventory-destination-bucket",
    "version": "2016-11-30",
    "creationTimestamp" : "1514944800000",
    "fileFormat": "CSV",
    "fileSchema": "Bucket, Key, VersionId, IsLatest, IsDeleteMarker, Size, LastModifiedDate, ETag, StorageClass, IsMultipartUploaded, ReplicationStatus, EncryptionStatus, ObjectLockRetainUntilDate, ObjectLockMode, ObjectLockLegalHoldStatus, IntelligentTieringAccessTier, BucketKeyStatus, ChecksumAlgorithm, ObjectAccessControlList, ObjectOwner",
    "files": [
        {
            "key": "Inventory/amzn-s3-demo-source-bucket/2016-11-06T21-32Z/files/939c6d46-85a9-4ba8-87bd-9db705a579ce.csv.gz",
            "size": 2147483647,
            "MD5checksum": "f11166069f1990abeb9c97ace9cdfabc"
        }
    ]
}
```
The following is an example of a manifest in a `manifest.json` file for an ORC-formatted inventory.  

```
{
    "sourceBucket": "amzn-s3-demo-source-bucket",
    "destinationBucket": "arn:aws:s3:::example-destination-bucket",
    "version": "2016-11-30",
    "creationTimestamp" : "1514944800000",
    "fileFormat": "ORC",
    "fileSchema": "struct<bucket:string,key:string,version_id:string,is_latest:boolean,is_delete_marker:boolean,size:bigint,last_modified_date:timestamp,e_tag:string,storage_class:string,is_multipart_uploaded:boolean,replication_status:string,encryption_status:string,object_lock_retain_until_date:timestamp,object_lock_mode:string,object_lock_legal_hold_status:string,intelligent_tiering_access_tier:string,bucket_key_status:string,checksum_algorithm:string,object_access_control_list:string,object_owner:string>",
    "files": [
        {
            "key": "inventory/amzn-s3-demo-source-bucket/data/d794c570-95bb-4271-9128-26023c8b4900.orc",
            "size": 56291,
            "MD5checksum": "5925f4e78e1695c2d020b9f6eexample"
        }
    ]
}
```
The following is an example of a manifest in a `manifest.json` file for a Parquet-formatted inventory.  

```
{
    "sourceBucket": "amzn-s3-demo-source-bucket",
    "destinationBucket": "arn:aws:s3:::example-destination-bucket",
    "version": "2016-11-30",
    "creationTimestamp" : "1514944800000",
    "fileFormat": "Parquet",
    "fileSchema": "message s3.inventory { required binary bucket (UTF8); required binary key (UTF8); optional binary version_id (UTF8); optional boolean is_latest; optional boolean is_delete_marker; optional int64 size; optional int64 last_modified_date (TIMESTAMP_MILLIS); optional binary e_tag (UTF8); optional binary storage_class (UTF8); optional boolean is_multipart_uploaded; optional binary replication_status (UTF8); optional binary encryption_status (UTF8); optional int64 object_lock_retain_until_date (TIMESTAMP_MILLIS); optional binary object_lock_mode (UTF8); optional binary object_lock_legal_hold_status (UTF8); optional binary intelligent_tiering_access_tier (UTF8); optional binary bucket_key_status (UTF8); optional binary checksum_algorithm (UTF8); optional binary object_access_control_list (UTF8); optional binary object_owner (UTF8);}",
    "files": [
        {
           "key": "inventory/amzn-s3-demo-source-bucket/data/d754c470-85bb-4255-9218-47023c8b4910.parquet",
            "size": 56291,
            "MD5checksum": "5825f2e18e1695c2d030b9f6eexample"
        }
    ]
}
```
The `symlink.txt` file is an Apache Hive-compatible manifest file that allows Hive to automatically discover inventory files and their associated data files. The Hive-compatible manifest works with the Hive-compatible services Athena and Amazon Redshift Spectrum. It also works with Hive-compatible applications, including [https://prestodb.io/](https://prestodb.io/), [https://hive.apache.org/](https://hive.apache.org/), [https://databricks.com/spark/about/](https://databricks.com/spark/about/), and many others.  
The `symlink.txt` Apache Hive-compatible manifest file does not currently work with AWS Glue.  
Reading the `symlink.txt` file with [https://hive.apache.org/](https://hive.apache.org/) and [https://databricks.com/spark/about/](https://databricks.com/spark/about/) is not supported for ORC and Parquet-formatted inventory files. 

# Setting up Amazon S3 Event Notifications for inventory completion
<a name="storage-inventory-notification"></a>

You can set up an Amazon S3 event notification to receive notice when the manifest checksum file is created, which indicates that an inventory list has been added to the destination bucket. The manifest is an up-to-date list of all the inventory lists at the destination location.

Amazon S3 can publish events to an Amazon Simple Notification Service (Amazon SNS) topic, an Amazon Simple Queue Service (Amazon SQS) queue, or an AWS Lambda function. For more information, see [Amazon S3 Event Notifications](EventNotifications.md).

The following notification configuration defines that all `manifest.checksum` files newly added to the destination bucket are processed by the AWS Lambda `cloud-function-list-write`.

```
<NotificationConfiguration>
  <QueueConfiguration>
      <Id>1</Id>
      <Filter>
          <S3Key>
              <FilterRule>
                  <Name>prefix</Name>
                  <Value>destination-prefix/source-bucket</Value>
              </FilterRule>
              <FilterRule>
                  <Name>suffix</Name>
                  <Value>checksum</Value>
              </FilterRule>
          </S3Key>
     </Filter>
     <CloudFunction>arn:aws:lambda:us-west-2:222233334444:cloud-function-list-write</CloudFunction>
     <Event>s3:ObjectCreated:*</Event>
  </QueueConfiguration>
  </NotificationConfiguration>
```

For more information, see [Using AWS Lambda with Amazon S3](https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html) in the *AWS Lambda Developer Guide*.

# Querying Amazon S3 Inventory with Amazon Athena
<a name="storage-inventory-athena-query"></a>

You can query Amazon S3 Inventory files with standard SQL queries by using Amazon Athena in all Regions where Athena is available. To check for AWS Region availability, see the [AWS Region Table](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). 

Athena can query Amazon S3 Inventory files in [Apache optimized row columnar (ORC)](https://orc.apache.org/), [https://parquet.apache.org/](https://parquet.apache.org/), or comma-separated values (CSV) format. When you use Athena to query inventory files, we recommend that you use ORC-formatted or Parquet-formatted inventory files. The ORC and Parquet formats provide faster query performance and lower query costs. ORC and Parquet are self-describing, type-aware columnar file formats designed for [http://hadoop.apache.org/](http://hadoop.apache.org/). The columnar format lets the reader read, decompress, and process only the columns that are required for the current query. The ORC and Parquet formats for Amazon S3 Inventory are available in all AWS Regions.

**To use Athena to query Amazon S3 Inventory files**

1. Create an Athena table. For information about creating a table, see [Creating Tables in Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/creating-tables.html) in the *Amazon Athena User Guide*.

1. Create your query by using one of the following sample query templates, depending on whether you're querying an ORC-formatted, a Parquet-formatted, or a CSV-formatted inventory report. 
   + When you're using Athena to query an ORC-formatted inventory report, use the following sample query as a template.

     The following sample query includes all the optional fields in an ORC-formatted inventory report. 

     To use this sample query, do the following: 
     + Replace `your_table_name` with the name of the Athena table that you created.
     + Remove any optional fields that you did not choose for your inventory so that the query corresponds to the fields chosen for your inventory.
     + Replace the following bucket name and inventory location (the configuration ID) as appropriate for your configuration.

       `s3://amzn-s3-demo-bucket/config-ID/hive/`
     + Replace the `2022-01-01-00-00` date under `projection.dt.range` with the first day of the time range within which you partition the data in Athena. For more information, see [Partitioning data in Athena](https://docs.aws.amazon.com/athena/latest/ug/partitions.html).

     ```
     CREATE EXTERNAL TABLE your_table_name (
              bucket string,
              key string,
              version_id string,
              is_latest boolean,
              is_delete_marker boolean,
              size bigint,
              last_modified_date timestamp,
              e_tag string,
              storage_class string,
              is_multipart_uploaded boolean,
              replication_status string,
              encryption_status string,
              object_lock_retain_until_date bigint,
              object_lock_mode string,
              object_lock_legal_hold_status string,
              intelligent_tiering_access_tier string,
              bucket_key_status string,
              checksum_algorithm string,
              object_access_control_list string,
              object_owner string,
              lifecycle_expiration_date timestamp
     ) PARTITIONED BY (
             dt string
     )
     ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
       STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.SymlinkTextInputFormat'
       OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat'
       LOCATION 's3://amzn-s3-demo-bucket/config-ID/hive/'
       TBLPROPERTIES (
         "projection.enabled" = "true",
         "projection.dt.type" = "date",
         "projection.dt.format" = "yyyy-MM-dd-HH-mm",
         "projection.dt.range" = "2022-01-01-00-00,NOW",
         "projection.dt.interval" = "1",
         "projection.dt.interval.unit" = "HOURS"
       );
     ```
   + When you're using Athena to query a Parquet-formatted inventory report, use the sample query for an ORC-formatted report. However, use the following Parquet SerDe in place of the ORC SerDe in the `ROW FORMAT SERDE` statement.

     ```
     ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
     ```
   + When you're using Athena to query a CSV-formatted inventory report, use the following sample query as a template.

     The following sample query includes all the optional fields in an CSV-formatted inventory report. 

     To use this sample query, do the following: 
     + Replace `your_table_name` with the name of the Athena table that you created.
     + Remove any optional fields that you did not choose for your inventory so that the query corresponds to the fields chosen for your inventory.
     + Replace the following bucket name and inventory location (the configuration ID) as appropriate for your configuration. 

       `s3://amzn-s3-demo-bucket/config-ID/hive/`
     + Replace the `2022-01-01-00-00` date under `projection.dt.range` with the first day of the time range within which you partition the data in Athena. For more information, see [Partitioning data in Athena](https://docs.aws.amazon.com/athena/latest/ug/partitions.html).

     ```
     CREATE EXTERNAL TABLE your_table_name (
              bucket string,
              key string,
              version_id string,
              is_latest boolean,
              is_delete_marker boolean,
              size string,
              last_modified_date string,
              e_tag string,
              storage_class string,
              is_multipart_uploaded boolean,
              replication_status string,
              encryption_status string,
              object_lock_retain_until_date string,
              object_lock_mode string,
              object_lock_legal_hold_status string,
              intelligent_tiering_access_tier string,
              bucket_key_status string,
              checksum_algorithm string,
              object_access_control_list string,
              object_owner string,
              lifecycle_expiration_date string
     ) PARTITIONED BY (
             dt string
     )
     ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
       STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.SymlinkTextInputFormat'
       OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat'
       LOCATION 's3://amzn-s3-demo-bucket/config-ID/hive/'
       TBLPROPERTIES (
         "projection.enabled" = "true",
         "projection.dt.type" = "date",
         "projection.dt.format" = "yyyy-MM-dd-HH-mm",
         "projection.dt.range" = "2022-01-01-00-00,NOW",
         "projection.dt.interval" = "1",
         "projection.dt.interval.unit" = "HOURS"
       );
     ```

1. You can now run various queries on your inventory, as shown in the following examples. Replace each `user input placeholder` with your own information.

   ```
   # Get a list of the latest inventory report dates available.
   SELECT DISTINCT dt FROM your_table_name ORDER BY 1 DESC limit 10;
             
   # Get the encryption status for a provided report date.
   SELECT encryption_status, count(*) FROM your_table_name WHERE dt = 'YYYY-MM-DD-HH-MM' GROUP BY encryption_status;
             
   # Get the encryption status for inventory report dates in the provided range.
   SELECT dt, encryption_status, count(*) FROM your_table_name 
   WHERE dt > 'YYYY-MM-DD-HH-MM' AND dt < 'YYYY-MM-DD-HH-MM' GROUP BY dt, encryption_status;
   ```

   When you configure S3 Inventory to add the Object Access Control List (Object ACL) field to an inventory report, the report displays the value for the Object ACL field as a base64-encoded string. To get the decoded value in JSON for the Object ACL field, you can query this field by using Athena. See the following query examples. For more information about the Object ACL field, see [Working with the Object ACL field](objectacl.md).

   ```
   # Get the S3 keys that have Object ACL grants with public access.
   WITH grants AS (
       SELECT key,
           CAST(
               json_extract(from_utf8(from_base64(object_access_control_list)), '$.grants') AS ARRAY(MAP(VARCHAR, VARCHAR))
           ) AS grants_array
       FROM your_table_name
   )
   SELECT key,
          grants_array,
          grant
   FROM grants, UNNEST(grants_array) AS t(grant)
   WHERE element_at(grant, 'uri') = 'http://acs.amazonaws.com/groups/global/AllUsers'
   ```

   ```
   # Get the S3 keys that have Object ACL grantees in addition to the object owner.
   WITH grants AS 
       (SELECT key,
       from_utf8(from_base64(object_access_control_list)) AS object_access_control_list,
            object_owner,
            CAST(json_extract(from_utf8(from_base64(object_access_control_list)),
            '$.grants') AS ARRAY(MAP(VARCHAR, VARCHAR))) AS grants_array
       FROM your_table_name)
   SELECT key,
          grant,
          objectowner
   FROM grants, UNNEST(grants_array) AS t(grant)
   WHERE cardinality(grants_array) > 1 AND element_at(grant, 'canonicalId') != object_owner;
   ```

   ```
   # Get the S3 keys with READ permission that is granted in the Object ACL. 
   WITH grants AS (
       SELECT key,
           CAST(
               json_extract(from_utf8(from_base64(object_access_control_list)), '$.grants') AS ARRAY(MAP(VARCHAR, VARCHAR))
           ) AS grants_array
       FROM your_table_name
   )
   SELECT key,
          grants_array,
          grant
   FROM grants, UNNEST(grants_array) AS t(grant)
   WHERE element_at(grant, 'permission') = 'READ';
   ```

   ```
   # Get the S3 keys that have Object ACL grants to a specific canonical user ID.
   WITH grants AS (
       SELECT key,
           CAST(
               json_extract(from_utf8(from_base64(object_access_control_list)), '$.grants') AS ARRAY(MAP(VARCHAR, VARCHAR))
           ) AS grants_array
       FROM your_table_name
   )
   SELECT key,
          grants_array,
          grant
   FROM grants, UNNEST(grants_array) AS t(grant)
   WHERE element_at(grant, 'canonicalId') = 'user-canonical-id';
   ```

   ```
   # Get the number of grantees on the Object ACL.
   SELECT key,
          object_access_control_list,
          json_array_length(json_extract(object_access_control_list,'$.grants')) AS grants_count
   FROM your_table_name;
   ```

For more information about using Athena, see the [Amazon Athena User Guide](https://docs.aws.amazon.com/athena/latest/ug/).

# Converting empty version ID strings in Amazon S3 Inventory reports to null strings
<a name="inventory-configure-bops"></a>

**Note**  
**The following procedure applies only to Amazon S3 Inventory reports that include all versions, and only if the "all versions" reports are used as manifests for S3 Batch Operations on buckets that have S3 Versioning enabled.** You are not required to convert strings for S3 Inventory reports that specify the current version only.

You can use S3 Inventory reports as manifests for S3 Batch Operations. However, when S3 Versioning is enabled on a bucket, S3 Inventory reports that include all versions mark any null-versioned objects with empty strings in the version ID field. When an Inventory Report includes all object version IDs, Batch Operations recognizes `null` strings as version IDs, but not empty strings. 

When an S3 Batch Operations job uses an "all versions" S3 Inventory report as a manifest, it fails all tasks on objects that have an empty string in the version ID field. To convert empty strings in the version ID field of the S3 Inventory report to `null` strings for Batch Operations, use the following procedure.

**Update an Amazon S3 Inventory report for use with Batch Operations**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Navigate to your S3 Inventory report. The inventory report is located in the destination bucket that you specified while configuring your inventory report. For more information about locating inventory reports, see [Locating your inventory list](storage-inventory-location.md).

   1. Choose the destination bucket.

   1. Choose the folder. The folder is named after the original source bucket.

   1. Choose the folder named after the inventory configuration.

   1. Select the check box next to the folder named **hive**. At the top of the page, choose **Copy S3 URI** to copy the S3 URI for the folder.

1. Open the Amazon Athena console at [https://console.aws.amazon.com/athena/](https://console.aws.amazon.com/athena/home). 

1. In the query editor, choose **Settings**, then choose **Manage**. On the **Manage settings** page, for **Location of query result**, choose an S3 bucket to store your query results in.

1. In the query editor, create an Athena table to hold the data in the inventory report using the following command. Replace `table_name` with a name of your choosing, and in the `LOCATION` clause, insert the S3 URI that you copied earlier. Then choose **Run** to run the query.

   ```
   CREATE EXTERNAL TABLE table_name(bucket string, key string, version_id string) PARTITIONED BY (dt string)ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.SymlinkTextInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat' LOCATION 'Copied S3 URI';
   ```

1. To clear the query editor, choose **Clear**. Then load the inventory report into the table using the following command. Replace `table_name` with the one that you chose in the prior step. Then choose **Run** to run the query.

   ```
   MSCK REPAIR TABLE table_name;
   ```

1. To clear the query editor, choose **Clear**. Run the following `SELECT` query to retrieve all entries in the original inventory report and replace any empty version IDs with `null` strings. Replace `table_name` with the one that you chose earlier, and replace `YYYY-MM-DD-HH-MM` in the `WHERE` clause with the date of the inventory report that you want this tool to run on. Then choose **Run** to run the query.

   ```
   SELECT bucket as Bucket, key as Key, CASE WHEN version_id = '' THEN 'null' ELSE version_id END as VersionId FROM table_name WHERE dt = 'YYYY-MM-DD-HH-MM';
   ```

1. Return to the Amazon S3 console ([https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/)), and navigate to the S3 bucket that you chose for **Location of query result** earlier. Inside, there should be a series of folders ending with the date.

   For example, you should see something like **s3://**amzn-s3-demo-bucket**/*query-result-location*/Unsaved/2021/10/07/**. You should see `.csv` files containing the results of the `SELECT` query that you ran. 

   Choose the CSV file with the latest modified date. Download this file to your local machine for the next step.

1. The generated CSV file contains a header row. To use this CSV file as input for an S3 Batch Operations job, you must remove the header row, because Batch Operations doesn't support header rows on CSV manifests. 

   To remove the header row, you can run one of the following commands on the file. Replace *`file.csv`* with the name of your CSV file. 

   **For macOS and Linux machines**, run the `tail` command in a Terminal window. 

   ```
   tail -n +2 file.csv > tmp.csv && mv tmp.csv file.csv 
   ```

   **For Windows machines**, run the following script in a Windows PowerShell window. Replace `File-location` with the path to your file, and `file.csv` with the file name.

   ```
   $ins = New-Object System.IO.StreamReader File-location\file.csv
   $outs = New-Object System.IO.StreamWriter File-location\temp.csv
   try {
       $skip = 0
       while ( !$ins.EndOfStream ) {
           $line = $ins.ReadLine();
           if ( $skip -ne 0 ) {
               $outs.WriteLine($line);
           } else {
               $skip = 1
           }
       }
   } finally {
       $outs.Close();
       $ins.Close();
   }
   Move-Item File-location\temp.csv File-location\file.csv -Force
   ```

1. After removing the header row from the CSV file, you are ready to use it as a manifest in an S3 Batch Operations job. Upload the CSV file to an S3 bucket or location of your choosing, and then create a Batch Operations job using the CSV file as the manifest.

   For more information about creating a Batch Operations job, see [Creating an S3 Batch Operations job](batch-ops-create-job.md).

# Working with the Object ACL field
<a name="objectacl"></a>

An Amazon S3 Inventory report contains a list of the objects in the S3 source bucket and metadata for each object. The Object access control list (ACL) field is a metadata field that is available in Amazon S3 Inventory. Specifically, the Object ACL field contains the access control list (ACL) for each object. The ACL for an object defines which AWS accounts or groups are granted access to this object and the type of access that is granted. For more information, see [Access control list (ACL) overview](acl-overview.md) and [Amazon S3 Inventory list](storage-inventory.md#storage-inventory-contents). 

 The Object ACL field in Amazon S3 Inventory reports is defined in JSON format. The JSON data includes the following fields: 
+ `version` – The version of the Object ACL field format in the inventory reports. It's in date format `yyyy-mm-dd`. 
+ `status` – Possible values are `AVAILABLE` or `UNAVAILABLE` to indicate whether an Object ACL is available for an object. When the status for the Object ACL is `UNAVAILABLE`, the value of the Object Owner field in the inventory report is also `UNAVAILABLE`.
+ `grants` – Grantee-permission pairs that list the permission status of each grantee that is granted by the Object ACL. The available values for a grantee are `CanonicalUser` and `Group`. For more information about grantees, see [Grantees in access control lists](https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#specifying-grantee).

  For a grantee with the `Group` type, a grantee-permission pair includes the following attributes:
  + `uri` – A predefined Amazon S3 group.
  + `permission` – The ACL permissions that are granted on the object. For more information, see [ACL permissions on an object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#permissions).
  + `type` – The type `Group`, which denotes that the grantee is group.

  For a grantee with the `CanonicalUser` type, a grantee-permission pair includes the following attributes:
  + `canonicalId` – An obfuscated form of the AWS account ID. The canonical user ID for an AWS account is specific to that account. You can retrieve the canonical user ID. For more information, see [Find the canonical user ID for your AWS account](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-identifiers.html#FindCanonicalId) in the *AWS Account Management Reference Guide*.
  + `permission` – The ACL permissions that are granted on the object. For more information, see [ACL permissions on an object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#permissions).
  + `type` – The type `CanonicalUser`, which denotes that the grantee is an AWS account.

The following example shows possible values for the Object ACL field in JSON format: 

```
{
    "version": "2022-11-10",
    "status": "AVAILABLE",
    "grants": [{
        "uri": "http://acs.amazonaws.com/groups/global/AllUsers",
        "permission": "READ",
        "type": "Group"
    }, {
        "canonicalId": "example-canonical-id",
        "permission": "FULL_CONTROL",
        "type": "CanonicalUser"
    }]
}
```

**Note**  
The Object ACL field is defined in JSON format. An inventory report displays the value for the Object ACL field as a base64-encoded string.  
For example, suppose that you have the following Object ACL field in JSON format:  

```
{
        "version": "2022-11-10",
        "status": "AVAILABLE",
        "grants": [{
            "canonicalId": "example-canonical-user-ID",
            "type": "CanonicalUser",
            "permission": "READ"
        }]
}
```
The Object ACL field is encoded and shown as the following base64-encoded string:  

```
eyJ2ZXJzaW9uIjoiMjAyMi0xMS0xMCIsInN0YXR1cyI6IkFWQUlMQUJMRSIsImdyYW50cyI6W3siY2Fub25pY2FsSWQiOiJleGFtcGxlLWNhbm9uaWNhbC11c2VyLUlEIiwidHlwZSI6IkNhbm9uaWNhbFVzZXIiLCJwZXJtaXNzaW9uIjoiUkVBRCJ9XX0=
```
To get the decoded value in JSON for the Object ACL field, you can query this field in Amazon Athena. For query examples, see [Querying Amazon S3 Inventory with Amazon Athena](storage-inventory-athena-query.md).