

# Managing the lifecycle of objects
<a name="object-lifecycle-mgmt"></a>

S3 Lifecycle helps you store objects cost-effectively throughout their lifecycle by transitioning them to lower-cost storage classes or by deleting expired objects on your behalf. To manage the lifecycle of your objects, create an *S3 Lifecycle configuration* for your bucket. An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions:
+ **Transition actions** – These actions define when objects transition to another storage class. For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after creating them, or archive objects to the S3 Glacier Flexible Retrieval storage class one year after creating them. For more information, see [Understanding and managing Amazon S3 storage classes](storage-class-intro.md). 

  There are costs associated with lifecycle transition requests. For pricing information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).
+ **Expiration actions** – These actions define when objects expire. Amazon S3 deletes expired objects on your behalf. For example, you might choose to expire objects after they have been stored for a regulatory compliance period. For more information, see [Expiring objects](lifecycle-expire-general-considerations.md).

  There are potential costs associated with lifecycle expiration only when you expire objects in a storage class with a minimum storage duration. For more information, see [Minimum storage duration charge](lifecycle-expire-general-considerations.md#lifecycle-expire-minimum-storage).
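The two action types above can be sketched as a minimal lifecycle configuration in the dictionary shape used by the AWS SDKs and REST API. The rule IDs, prefix, and day counts below are illustrative assumptions, not values from this guide.

```python
# A minimal sketch of an S3 Lifecycle configuration with one transition
# action and one expiration action. Rule IDs, the "logs/" prefix, and the
# day counts are hypothetical examples.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "transition-logs-to-standard-ia",   # hypothetical rule ID
            "Filter": {"Prefix": "logs/"},            # hypothetical prefix
            "Status": "Enabled",
            # Transition action: move objects 30 days after creation.
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        },
        {
            "ID": "expire-old-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            # Expiration action: Amazon S3 deletes the objects on your behalf.
            "Expiration": {"Days": 365},
        },
    ]
}
```

Each rule pairs a filter with a status and one or more actions; a configuration can hold both transition and expiration rules for the same objects.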

**Important**  
**General purpose buckets** — You can't use a bucket policy to prevent deletions or transitions by an S3 Lifecycle rule. For example, even if your bucket policy denies all actions for all principals, your S3 Lifecycle configuration still functions as normal.

**Existing and new objects**  
When you add a Lifecycle configuration to a bucket, the configuration rules apply to both existing objects and objects that you add later. For example, if you add a Lifecycle configuration rule today with an expiration action that causes objects to expire 30 days after creation, Amazon S3 will queue for removal any existing objects that are more than 30 days old.

**Changes in billing**  
If there is any delay between when an object becomes eligible for a lifecycle action and when Amazon S3 transfers or expires your object, billing changes are applied as soon as the object becomes eligible for the lifecycle action. For example, if an object is scheduled to expire and Amazon S3 doesn't immediately expire the object, you won't be charged for storage after the expiration time. 

The one exception to this behavior is if you have a lifecycle rule to transition to the S3 Intelligent-Tiering storage class. In that case, billing changes don't occur until the object has transitioned to S3 Intelligent-Tiering. For more information about S3 Lifecycle rules, see [Lifecycle configuration elements](intro-lifecycle-rules.md). 

**Note**  
There are no data retrieval charges for lifecycle transitions. However, there are per-request ingestion charges when using `PUT`, `COPY`, or lifecycle rules to move data into any S3 storage class. Consider the ingestion or transition cost before moving objects into any storage class. For more information about cost considerations, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

**Monitoring the effect of lifecycle rules**  
To monitor the effect of updates made by active lifecycle rules, see [How do I monitor the actions taken by my lifecycle rules?](troubleshoot-lifecycle.md#troubleshoot-lifecycle-2).

## Managing the complete lifecycle of objects
<a name="lifecycle-config-overview-what"></a>

With S3 Lifecycle configuration rules, you can tell Amazon S3 to transition objects to less-expensive storage classes, archive them, or delete them. For example: 
+ If you upload periodic logs to a bucket, your application might need them for a week or a month. After that, you might want to delete them.
+ Some documents are frequently accessed for a limited period of time. After that, they are infrequently accessed. At some point, you might not need real-time access to them, but your organization or regulations might require you to archive them for a specific period. After that, you can delete them. 
+ You might upload some types of data to Amazon S3 primarily for archival purposes. For example, you might archive digital media, financial, and healthcare records, raw genomics sequence data, long-term database backups, and data that must be retained for regulatory compliance.

By combining S3 Lifecycle actions, you can manage an object's complete lifecycle. For example, suppose that the objects you create have a well-defined lifecycle. Initially, the objects are frequently accessed for a period of 30 days. Then, objects are infrequently accessed for up to 90 days. After that, the objects are no longer needed, so you might choose to archive or delete them. 

In this scenario, you can create an S3 Lifecycle rule in which you specify the initial transition action to S3 Intelligent-Tiering, S3 Standard-IA, or S3 One Zone-IA storage, another transition action to S3 Glacier Flexible Retrieval storage for archiving, and an expiration action. As you move the objects from one storage class to another, you save on storage costs. For more information about cost considerations, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).
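The scenario above can be sketched as a single rule with two transition actions and an expiration action. The 365-day expiration is an illustrative assumption, since the text only says the objects are eventually archived or deleted; `GLACIER` is the API name for the S3 Glacier Flexible Retrieval storage class.

```python
# Sketch of the complete-lifecycle scenario: Standard-IA at day 30,
# S3 Glacier Flexible Retrieval at day 90, then expiration.
# The 365-day expiration is a hypothetical choice for illustration.
complete_lifecycle_rule = {
    "ID": "complete-lifecycle",
    "Filter": {},        # an empty filter applies the rule to all objects
    "Status": "Enabled",
    "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},
        {"Days": 90, "StorageClass": "GLACIER"},  # S3 Glacier Flexible Retrieval
    ],
    "Expiration": {"Days": 365},
}
```

Because all three actions live in one rule, the transitions and the expiration are evaluated against the same filter and the same object creation date.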

**Topics**
+ [Managing the complete lifecycle of objects](#lifecycle-config-overview-what)
+ [Transitioning objects using Amazon S3 Lifecycle](lifecycle-transition-general-considerations.md)
+ [Expiring objects](lifecycle-expire-general-considerations.md)
+ [Setting an S3 Lifecycle configuration on a bucket](how-to-set-lifecycle-configuration-intro.md)
+ [How S3 Lifecycle interacts with other bucket configurations](lifecycle-and-other-bucket-config.md)
+ [Configuring S3 Lifecycle event notifications](lifecycle-configure-notification.md)
+ [Lifecycle configuration elements](intro-lifecycle-rules.md)
+ [How Amazon S3 handles conflicts in lifecycle configurations](lifecycle-conflicts.md)
+ [Examples of S3 Lifecycle configurations](lifecycle-configuration-examples.md)
+ [Troubleshooting Amazon S3 Lifecycle issues](troubleshoot-lifecycle.md)

# Transitioning objects using Amazon S3 Lifecycle
<a name="lifecycle-transition-general-considerations"></a>

You can add transition actions to an S3 Lifecycle configuration to tell Amazon S3 to move objects to another Amazon S3 storage class. For more information about storage classes, see [Understanding and managing Amazon S3 storage classes](storage-class-intro.md). Some examples of when you might use S3 Lifecycle configurations in this way include the following:
+ When you know that objects are infrequently accessed, you might transition them to the S3 Standard-IA storage class.
+ You might want to archive objects that you don't need to access in real time to the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes.

**Note**  
Encrypted objects remain encrypted throughout the storage class transition process.

## Supported transitions
<a name="lifecycle-general-considerations-transition-sc"></a>

In an S3 Lifecycle configuration, you can define rules to transition objects from one storage class to another to save on storage costs. When you don't know the access patterns of your objects, or if your access patterns are changing over time, you can transition the objects to the S3 Intelligent-Tiering storage class for automatic cost savings. For information about storage classes, see [Understanding and managing Amazon S3 storage classes](storage-class-intro.md). 

Amazon S3 supports a waterfall model for transitioning between storage classes, as shown in the following diagram. 

![\[Amazon S3 storage class waterfall graphic.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/lifecycle-transitions-v4.png)


### Supported lifecycle transitions
<a name="supported-lifecycle-transitions"></a>

Amazon S3 supports the following lifecycle transitions between storage classes using an S3 Lifecycle configuration. 
+ The S3 Standard storage class to the S3 Standard-IA, S3 Intelligent-Tiering, S3 One Zone-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, or S3 Glacier Deep Archive storage classes.
+ The S3 Standard-IA storage class to the S3 Intelligent-Tiering, S3 One Zone-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, or S3 Glacier Deep Archive storage classes.
+ The S3 Intelligent-Tiering storage class can transition to different storage classes depending on the S3 Intelligent-Tiering access tier. The following transitions are possible for each access tier.
  + Frequent Access tier or Infrequent Access tier to S3 One Zone-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, or S3 Glacier Deep Archive storage classes.
  + Archive Instant Access tier to S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, or S3 Glacier Deep Archive storage classes.
  + Archive Access tier to S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes.
  + Deep Archive Access tier to the S3 Glacier Deep Archive storage class.
+ The S3 One Zone-IA storage class to the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes.
+ The S3 Glacier Instant Retrieval storage class to the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes.
+ The S3 Glacier Flexible Retrieval storage class to the S3 Glacier Deep Archive storage class.
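The waterfall model behind the list above can be encoded as an ordering check. This is a simplified sketch: it uses the API storage class names, special-cases the One Zone-IA restriction, and deliberately ignores the per-access-tier rules for S3 Intelligent-Tiering, so treat it as a rough illustration rather than a complete validator.

```python
# Simplified encoding of the one-way "waterfall" for lifecycle transitions.
# Storage classes are listed from least to most archival; a transition is
# supported only when it moves downward in this ordering.
WATERFALL = [
    "STANDARD",
    "STANDARD_IA",
    "INTELLIGENT_TIERING",
    "ONEZONE_IA",
    "GLACIER_IR",    # S3 Glacier Instant Retrieval
    "GLACIER",       # S3 Glacier Flexible Retrieval
    "DEEP_ARCHIVE",
]

def is_supported_transition(source: str, destination: str) -> bool:
    """Return True if a lifecycle transition source -> destination is allowed
    by this simplified waterfall model."""
    # Per the list above, One Zone-IA can move only to S3 Glacier
    # Flexible Retrieval or S3 Glacier Deep Archive.
    if source == "ONEZONE_IA":
        return destination in ("GLACIER", "DEEP_ARCHIVE")
    return WATERFALL.index(source) < WATERFALL.index(destination)
```

For example, `is_supported_transition("GLACIER", "STANDARD")` is `False`, because transitions never move back up the waterfall.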

**Note**  
For versioning enabled or versioning suspended buckets, you can't transition objects with a `Pending` or `Failed` replication status.

## Constraints and considerations for transitions
<a name="lifecycle-configuration-constraints"></a>

Lifecycle storage class transitions have the following constraints:

**Objects smaller than 128 KB do not transition to any storage class by default**  
Amazon S3 applies a default behavior to S3 Lifecycle configurations that prevents objects smaller than 128 KB from being transitioned to any storage class. We don't recommend transitioning objects smaller than 128 KB because you are charged for a transition request for each object. For smaller objects, the transition costs can therefore outweigh the storage savings. For more information about transition request costs, see **Requests & data retrievals** on the **Storage & requests** tab of the [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/) page.

 To allow smaller objects to transition, you can add an [object size filter](intro-lifecycle-rules.md#intro-lifecycle-rules-filter) to your Lifecycle transition rules that specifies a custom minimum size (`ObjectSizeGreaterThan`) or maximum size (`ObjectSizeLessThan`). For more information, see [Example: Allowing objects smaller than 128 KB to be transitioned](lifecycle-configuration-examples.md#lc-small-objects). 
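As a sketch of the size-filter override described above, the rule below uses `ObjectSizeGreaterThan` (in bytes) so that objects larger than 1 byte become eligible to transition. The rule ID, threshold, and day count are illustrative assumptions.

```python
# Hypothetical rule that overrides the 128 KB default by allowing any
# object larger than 1 byte to transition to S3 Glacier Flexible Retrieval.
small_object_rule = {
    "ID": "transition-small-objects",               # hypothetical rule ID
    "Filter": {"ObjectSizeGreaterThan": 1},         # size threshold in bytes
    "Status": "Enabled",
    "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
}
```

`ObjectSizeLessThan` works the same way for a maximum-size bound; the two can be combined inside an `And` filter element.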

**Note**  
In September 2024, Amazon S3 updated the default transition behavior for small objects, as follows:  
**Updated default transition behavior** — Starting September 2024, the default behavior prevents objects smaller than 128 KB from being transitioned to any storage class.
**Previous default transition behavior** — Before September 2024, the default behavior allowed objects smaller than 128 KB to be transitioned only to the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes.
Configurations created before September 2024 retain the previous transition behavior unless you modify them. That is, if you create, edit, or delete rules, the default transition behavior for your configuration changes to the updated behavior. If your use case requires it, you can change the default transition behavior so that objects smaller than 128 KB transition to S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive. To do this, use the optional `x-amz-transition-default-minimum-object-size` header in a [PutBucketLifecycleConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycleConfiguration.html) request.

**Objects must be stored for at least 30 days before transitioning to S3 Standard-IA or S3 One Zone-IA**  
Before you transition objects to S3 Standard-IA or S3 One Zone-IA, you must store them for at least 30 days in Amazon S3. For example, you cannot create a Lifecycle rule to transition objects to the S3 Standard-IA storage class one day after you create them. Amazon S3 doesn't support this transition within the first 30 days because newer objects are often accessed more frequently or deleted sooner than is suitable for S3 Standard-IA or S3 One Zone-IA storage.

Similarly, if you are transitioning noncurrent objects (in versioned buckets), you can transition only objects that are at least 30 days noncurrent to S3 Standard-IA or S3 One Zone-IA storage. For a list of the minimum storage durations for all storage classes, see [Comparing the Amazon S3 storage classes](storage-class-intro.md#sc-compare).

**You are charged for transitioning objects before their minimum storage duration**  
Certain storage classes have a minimum object storage duration. If you transition objects out of these storage classes before the minimum duration, you are charged for the remainder of that duration. For more information on which storage classes have minimum storage durations, see [Comparing the Amazon S3 storage classes](storage-class-intro.md#sc-compare).

You can't create a single Lifecycle rule that transitions objects from one storage class to another before the minimum storage duration period has passed.

 For example, S3 Glacier Instant Retrieval has a minimum storage duration of 90 days. You can’t specify a lifecycle rule that transitions objects to S3 Glacier Instant Retrieval after 4 days, and then transitions objects to S3 Glacier Deep Archive after 20 days. In this case the S3 Glacier Deep Archive transition must occur after at least 94 days.

You can specify two separate rules to accomplish this, but you still pay the minimum storage duration charges. For more information about cost considerations, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).
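The minimum-duration arithmetic in the example above is simple enough to state directly: the earliest allowed second transition is the first transition day plus the minimum storage duration of the intermediate class.

```python
# S3 Glacier Instant Retrieval has a 90-day minimum storage duration.
GLACIER_IR_MINIMUM_DAYS = 90

# First transition (to S3 Glacier Instant Retrieval) at day 4, as in the
# example above.
first_transition_day = 4

# Earliest day a single rule could then transition to S3 Glacier Deep Archive.
earliest_second_transition_day = first_transition_day + GLACIER_IR_MINIMUM_DAYS
print(earliest_second_transition_day)  # prints 94
```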

For more information about creating an S3 Lifecycle configuration, see [Setting an S3 Lifecycle configuration on a bucket](how-to-set-lifecycle-configuration-intro.md).

## Transitioning to the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes (object archival)
<a name="before-deciding-to-archive-objects"></a>

By using an S3 Lifecycle configuration, you can transition objects to the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes for archiving.

Before you archive objects, review the following sections for relevant considerations.

### General considerations
<a name="transition-glacier-general-considerations"></a>

Consider the following general points before you archive objects:
+ Encrypted objects remain encrypted throughout the storage class transition process.
+ Objects that are stored in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes are not available in real time.

  Archived objects are Amazon S3 objects, but before you can access an archived object, you must first restore a temporary copy of it. The restored object copy is available only for the duration that you specify in the restore request. After that, Amazon S3 deletes the temporary copy, and the object remains archived in S3 Glacier Flexible Retrieval. 

  You can restore an object by using the Amazon S3 console or programmatically by using the AWS SDK wrapper libraries or the Amazon S3 REST API in your code. For more information, see [Restoring an archived object](restoring-objects.md).
+ Objects that are stored in the S3 Glacier Flexible Retrieval storage class can only be transitioned to the S3 Glacier Deep Archive storage class.

  You can use an S3 Lifecycle configuration rule to convert the storage class of an object from S3 Glacier Flexible Retrieval to the S3 Glacier Deep Archive storage class only. If you want to change the storage class of an object that is stored in S3 Glacier Flexible Retrieval to a storage class other than S3 Glacier Deep Archive, you must use the restore operation to make a temporary copy of the object first. Then use the copy operation to overwrite the object specifying S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 One Zone-IA, or Reduced Redundancy as the storage class.
+ The transition of objects to the S3 Glacier Deep Archive storage class can go only one way.

  You cannot use an S3 Lifecycle configuration rule to convert the storage class of an object from S3 Glacier Deep Archive to any other storage class. If you want to change the storage class of an archived object to another storage class, you must use the restore operation to make a temporary copy of the object first. Then use the copy operation to overwrite the object specifying S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 One Zone-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, or Reduced Redundancy Storage as the storage class.
**Note**  
The Copy operation for restored objects isn't supported in the Amazon S3 console for objects in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes. For this type of Copy operation, use the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the REST API.

+ Objects that are stored in the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes are visible and available only through Amazon S3. They are not available through the separate Amazon S3 Glacier service.

  These are Amazon S3 objects, and you can access them only by using the Amazon S3 console or the Amazon S3 API. You cannot access the archived objects through the separate Amazon S3 Glacier console or the Amazon S3 Glacier API.

### Cost considerations
<a name="glacier-pricing-considerations"></a>

If you are planning to archive infrequently accessed data for a period of months or years, the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes can reduce your storage costs. However, to ensure that the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage class is appropriate for you, consider the following:
+ **Storage overhead charges** – When you transition objects to the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage class, a fixed amount of storage is added to each object to accommodate metadata for managing the object.
  + For each object archived to S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive, Amazon S3 uses 8 KB of storage for the name of the object and other metadata. Amazon S3 stores this metadata so that you can get a real-time list of your archived objects by using the Amazon S3 API. For more information, see [Get Bucket (List Objects)](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html). You are charged S3 Standard rates for this additional storage.
  +  For each object that is archived to S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive, Amazon S3 adds 32 KB of storage for index and related metadata. This extra data is necessary to identify and restore your object. You are charged S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive rates for this additional storage.

  If you are archiving small objects, consider these storage charges. Also consider aggregating many small objects into a smaller number of large objects to reduce overhead costs.
+ **Number of days you plan to keep objects archived** – S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive are long-term archival solutions. The minimum storage duration period is 90 days for the S3 Glacier Flexible Retrieval storage class and 180 days for S3 Glacier Deep Archive. Deleting data that is archived to S3 Glacier doesn't incur charges if the objects that you delete have been archived for more than the minimum storage duration period. If you delete or overwrite an archived object within the minimum duration period, Amazon S3 charges a prorated early deletion fee. For information about the early deletion fee, see the "How am I charged for deleting objects from Amazon Glacier that are less than 90 days old?" question on the [Amazon S3 FAQ](https://aws.amazon.com/s3/faqs/#Amazon_S3_Glacier). 
+ **S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive transition request charges** – Each object that you transition to the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage class constitutes one transition request. There is a cost for each such request. If you plan to transition a large number of objects, consider the request costs. If you are archiving a mix of objects that includes small objects, especially objects smaller than 128 KB, we recommend using the lifecycle object size filter to exclude small objects from the transition and reduce request costs.
+ **S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive data restore charges** – S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive are designed for long-term archival of data that you access infrequently. For information about data restoration charges, see the "How much does it cost to retrieve data from Amazon Glacier?" question on the [Amazon S3 FAQ](https://aws.amazon.com/s3/faqs/#Amazon_S3_Glacier). For information about how to restore data from S3 Glacier, see [Restoring an archived object](restoring-objects.md). 
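The per-object overhead described above (8 KB billed at S3 Standard rates plus 32 KB billed at archive rates) compounds quickly for large numbers of small objects. A back-of-the-envelope sketch:

```python
# Per-object metadata overhead for objects archived to S3 Glacier
# Flexible Retrieval or S3 Glacier Deep Archive, per the bullets above:
#   8 KB billed at S3 Standard rates, 32 KB billed at archive rates.
KB = 1024
GB = 1024 ** 3

def archive_overhead_gb(object_count: int) -> tuple:
    """Return (GB billed at S3 Standard rates, GB billed at archive rates)
    of pure metadata overhead for object_count archived objects."""
    standard_gb = object_count * 8 * KB / GB
    archive_gb = object_count * 32 * KB / GB
    return standard_gb, archive_gb

# One million archived objects carry roughly 7.6 GB of Standard-rate
# metadata and 30.5 GB of archive-rate index data, before any object data.
std_gb, arch_gb = archive_overhead_gb(1_000_000)
```

This is why the guidance above suggests aggregating many small objects into fewer large objects before archiving.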

**Note**  
S3 Lifecycle transitions objects to S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive asynchronously. There might be a delay between the transition date in the S3 Lifecycle configuration rule and the date of the physical transition. In this case, you are charged at the rate of the storage class that you transitioned to, based on the transition date specified in the rule.

The Amazon S3 product detail page provides pricing information and example calculations for archiving Amazon S3 objects. For more information, see the following topics:
+  "How is my storage charge calculated for Amazon S3 objects archived to Amazon Glacier?" on the [Amazon S3 FAQ](https://aws.amazon.com/s3/faqs/#Amazon_S3_Glacier). 
+  "How am I charged for deleting objects from Amazon Glacier that are less than 90 days old?" on the [Amazon S3 FAQ](https://aws.amazon.com/s3/faqs/#Amazon_S3_Glacier). 
+  "How much does it cost to retrieve data from Amazon Glacier?" on the [Amazon S3 FAQ](https://aws.amazon.com/s3/faqs/#Amazon_S3_Glacier). 
+  [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/) for storage costs for the different storage classes. 

### Restoring archived objects
<a name="restore-glacier-objects-concepts"></a>

Archived objects aren't accessible in real time. You must first initiate a restore request and then wait until a temporary copy of the object is available for the duration that you specify in the request. After you receive a temporary copy of the restored object, the object's storage class remains S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive. (A [HeadObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html) or [GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html) API operation request will return S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive as the storage class.) 

**Note**  
When you restore an archive, you are paying for both the archive (S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive rate) and a copy that you restored temporarily (S3 Standard storage rate). For information about pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). 

You can restore an object copy programmatically or by using the Amazon S3 console. Amazon S3 processes only one restore request at a time per object. For more information, see [Restoring an archived object](restoring-objects.md).

# Expiring objects
<a name="lifecycle-expire-general-considerations"></a>

You can add expiration actions to an S3 Lifecycle configuration to tell Amazon S3 to delete objects at the end of their lifetime. When an object reaches the end of its lifetime based on its lifecycle configuration, Amazon S3 takes an `Expiration` action based on which [S3 Versioning](Versioning.md) state the bucket is in:
+ **Nonversioned bucket** – Amazon S3 queues the object for removal and removes it asynchronously, permanently removing the object. 
+ **Versioning-enabled bucket** – If the current object version is not a delete marker, Amazon S3 adds a delete marker with a unique version ID. This makes the current version noncurrent, and the delete marker the current version. 
+ **Versioning-suspended bucket** – Amazon S3 creates a delete marker with null as the version ID. This delete marker replaces any object version with a null version ID in the version hierarchy, which effectively deletes the object. 

For a versioned bucket (that is, versioning-enabled or versioning-suspended), there are several considerations that guide how Amazon S3 handles the `Expiration` action. For versioning-enabled or versioning-suspended buckets, the following applies:
+ Object expiration applies only to an object's current version (it has no impact on noncurrent object versions).
+ Amazon S3 doesn't take any action if there are one or more object versions and the delete marker is the current version.
+ If the current object version is the only object version and it is also a delete marker (also referred to as an *expired object delete marker*, where all object versions are deleted and only a delete marker remains), Amazon S3 removes the expired object delete marker. You can also use the `Expiration` action to direct Amazon S3 to remove any expired object delete markers. For an example, see [Removing expired object delete markers in a versioning-enabled bucket](lifecycle-configuration-examples.md#lifecycle-config-conceptual-ex7).
+ You can use the `NoncurrentVersionExpiration` action element to direct Amazon S3 to permanently delete noncurrent versions of objects. These deleted objects can't be recovered. You can base this expiration on a certain number of days since the objects became noncurrent. In addition to the number of days, you can also provide a maximum number of noncurrent versions to retain (between 1 and 100). This value specifies how many newer noncurrent versions must exist before Amazon S3 can perform the associated action on a given version. To specify the maximum number of noncurrent versions, you must also provide a `Filter` element. If you don't specify a `Filter` element, Amazon S3 generates an `InvalidRequest` error when you provide a maximum number of noncurrent versions. For more information about using the `NoncurrentVersionExpiration` action element, see [Elements to describe lifecycle actions](intro-lifecycle-rules.md#intro-lifecycle-rules-actions).
+ Amazon S3 doesn't take any action on noncurrent versions of objects that have the S3 Object Lock configuration applied.
+ For objects with a `Pending` or `Failed` replication status, Amazon S3 doesn't take any action on current or noncurrent versions of objects.
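The `NoncurrentVersionExpiration` behavior above can be sketched as a rule that retains at most 3 newer noncurrent versions and expires versions 30 days after they become noncurrent. The rule ID and values are illustrative; note the `Filter` element, which is required whenever `NewerNoncurrentVersions` is set.

```python
# Hypothetical rule: permanently delete noncurrent versions 30 days after
# they become noncurrent, but keep up to 3 newer noncurrent versions.
noncurrent_expiration_rule = {
    "ID": "expire-noncurrent-versions",   # hypothetical rule ID
    "Filter": {"Prefix": ""},             # required with NewerNoncurrentVersions,
                                          # even when it matches everything
    "Status": "Enabled",
    "NoncurrentVersionExpiration": {
        "NoncurrentDays": 30,
        "NewerNoncurrentVersions": 3,
    },
}
```

Omitting the `Filter` element while setting `NewerNoncurrentVersions` causes Amazon S3 to reject the configuration with an `InvalidRequest` error, as described above.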

For more information, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).

**Important**  
When you have multiple rules in an S3 Lifecycle configuration, an object can become eligible for multiple S3 Lifecycle actions on the same day. In such cases, Amazon S3 follows these general rules:  
+ Permanent deletion takes precedence over transition.
+ Transition takes precedence over creation of [delete markers](DeleteMarker.md).
+ When an object is eligible for both an S3 Glacier Flexible Retrieval transition and an S3 Standard-IA (or S3 One Zone-IA) transition, Amazon S3 chooses the S3 Glacier Flexible Retrieval transition.

For examples, see [Examples of overlapping filters and conflicting lifecycle actions](lifecycle-conflicts.md#lifecycle-config-conceptual-ex5). 

**Existing and new objects**  
When you add a Lifecycle configuration to a bucket, the configuration rules apply to both existing objects and objects that you add later. For example, if you add a Lifecycle configuration rule today with an expiration action that causes objects with a specific prefix to expire 30 days after creation, Amazon S3 will queue for removal any existing objects that are more than 30 days old and that have the specified prefix.

**Important**  
You can't use a bucket policy to prevent deletions or transitions by an S3 Lifecycle rule. For example, even if your bucket policy denies all actions for all principals, your S3 Lifecycle configuration still functions as normal.

## How to find when objects will expire
<a name="lifecycle-expire-when"></a>

To find when the current version of an object is scheduled to expire, use the [HeadObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html) or [GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html) API operation. For objects that are subject to a lifecycle expiration rule, these API operations return the `x-amz-expiration` response header, which provides the date and time at which the current version of the object is scheduled to expire, along with the ID of the rule that applies. 

**Note**  
There may be a delay between the expiration date and the date at which Amazon S3 removes an object. You are not charged for expiration or the storage time associated with an object that has expired. 
Before updating, disabling, or deleting Lifecycle rules, use the `LIST` API operations (such as [ListObjectsV2](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html), [ListObjectVersions](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html), and [ListMultipartUploads](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html)) or [Cataloging and analyzing your data with S3 Inventory](storage-inventory.md) to verify that Amazon S3 has transitioned and expired eligible objects based on your use cases.

## Minimum storage duration charge
<a name="lifecycle-expire-minimum-storage"></a>

If you create an S3 Lifecycle expiration rule that causes objects that have been in S3 Standard-IA or S3 One Zone-IA storage for less than 30 days to expire, you are charged for 30 days. If you create a Lifecycle expiration rule that causes objects that have been in S3 Glacier Flexible Retrieval storage for less than 90 days to expire, you are charged for 90 days. If you create a Lifecycle expiration rule that causes objects that have been in S3 Glacier Deep Archive storage for less than 180 days to expire, you are charged for 180 days.
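The minimum-charge rule above amounts to billing for the greater of the actual storage time and the class minimum. A small sketch, using the API storage class names:

```python
# Minimum storage durations (in days) for the classes named above.
# "GLACIER" is the API name for S3 Glacier Flexible Retrieval.
MINIMUM_DAYS = {
    "STANDARD_IA": 30,
    "ONEZONE_IA": 30,
    "GLACIER": 90,
    "DEEP_ARCHIVE": 180,
}

def billed_days(storage_class: str, days_stored: int) -> int:
    """Days of storage you are charged for when an object expires,
    accounting for the class's minimum storage duration."""
    return max(days_stored, MINIMUM_DAYS.get(storage_class, 0))
```

For example, expiring an object after 10 days in S3 Glacier Flexible Retrieval still bills 90 days of storage.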

For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

# Setting an S3 Lifecycle configuration on a bucket
<a name="how-to-set-lifecycle-configuration-intro"></a>

You can set an Amazon S3 Lifecycle configuration on a bucket by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the Amazon S3 REST API. For information about S3 Lifecycle configuration, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md).

**Note**  
To view or edit the lifecycle configuration for a directory bucket, use the AWS CLI, AWS SDKs, or the Amazon S3 REST API. For more information, see [Working with S3 Lifecycle for directory buckets](directory-buckets-objects-lifecycle.md).

In your S3 Lifecycle configuration, you use *lifecycle rules* to define actions that you want Amazon S3 to take during an object's lifetime. For example, you can define rules to transition objects to another storage class, archive objects, or expire (delete) objects after a specified period of time.

## S3 Lifecycle considerations
<a name="lifecycle-considerations"></a>

Before you set a lifecycle configuration, note the following:

**Lifecycle configuration propagation delay**  
When you add an S3 Lifecycle configuration to a bucket, there is usually some lag before a new or updated Lifecycle configuration is fully propagated to all the Amazon S3 systems. Expect a delay of a few minutes before the configuration fully takes effect. This delay can also occur when you delete an S3 Lifecycle configuration.

**Transition or expiration delay**  
There's a delay between when a lifecycle rule is satisfied and when the action for the rule is completed. For example, suppose that a set of objects is expired by a lifecycle rule on January 1. Even though the expiration rule has been satisfied on January 1, Amazon S3 might not actually delete these objects until days or even weeks later. This delay occurs because S3 Lifecycle queues objects for transitions or expirations asynchronously. When you add or modify a Lifecycle rule, S3 Lifecycle may begin processing eligible objects immediately or with some delay. When S3 Lifecycle creates a delete marker or transitions an object, the timestamp is set to midnight UTC on the day the action occurred, regardless of the actual time the action was taken. However, changes in billing are usually applied when the lifecycle rule is satisfied, even if the action isn't complete. For more information, see [Changes in billing](#lifecycle-billing). To monitor the effect of updates made by active lifecycle rules, see [How do I monitor the actions taken by my lifecycle rules?](troubleshoot-lifecycle.md#troubleshoot-lifecycle-2)

**Note**  
When a lifecycle rule is created or modified, objects that already meet the eligibility criteria may be processed immediately.

**Updating, disabling, or deleting lifecycle rules**  
When you disable or delete lifecycle rules, Amazon S3 stops scheduling new objects for deletion or transition after a small delay. Any objects that were already scheduled are unscheduled and are not deleted or transitioned.

**Note**  
Before updating, disabling, or deleting lifecycle rules, use the `LIST` API operations (such as [ListObjectsV2](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html), [ListObjectVersions](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html), and [ListMultipartUploads](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html)) or [Cataloging and analyzing your data with S3 Inventory](storage-inventory.md) to verify that Amazon S3 has transitioned and expired eligible objects based on your use cases. If you're experiencing any issues with updating, disabling, or deleting lifecycle rules, see [Troubleshooting Amazon S3 Lifecycle issues](troubleshoot-lifecycle.md).

**Existing and new objects**  
When you add a Lifecycle configuration to a bucket, the configuration rules apply to both existing objects and objects that you add later. For example, if you add a Lifecycle configuration rule today with an expiration action that causes objects with a specific prefix to expire 30 days after creation, Amazon S3 will queue for removal any existing objects that are more than 30 days old and that have the specified prefix.
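As a sketch of how such a rule selects existing objects, assuming a hypothetical in-memory list of objects (the keys and ages are made up for illustration):

```python
from datetime import datetime, timedelta, timezone

def eligible_for_expiration(objects, prefix, days):
    """Return keys of objects older than `days` that match `prefix`."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return [key for key, created in objects
            if key.startswith(prefix) and created < cutoff]

now = datetime.now(timezone.utc)
objects = [
    ("logs/2023/app.log", now - timedelta(days=45)),  # old enough, matching prefix
    ("logs/2024/app.log", now - timedelta(days=5)),   # too new
    ("data/report.csv",   now - timedelta(days=90)),  # wrong prefix
]
print(eligible_for_expiration(objects, "logs/", 30))  # ['logs/2023/app.log']
```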

**Monitoring the effect of lifecycle rules**  
To monitor the effect of updates made by active lifecycle rules, see [How do I monitor the actions taken by my lifecycle rules?](troubleshoot-lifecycle.md#troubleshoot-lifecycle-2)

**Changes in billing**  
There might be a lag between when the Lifecycle configuration rules are satisfied and when the action triggered by satisfying the rule is taken. However, changes in billing happen as soon as the Lifecycle configuration rule is satisfied, even if the action isn't yet taken.

For example, after the object expiration time, you aren't charged for storage, even if the object isn't deleted immediately. Likewise, as soon as the object transition time elapses, you're charged S3 Glacier Flexible Retrieval storage rates, even if the object isn't immediately transitioned to the S3 Glacier Flexible Retrieval storage class. 

However, lifecycle transitions to the S3 Intelligent-Tiering storage class are the exception. Changes in billing don't happen until after the object has transitioned into the S3 Intelligent-Tiering storage class. 
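One way to picture the billing timing, including the S3 Intelligent-Tiering exception, is the following sketch (the function name and class labels are illustrative):

```python
from datetime import date

def billed_storage_class(original, target, rule_satisfied, transition_done, today):
    """Storage class billed on `today` for an object with a satisfied transition rule.

    Billing normally switches as soon as the rule is satisfied, even if the
    transition hasn't completed yet. S3 Intelligent-Tiering is the exception:
    billing changes only after the transition actually completes.
    """
    if target == "INTELLIGENT_TIERING":
        if transition_done is not None and today >= transition_done:
            return target
        return original
    return target if today >= rule_satisfied else original

# Rule satisfied Jan 1; the actual transition ran Jan 5. On Jan 3:
print(billed_storage_class("STANDARD", "GLACIER",
                           date(2024, 1, 1), date(2024, 1, 5), date(2024, 1, 3)))
# GLACIER (billed at the archive rate before the transition completes)
print(billed_storage_class("STANDARD", "INTELLIGENT_TIERING",
                           date(2024, 1, 1), date(2024, 1, 5), date(2024, 1, 3)))
# STANDARD (Intelligent-Tiering billing waits for the transition)
```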

**Multiple or conflicting rules**  
When you have multiple rules in an S3 Lifecycle configuration, an object can become eligible for multiple S3 Lifecycle actions on the same day. In such cases, Amazon S3 follows these general rules:
+ Permanent deletion takes precedence over transition.
+ Transition takes precedence over creation of [delete markers](DeleteMarker.md).
+ When an object is eligible for both an S3 Glacier Flexible Retrieval and an S3 Standard-IA (or an S3 One Zone-IA) transition, Amazon S3 chooses the S3 Glacier Flexible Retrieval transition.

 For examples, see [Examples of overlapping filters and conflicting lifecycle actions](lifecycle-conflicts.md#lifecycle-config-conceptual-ex5). 
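The precedence rules above can be sketched as a ranking function. The action labels are illustrative names, not API values:

```python
# Lower rank wins when multiple actions are eligible on the same day:
# permanent deletion > transitions (Glacier Flexible Retrieval first) > delete markers.
PRECEDENCE = {
    "permanent_deletion": 0,
    "transition_glacier_flexible_retrieval": 1,
    "transition_standard_ia": 2,
    "transition_onezone_ia": 2,
    "create_delete_marker": 3,
}

def resolve(actions):
    """Pick the action Amazon S3 applies when several are eligible on the same day."""
    return min(actions, key=PRECEDENCE.__getitem__)

print(resolve(["transition_standard_ia", "transition_glacier_flexible_retrieval"]))
# transition_glacier_flexible_retrieval
print(resolve(["create_delete_marker", "permanent_deletion"]))
# permanent_deletion
```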

## How to set an S3 Lifecycle configuration
<a name="how-to-set-lifecycle-configuration"></a>

You can set an Amazon S3 Lifecycle configuration on a general purpose bucket by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the Amazon S3 REST API. 

For information about AWS CloudFormation templates and examples, see [Working with AWS CloudFormation templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-guide.html) and [AWS::S3::Bucket examples](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-s3-bucket.html#aws-resource-s3-bucket--examples) in the *AWS CloudFormation User Guide*.

### Using the S3 console
<a name="create-lifecycle"></a>

You can define lifecycle rules for all objects or a subset of objects in a bucket by using a shared prefix (object names that begin with a common string) or a tag. In your lifecycle rule, you can define actions specific to current and noncurrent object versions. For more information, see the following:
+ [Managing the lifecycle of objects](object-lifecycle-mgmt.md)
+ [Retaining multiple versions of objects with S3 Versioning](Versioning.md)

**To create a lifecycle rule**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to create a lifecycle rule for.

1. Choose the **Management** tab, and choose **Create lifecycle rule**.

1. In **Lifecycle rule name**, enter a name for your rule. 

   The name must be unique within the bucket. 

1. Choose the scope of the lifecycle rule: 
   + To apply this lifecycle rule to *all objects with a specific prefix or tag*, choose **Limit the scope to specific prefixes or tags**. 
     + To limit the scope by prefix, in **Prefix**, enter the prefix. 
     + To limit the scope by tag, choose **Add tag**, and enter the tag key and value.

     For more information about object name prefixes, see [Naming Amazon S3 objects](object-keys.md). For more information about object tags, see [Categorizing your objects using tags](object-tagging.md). 
   + To apply this lifecycle rule to *all objects in the bucket*, choose **This rule applies to *all* objects in the bucket**, and then choose **I acknowledge that this rule applies to all objects in the bucket**.

1. To filter a rule by object size, you can select **Specify minimum object size**, **Specify maximum object size**, or both options.
   + When you're specifying a value for **Minimum object size** or **Maximum object size**, the value must be larger than 0 bytes and up to 50 TB. You can specify this value in bytes, KB, MB, or GB.
   + When you're specifying both values, the maximum object size must be larger than the minimum object size.
**Note**  
The **Minimum object size** and **Maximum object size** filters exclude the specified values. For example, if you set a filter to expire objects that have a **Minimum object size** of 128 KB, objects that are exactly 128 KB don't expire. Instead, the rule applies only to objects that are greater than 128 KB in size.

1. Under **Lifecycle rule actions**, choose the actions that you want your lifecycle rule to perform:
   + Transition *current* versions of objects between storage classes
   + Transition *previous* versions of objects between storage classes
   + Expire *current* versions of objects
**Note**  
For buckets that don't have [S3 Versioning](Versioning.md) enabled, expiring current versions causes Amazon S3 to permanently delete the objects. For more information, see [Lifecycle actions and bucket versioning state](intro-lifecycle-rules.md#lifecycle-actions-bucket-versioning-state).
   + Permanently delete *previous* versions of objects
   + Delete expired delete markers or incomplete multipart uploads 

   Depending on the actions that you choose, different options appear.

1. To transition *current* versions of objects between storage classes, under **Transition current versions of objects between storage classes**, do the following:

   1. In **Storage class transitions**, choose the storage class to transition to. For a list of possible transitions, see [Supported lifecycle transitions](lifecycle-transition-general-considerations.md#supported-lifecycle-transitions). You can choose from the following storage classes:
      + S3 Standard-IA
      + S3 Intelligent-Tiering
      + S3 One Zone-IA
      + S3 Glacier Instant Retrieval
      + S3 Glacier Flexible Retrieval
      + S3 Glacier Deep Archive

   1. In **Days after object creation**, enter the number of days after creation to transition the object.

   For more information about storage classes, see [Understanding and managing Amazon S3 storage classes](storage-class-intro.md). You can define transitions for current or previous object versions or for both current and previous versions. Versioning enables you to keep multiple versions of an object in one bucket. For more information about versioning, see [Using the S3 console](manage-versioning-examples.md#enable-versioning).
**Important**  
When you choose the S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, or S3 Glacier Deep Archive storage class, your objects remain in Amazon S3. You can't access them directly through the separate Amazon S3 Glacier service. For more information, see [Transitioning objects using Amazon S3 Lifecycle](lifecycle-transition-general-considerations.md). 

1. To transition *noncurrent* versions of objects between storage classes, under **Transition noncurrent versions of objects between storage classes**, do the following:

   1. In **Storage class transitions**, choose the storage class to transition to. For a list of possible transitions, see [Supported lifecycle transitions](lifecycle-transition-general-considerations.md#supported-lifecycle-transitions). You can choose from the following storage classes:
      + S3 Standard-IA
      + S3 Intelligent-Tiering
      + S3 One Zone-IA
      + S3 Glacier Instant Retrieval
      + S3 Glacier Flexible Retrieval
      + S3 Glacier Deep Archive

   1. In **Days after object becomes noncurrent**, enter the number of days after the object becomes noncurrent to transition the object.

1. To expire *current* versions of objects, under **Expire current versions of objects**, in **Number of days after object creation**, enter the number of days.
**Important**  
In a nonversioned bucket, the expiration action results in Amazon S3 permanently removing the object. For more information about lifecycle actions, see [Elements to describe lifecycle actions](intro-lifecycle-rules.md#intro-lifecycle-rules-actions).

1. To permanently delete previous versions of objects, under **Permanently delete noncurrent versions of objects**, in **Days after objects become noncurrent**, enter the number of days. You can optionally specify the number of newer versions to retain by entering a value under **Number of newer versions to retain**.

1. Under **Delete expired delete markers or incomplete multipart uploads**, choose **Delete expired object delete markers** and **Delete incomplete multipart uploads**. Then, enter the number of days after multipart upload initiation when you want Amazon S3 to stop and clean up incomplete multipart uploads.

   For more information about multipart uploads, see [Uploading and copying objects using multipart upload in Amazon S3](mpuoverview.md).

1. Choose **Create rule**.

   If the rule does not contain any errors, Amazon S3 enables it, and you can see it on the **Management** tab under **Lifecycle rules**.
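The object size filters described in the procedure above use exclusive bounds, so the strict inequalities in this sketch mirror the note that objects exactly at a boundary value are excluded (the function name is illustrative):

```python
def matches_size_filter(size_bytes, minimum=None, maximum=None):
    """True if an object falls inside the (exclusive) size bounds of a rule."""
    if minimum is not None and size_bytes <= minimum:
        return False  # objects at exactly the minimum size are excluded
    if maximum is not None and size_bytes >= maximum:
        return False  # objects at exactly the maximum size are excluded
    return True

KB = 1024
print(matches_size_filter(128 * KB, minimum=128 * KB))      # False: exactly 128 KB is excluded
print(matches_size_filter(128 * KB + 1, minimum=128 * KB))  # True: greater than 128 KB
```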

### Using the AWS CLI
<a name="set-lifecycle-cli"></a>

You can use the following AWS CLI commands to manage S3 Lifecycle configurations:
+ `put-bucket-lifecycle-configuration`
+ `get-bucket-lifecycle-configuration`
+ `delete-bucket-lifecycle`

For instructions on setting up the AWS CLI, see [Developing with Amazon S3 using the AWS CLI](https://docs.aws.amazon.com/AmazonS3/latest/API/setup-aws-cli.html) in the *Amazon S3 API Reference*.

The Amazon S3 Lifecycle configuration is an XML file. However, when you use the AWS CLI, you can't specify the XML format; you must specify the JSON format instead. The following are example XML lifecycle configurations and the equivalent JSON configurations that you can specify in an AWS CLI command.

Consider the following example S3 Lifecycle configuration.

**Example 1**  

```
<LifecycleConfiguration>
    <Rule>
        <ID>ExampleRule</ID>
        <Filter>
           <Prefix>documents/</Prefix>
        </Filter>
        <Status>Enabled</Status>
        <Transition>        
           <Days>365</Days>        
           <StorageClass>GLACIER</StorageClass>
        </Transition>    
        <Expiration>
             <Days>3650</Days>
        </Expiration>
    </Rule>
</LifecycleConfiguration>
```

```
{
    "Rules": [
        {
            "Filter": {
                "Prefix": "documents/"
            },
            "Status": "Enabled",
            "Transitions": [
                {
                    "Days": 365,
                    "StorageClass": "GLACIER"
                }
            ],
            "Expiration": {
                "Days": 3650
            },
            "ID": "ExampleRule"
        }
    ]
}
```

**Example 2**  

```
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Rule>
        <ID>id-1</ID>
        <Expiration>
            <Days>1</Days>
        </Expiration>
        <Filter>
            <And>
                <Prefix>myprefix</Prefix>
                <Tag>
                    <Key>mytagkey1</Key>
                    <Value>mytagvalue1</Value>
                </Tag>
                <Tag>
                    <Key>mytagkey2</Key>
                    <Value>mytagvalue2</Value>
                </Tag>
            </And>
        </Filter>
        <Status>Enabled</Status>    
    </Rule>
</LifecycleConfiguration>
```

```
{
    "Rules": [
        {
            "ID": "id-1",
            "Filter": {
                "And": {
                    "Prefix": "myprefix", 
                    "Tags": [
                        {
                            "Value": "mytagvalue1", 
                            "Key": "mytagkey1"
                        }, 
                        {
                            "Value": "mytagvalue2", 
                            "Key": "mytagkey2"
                        }
                    ]
                }
            }, 
            "Status": "Enabled", 
            "Expiration": {
                "Days": 1
            }
        }
    ]
}
```

You can test the `put-bucket-lifecycle-configuration` command as follows.

**To test the configuration**

1. Save the JSON Lifecycle configuration in a file (for example, *`lifecycle.json`*). 

1. Run the following AWS CLI command to set the Lifecycle configuration on your bucket. Replace the `user input placeholders` with your own information.

   ```
   $ aws s3api put-bucket-lifecycle-configuration  \
   --bucket amzn-s3-demo-bucket  \
   --lifecycle-configuration file://lifecycle.json
   ```

1. To verify, retrieve the S3 Lifecycle configuration by using the `get-bucket-lifecycle-configuration` AWS CLI command as follows:

   ```
   $ aws s3api get-bucket-lifecycle-configuration  \
   --bucket amzn-s3-demo-bucket
   ```

1. To delete the S3 Lifecycle configuration, use the `delete-bucket-lifecycle` AWS CLI command as follows:

   ```
   $ aws s3api delete-bucket-lifecycle \
   --bucket amzn-s3-demo-bucket
   ```

### Using the AWS SDKs
<a name="manage-lifecycle-using-sdk"></a>

------
#### [ Java ]

You can use the AWS SDK for Java to manage the S3 Lifecycle configuration of a bucket. For more information about managing S3 Lifecycle configuration, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md).

**Note**  
When you add an S3 Lifecycle configuration to a bucket, Amazon S3 replaces the bucket's current Lifecycle configuration, if there is one. To update a configuration, you retrieve it, make the desired changes, and then add the revised configuration to the bucket.

To manage lifecycle configuration using the AWS SDK for Java, you can:
+ Add a Lifecycle configuration to a bucket.
+ Retrieve the Lifecycle configuration and update it by adding another rule.
+ Add the modified Lifecycle configuration to the bucket. Amazon S3 replaces the existing configuration.
+ Retrieve the configuration again and verify that it has the right number of rules by printing the number of rules.
+ Delete the Lifecycle configuration and verify that it has been deleted by attempting to retrieve it again.

For examples of how to set lifecycle configuration on a bucket with the AWS SDK for Java, see [Set lifecycle configuration on a bucket](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_PutBucketLifecycleConfiguration_section.html) in the *Amazon S3 API Reference*.

------
#### [ .NET ]

You can use the AWS SDK for .NET to manage the S3 Lifecycle configuration on a bucket. For more information about managing Lifecycle configuration, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md). 

**Note**  
When you add a Lifecycle configuration, Amazon S3 replaces the existing configuration on the specified bucket. To update a configuration, you must first retrieve the Lifecycle configuration, make the changes, and then add the revised Lifecycle configuration to the bucket.

The following example shows how to use the AWS SDK for .NET to add, update, and delete a bucket's Lifecycle configuration. The code example does the following:
+ Adds a Lifecycle configuration to a bucket. 
+ Retrieves the Lifecycle configuration and updates it by adding another rule. 
+ Adds the modified Lifecycle configuration to the bucket. Amazon S3 replaces the existing Lifecycle configuration.
+ Retrieves the configuration again and verifies it by printing the number of rules in the configuration.
+ Deletes the Lifecycle configuration and verifies the deletion.

For information about setting up and running the code examples, see [Getting Started with the AWS SDK for .NET](https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config.html) in the *AWS SDK for .NET Developer Guide*. 

```
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class LifecycleTest
    {
        private const string bucketName = "*** bucket name ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 client;
        public static void Main()
        {
            client = new AmazonS3Client(bucketRegion);
            AddUpdateDeleteLifecycleConfigAsync().Wait();
        }

        private static async Task AddUpdateDeleteLifecycleConfigAsync()
        {
            try
            {
                var lifeCycleConfiguration = new LifecycleConfiguration()
                {
                    Rules = new List<LifecycleRule>
                        {
                            new LifecycleRule
                            {
                                 Id = "Archive immediately rule",
                                 Filter = new LifecycleFilter()
                                 {
                                     LifecycleFilterPredicate = new LifecyclePrefixPredicate()
                                     {
                                         Prefix = "glacierobjects/"
                                     }
                                 },
                                 Status = LifecycleRuleStatus.Enabled,
                                 Transitions = new List<LifecycleTransition>
                                 {
                                      new LifecycleTransition
                                      {
                                           Days = 0,
                                           StorageClass = S3StorageClass.Glacier
                                      }
                                  },
                            },
                            new LifecycleRule
                            {
                                 Id = "Archive and then delete rule",
                                  Filter = new LifecycleFilter()
                                 {
                                     LifecycleFilterPredicate = new LifecyclePrefixPredicate()
                                     {
                                         Prefix = "projectdocs/"
                                     }
                                 },
                                 Status = LifecycleRuleStatus.Enabled,
                                 Transitions = new List<LifecycleTransition>
                                 {
                                      new LifecycleTransition
                                      {
                                           Days = 30,
                                           StorageClass = S3StorageClass.StandardInfrequentAccess
                                      },
                                      new LifecycleTransition
                                      {
                                        Days = 365,
                                        StorageClass = S3StorageClass.Glacier
                                      }
                                 },
                                 Expiration = new LifecycleRuleExpiration()
                                 {
                                       Days = 3650
                                 }
                            }
                        }
                };

                // Add the configuration to the bucket. 
                await AddExampleLifecycleConfigAsync(client, lifeCycleConfiguration);

                // Retrieve an existing configuration. 
                lifeCycleConfiguration = await RetrieveLifecycleConfigAsync(client);

                // Add a new rule.
                lifeCycleConfiguration.Rules.Add(new LifecycleRule
                {
                    Id = "NewRule",
                    Filter = new LifecycleFilter()
                    {
                        LifecycleFilterPredicate = new LifecyclePrefixPredicate()
                        {
                            Prefix = "YearlyDocuments/"
                        }
                    },
                    Expiration = new LifecycleRuleExpiration()
                    {
                        Days = 3650
                    }
                });

                // Add the configuration to the bucket. 
                await AddExampleLifecycleConfigAsync(client, lifeCycleConfiguration);

                // Verify that there are now three rules.
                lifeCycleConfiguration = await RetrieveLifecycleConfigAsync(client);
                Console.WriteLine("Expected # of rules = 3; found: {0}", lifeCycleConfiguration.Rules.Count);

                // Delete the configuration.
                await RemoveLifecycleConfigAsync(client);

                // Retrieve a nonexistent configuration.
                lifeCycleConfiguration = await RetrieveLifecycleConfigAsync(client);

            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered ***. Message:'{0}' when writing an object", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
            }
        }

        static async Task AddExampleLifecycleConfigAsync(IAmazonS3 client, LifecycleConfiguration configuration)
        {

            PutLifecycleConfigurationRequest request = new PutLifecycleConfigurationRequest
            {
                BucketName = bucketName,
                Configuration = configuration
            };
            var response = await client.PutLifecycleConfigurationAsync(request);
        }

        static async Task<LifecycleConfiguration> RetrieveLifecycleConfigAsync(IAmazonS3 client)
        {
            GetLifecycleConfigurationRequest request = new GetLifecycleConfigurationRequest
            {
                BucketName = bucketName
            };
            var response = await client.GetLifecycleConfigurationAsync(request);
            var configuration = response.Configuration;
            return configuration;
        }

        static async Task RemoveLifecycleConfigAsync(IAmazonS3 client)
        {
            DeleteLifecycleConfigurationRequest request = new DeleteLifecycleConfigurationRequest
            {
                BucketName = bucketName
            };
            await client.DeleteLifecycleConfigurationAsync(request);
        }
    }
}
```

------
#### [ Ruby ]

You can use the AWS SDK for Ruby to manage an S3 Lifecycle configuration on a bucket by using the [Aws::S3::BucketLifecycle](https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/BucketLifecycle.html) class. For more information about managing S3 Lifecycle configuration, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md). 

------

### Using the REST API
<a name="manage-lifecycle-using-rest"></a>

The following topics in the *Amazon Simple Storage Service API Reference* describe the REST API operations related to S3 Lifecycle configuration: 
+ [PutBucketLifecycleConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycleConfiguration.html)
+ [GetBucketLifecycleConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLifecycleConfiguration.html)
+ [DeleteBucketLifecycle](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketLifecycle.html)

## Troubleshooting S3 Lifecycle
<a name="lifecycle-troubleshoot"></a>

For common issues that might occur when working with S3 Lifecycle, see [Troubleshooting Amazon S3 Lifecycle issues](troubleshoot-lifecycle.md).

# How S3 Lifecycle interacts with other bucket configurations
<a name="lifecycle-and-other-bucket-config"></a>

In addition to S3 Lifecycle configurations, you can associate other configurations with your bucket. This section explains how S3 Lifecycle configuration relates to other bucket configurations.

## S3 Lifecycle and S3 Versioning
<a name="lifecycle-versioning-support-intro"></a>

You can add S3 Lifecycle configurations to unversioned buckets and versioning-enabled buckets. For more information, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md). 

A versioning-enabled bucket maintains one current object version, and zero or more noncurrent object versions. You can define separate lifecycle rules for current and noncurrent object versions.

For more information, see [Lifecycle configuration elements](intro-lifecycle-rules.md).

**Important**  
When you have multiple rules in an S3 Lifecycle configuration, an object can become eligible for multiple S3 Lifecycle actions on the same day. In such cases, Amazon S3 follows these general rules:  
Permanent deletion takes precedence over transition.
Transition takes precedence over creation of [delete markers](DeleteMarker.md).
When an object is eligible for both an S3 Glacier Flexible Retrieval and an S3 Standard-IA (or S3 One Zone-IA) transition, Amazon S3 chooses the S3 Glacier Flexible Retrieval transition.
 For examples, see [Examples of overlapping filters and conflicting lifecycle actions](lifecycle-conflicts.md#lifecycle-config-conceptual-ex5). 

## S3 Lifecycle and replication
<a name="lifecycle-and-replication"></a>

When you have both replication and S3 Lifecycle enabled on a bucket, S3 Lifecycle blocks expiration and transition actions on objects with a `PENDING` or `FAILED` replication status. This behavior ensures that Lifecycle doesn't act on objects until they have successfully replicated to their destination bucket.

Objects transition to a `FAILED` replication state for issues such as missing replication role permissions, AWS Key Management Service (AWS KMS) permissions, or bucket permissions. For more information, see [Troubleshooting replication](replication-troubleshoot.md).

Objects with `FAILED` replication status continue to incur storage costs past their Lifecycle expiration or transition eligibility date until the replication issue is resolved. After you fix the underlying replication configuration or IAM permissions, new objects replicate automatically. However, objects that already have `FAILED` replication status aren't retried automatically. You must use S3 Batch Replication to replicate them, or delete them by using S3 Batch Operations with AWS Lambda if they're no longer needed. After objects successfully replicate (or are deleted), Lifecycle resumes processing them according to your configured rules.

To identify objects with `FAILED` replication status, you can use the Amazon CloudWatch `OperationFailedReplication` metric to monitor failure counts and trends at the bucket level, or use Amazon S3 Inventory reports, the Amazon S3 API (`HeadObject` or `GetObject`), or Amazon S3 Event Notifications for object-level details.
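The gating behavior can be sketched as a simple predicate. The `PENDING` and `FAILED` status values come from the text above; the function name and the `COMPLETED` example status are illustrative:

```python
def lifecycle_can_act(replication_status):
    """Lifecycle expiration and transition are blocked while replication
    is PENDING or FAILED; other statuses are eligible for processing."""
    return replication_status not in ("PENDING", "FAILED")

print(lifecycle_can_act("PENDING"))    # False: waiting on replication
print(lifecycle_can_act("FAILED"))     # False: blocked until the failure is fixed
print(lifecycle_can_act("COMPLETED"))  # True (illustrative status for a replicated object)
```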

## S3 Lifecycle configuration on MFA-enabled buckets
<a name="lifecycle-general-considerations-mfa-enabled-bucket"></a>

S3 Lifecycle configuration isn't supported on buckets that are configured for multi-factor authentication (MFA) delete. For more information, see [Configuring MFA delete](MultiFactorAuthenticationDelete.md).

## S3 Lifecycle and logging
<a name="lifecycle-general-considerations-logging"></a>

Amazon S3 Lifecycle actions aren't captured by AWS CloudTrail object-level logging. CloudTrail captures API requests made to external Amazon S3 endpoints, whereas S3 Lifecycle actions are performed by using internal Amazon S3 endpoints.

You can enable Amazon S3 server access logs in an S3 bucket to capture S3 Lifecycle-related actions, such as object transitions to another storage class and object expirations that result in permanent deletion or logical deletion. For more information, see [Logging requests with server access logging](ServerLogs.md).

If you have logging enabled on your bucket, Amazon S3 server access logs report the results of the following operations.


| Operation log | Description | 
| --- | --- | 
|  `S3.EXPIRE.OBJECT`  |  Amazon S3 permanently deletes the object because of the lifecycle `Expiration` action.  | 
|  `S3.CREATE.DELETEMARKER`  |  Amazon S3 logically deletes the current version by adding a delete marker in a versioning-enabled bucket.  | 
|  `S3.TRANSITION_SIA.OBJECT`  |  Amazon S3 transitions the object to the S3 Standard-IA storage class.  | 
|  `S3.TRANSITION_ZIA.OBJECT`  |  Amazon S3 transitions the object to the S3 One Zone-IA storage class.  | 
|  `S3.TRANSITION_INT.OBJECT`  |  Amazon S3 transitions the object to the S3 Intelligent-Tiering storage class.  | 
|  `S3.TRANSITION_GIR.OBJECT`  |  Amazon S3 initiates the transition of the object to the S3 Glacier Instant Retrieval storage class.  | 
|  `S3.TRANSITION.OBJECT`  |  Amazon S3 initiates the transition of the object to the S3 Glacier Flexible Retrieval storage class.  | 
|  `S3.TRANSITION_GDA.OBJECT`  |  Amazon S3 initiates the transition of the object to the S3 Glacier Deep Archive storage class.  | 
|  `S3.DELETE.UPLOAD`  |  Amazon S3 aborts an incomplete multipart upload.  | 

**Note**  
Amazon S3 server access log records are delivered on a best-effort basis and can't be used for a complete accounting of all Amazon S3 requests. 
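
As a rough sketch (not a full log parser), the following Python snippet filters server access log lines down to the Lifecycle operation codes listed in the table above. The sample lines are trimmed and invented for illustration; real log records contain many more fields.

```
# Lifecycle operation codes from the table above.
LIFECYCLE_OPS = {
    "S3.EXPIRE.OBJECT", "S3.CREATE.DELETEMARKER",
    "S3.TRANSITION_SIA.OBJECT", "S3.TRANSITION_ZIA.OBJECT",
    "S3.TRANSITION_INT.OBJECT", "S3.TRANSITION_GIR.OBJECT",
    "S3.TRANSITION.OBJECT", "S3.TRANSITION_GDA.OBJECT",
    "S3.DELETE.UPLOAD",
}

def lifecycle_entries(log_lines):
    """Keep only the lines whose operation field is a Lifecycle operation."""
    return [line for line in log_lines
            if LIFECYCLE_OPS.intersection(line.split())]

sample = [
    "... S3.EXPIRE.OBJECT logs/old.txt ...",   # invented, trimmed log line
    "... REST.GET.OBJECT logs/new.txt ...",
]
print(lifecycle_entries(sample))  # keeps only the S3.EXPIRE.OBJECT line
```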

# Configuring S3 Lifecycle event notifications
<a name="lifecycle-configure-notification"></a>

To be notified when Amazon S3 deletes an object or transitions it to another Amazon S3 storage class as the result of an S3 Lifecycle rule, you can set up an Amazon S3 event notification.

You can receive notifications for the following S3 Lifecycle events:
+ **Transition events** – By using the `s3:LifecycleTransition` event type, you can receive notification when an object is transitioned from one Amazon S3 storage class to another by an S3 Lifecycle configuration.
+ **Expiration (delete) events** – By using the `s3:LifecycleExpiration:*` event types, you can receive notifications whenever Amazon S3 deletes an object based on your S3 Lifecycle configuration.

  There are two expiration event types: 
  + The `s3:LifecycleExpiration:Delete` event type notifies you when an object in an unversioned bucket is deleted. `s3:LifecycleExpiration:Delete` also notifies you when an object version is permanently deleted by an S3 Lifecycle configuration.
  +  The `s3:LifecycleExpiration:DeleteMarkerCreated` event type notifies you when S3 Lifecycle creates a delete marker after a current version of an object in a versioned bucket is deleted. S3 Lifecycle sets the delete marker's creation time to 00:00 UTC (midnight) of the current day. This creation time might differ from the event time in the `s3:LifecycleExpiration:DeleteMarkerCreated` notification that S3 sends. For more information, see [Deleting object versions from a versioning-enabled bucket](DeletingObjectVersions.md).

Amazon S3 can publish event notifications to an Amazon Simple Notification Service (Amazon SNS) topic, an Amazon Simple Queue Service (Amazon SQS) queue, or an AWS Lambda function. For more information, see [Amazon S3 Event Notifications](EventNotifications.md).

For instructions on how to configure Amazon S3 Event Notifications, see [Enabling event notifications by using Amazon SQS, Amazon SNS, and AWS Lambda](how-to-enable-disable-notification-intro.md).

The following is an example of a message that Amazon S3 sends to publish an `s3:LifecycleExpiration:Delete` event. For more information, see [Event message structure](notification-content-structure.md).

```
{
   "Records":[
      {
         "eventVersion":"2.3",
         "eventSource":"aws:s3",
         "awsRegion":"us-west-2",
         "eventTime":"1970-01-01T00:00:00.000Z",
         "eventName":"LifecycleExpiration:Delete",
         "userIdentity":{
            "principalId":"s3.amazonaws.com"
         },
         "requestParameters":{
            "sourceIPAddress":"s3.amazonaws.com"
         },
         "responseElements":{
            "x-amz-request-id":"C3D13FE58DE4C810",
            "x-amz-id-2":"FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
         },
         "s3":{
            "s3SchemaVersion":"1.0",
            "configurationId":"testConfigRule",
            "bucket":{
               "name":"amzn-s3-demo-bucket",
               "ownerIdentity":{
                  "principalId":"A3NL1KOZZKExample"
               },
               "arn":"arn:aws:s3:::amzn-s3-demo-bucket"
            },
            "object":{
               "key":"expiration/delete",
               "sequencer":"0055AED6DCD90281E5"
            }
         }
      }
   ]
}
```

Messages that Amazon S3 sends to publish an `s3:LifecycleTransition` event also include the following information:

```
"lifecycleEventData":{
    "transitionEventData": {
        "destinationStorageClass": the destination storage class for the object
    }
}
```
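
As a minimal sketch, a notification consumer (for example, an AWS Lambda function subscribed to the bucket's notifications) could pull the event name, bucket, and key out of this message structure as follows. The function name is hypothetical; the record shape follows the example message above.

```
def summarize_lifecycle_events(message):
    """Extract (event name, bucket, key) from each record in a notification."""
    return [
        {
            "event": record["eventName"],             # e.g. LifecycleExpiration:Delete
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
        }
        for record in message["Records"]
    ]

# Abbreviated version of the example message above.
message = {"Records": [{
    "eventName": "LifecycleExpiration:Delete",
    "s3": {"bucket": {"name": "amzn-s3-demo-bucket"},
           "object": {"key": "expiration/delete"}},
}]}
print(summarize_lifecycle_events(message))
```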

# Lifecycle configuration elements
<a name="intro-lifecycle-rules"></a>

An S3 Lifecycle configuration consists of Lifecycle rules that include various elements that describe the actions Amazon S3 takes during an object's lifetime. Each S3 bucket can have one Lifecycle configuration assigned to it, which can contain up to 1,000 rules. You specify an S3 Lifecycle configuration as XML, consisting of one or more Lifecycle rules, where each rule consists of one or more elements.

```
<LifecycleConfiguration>
    <Rule>
         <Element>
    </Rule>
    <Rule>
         <Element>
         <Element>
    </Rule>
</LifecycleConfiguration>
```

Each rule consists of the following:
+ Rule metadata that includes a rule ID, and a status that indicates whether the rule is enabled or disabled. If a rule is disabled, Amazon S3 doesn't perform any actions specified in the rule.
+ A filter that identifies the objects to which the rule applies. You can specify a filter by using the object size, the object key prefix, one or more object tags, or a combination of filters.
+ One or more transition or expiration actions with a date or a time period in the object's lifetime when you want Amazon S3 to perform the specified action. 

**Topics**
+ [ID element](#intro-lifecycle-rule-id)
+ [Status element](#intro-lifecycle-rule-status)
+ [Filter element](#intro-lifecycle-rules-filter)
+ [Elements to describe lifecycle actions](#intro-lifecycle-rules-actions)
+ [Adding filters to Lifecycle rules](intro-lifecycle-filters.md)

The following sections describe the XML elements in an S3 Lifecycle configuration. For example configurations, see [Examples of S3 Lifecycle configurations](lifecycle-configuration-examples.md).
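
If you manage the configuration programmatically, the same structure maps to a dictionary shape accepted by the AWS SDKs. The following Python sketch builds a hypothetical one-rule configuration (the rule ID, prefix, and values are placeholders) and checks the bucket-level limits described above. Applying it with boto3's `put_bucket_lifecycle_configuration` is shown only as a comment because it requires credentials.

```
# Hypothetical one-rule Lifecycle configuration (placeholder names and values).
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-logs",                 # must be unique within the bucket
            "Status": "Enabled",                  # Disabled rules are ignored
            "Filter": {"Prefix": "logs/"},        # applies to keys under logs/
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            "Expiration": {"Days": 365},
        }
    ]
}

# Bucket-level limits described above: at most 1,000 rules, IDs up to 255 characters.
assert len(lifecycle_configuration["Rules"]) <= 1000
assert all(len(rule["ID"]) <= 255 for rule in lifecycle_configuration["Rules"])

# To apply it (requires AWS credentials; illustration only):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="amzn-s3-demo-bucket",
#     LifecycleConfiguration=lifecycle_configuration,
# )
```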

## ID element
<a name="intro-lifecycle-rule-id"></a>

Lifecycle configurations are set at the bucket level, with each bucket having its own lifecycle configuration. An S3 Lifecycle configuration can have up to 1,000 rules per bucket. This limit is not adjustable. The `<ID>` element uniquely identifies a rule within a bucket's lifecycle configuration. ID length is limited to 255 characters.

## Status element
<a name="intro-lifecycle-rule-status"></a>

The `<Status>` element value can be either `Enabled` or `Disabled`. If a rule is disabled, Amazon S3 doesn't perform any of the actions defined in the rule.

## Filter element
<a name="intro-lifecycle-rules-filter"></a>

An S3 Lifecycle rule can apply to all or a subset of objects in a bucket, based on the `<Filter>` element that you specify in the rule. 

You can filter objects by key prefix, object tags, or a combination of both (in which case Amazon S3 uses a logical `AND` to combine the filters). For examples and more information about filters, see [Adding filters to Lifecycle rules](intro-lifecycle-filters.md).
+ **Specifying a filter by using key prefixes** – This example shows an S3 Lifecycle rule that applies to a subset of objects based on the key name prefix (`logs/`). For example, the Lifecycle rule applies to the objects `logs/mylog.txt`, `logs/temp1.txt`, and `logs/test.txt`. The rule does not apply to the object `example.jpg`.

  ```
  <LifecycleConfiguration>
      <Rule>
          <Filter>
             <Prefix>logs/</Prefix>
          </Filter>
          transition/expiration actions
           ...
      </Rule>
      ...
  </LifecycleConfiguration>
  ```

  If you want to apply a lifecycle action to a subset of objects based on different key name prefixes, specify separate rules. In each rule, specify a prefix-based filter. For example, to describe a lifecycle action for objects with the key prefixes `projectA/` and `projectB/`, you specify two rules as follows: 

  ```
  <LifecycleConfiguration>
      <Rule>
          <Filter>
             <Prefix>projectA/</Prefix>
          </Filter>
          transition/expiration actions
           ...
      </Rule>
  
      <Rule>
          <Filter>
             <Prefix>projectB/</Prefix>
          </Filter>
          transition/expiration actions
           ...
      </Rule>
  </LifecycleConfiguration>
  ```

  For more information about object keys, see [Naming Amazon S3 objects](object-keys.md). 
+ **Specifying a filter based on object tags** – In the following example, the Lifecycle rule specifies a filter based on a tag (`key`) and value (`value`). The rule then applies only to a subset of objects with the specific tag.

  ```
  <LifecycleConfiguration>
      <Rule>
          <Filter>
             <Tag>
                <Key>key</Key>
                <Value>value</Value>
             </Tag>
          </Filter>
          transition/expiration actions
          ...
      </Rule>
  </LifecycleConfiguration>
  ```

  You can specify a filter based on multiple tags. You must wrap the tags in the `<And>` element, as shown in the following example. The rule directs Amazon S3 to perform lifecycle actions on objects with two tags (with the specific tag key and value).

  ```
  <LifecycleConfiguration>
      <Rule>
        <Filter>
           <And>
              <Tag>
                 <Key>key1</Key>
                 <Value>value1</Value>
              </Tag>
              <Tag>
                 <Key>key2</Key>
                 <Value>value2</Value>
              </Tag>
               ...
            </And>
        </Filter>
        transition/expiration actions
      </Rule>
  </LifecycleConfiguration>
  ```

  The Lifecycle rule applies to objects that have both of the tags specified. Amazon S3 performs a logical `AND`. Note the following:
  + Each tag must match *both* the key and value exactly. If you specify only a `<Key>` element and no `<Value>` element, the rule will apply only to objects that match the tag key and that do *not* have a value specified.
  + The rule applies to a subset of objects that has all the tags specified in the rule. If an object has additional tags specified, the rule will still apply.
**Note**  
When you specify multiple tags in a filter, each tag key must be unique.
+ **Specifying a filter based on both the prefix and one or more tags** – In a Lifecycle rule, you can specify a filter based on both the key prefix and one or more tags. Again, you must wrap all of these filter elements in the `<And>` element, as follows:

  ```
  <LifecycleConfiguration>
      <Rule>
          <Filter>
            <And>
               <Prefix>key-prefix</Prefix>
               <Tag>
                  <Key>key1</Key>
                  <Value>value1</Value>
               </Tag>
               <Tag>
                  <Key>key2</Key>
                  <Value>value2</Value>
               </Tag>
                ...
            </And>
          </Filter>
          <Status>Enabled</Status>
          transition/expiration actions
      </Rule>
  </LifecycleConfiguration>
  ```

  Amazon S3 combines these filters by using a logical `AND`. That is, the rule applies to the subset of objects with the specified key prefix and the specified tags. A filter can have only one prefix, and zero or more tags.
+ You can specify an **empty filter**, in which case the rule applies to all objects in the bucket.

  ```
  <LifecycleConfiguration>
      <Rule>
          <Filter>
          </Filter>
          <Status>Enabled</Status>
          transition/expiration actions
      </Rule>
  </LifecycleConfiguration>
  ```
+ To filter a rule by **object size**, you can specify a minimum size (`ObjectSizeGreaterThan`) or a maximum size (`ObjectSizeLessThan`), or you can specify a range of object sizes.

  Object size values are in bytes. By default, objects smaller than 128 KB will not be transitioned to any storage class, unless you specify a smaller minimum size (`ObjectSizeGreaterThan`) or a maximum size (`ObjectSizeLessThan`). For more information, see [Example: Allowing objects smaller than 128 KB to be transitioned](lifecycle-configuration-examples.md#lc-small-objects).

  ```
  <LifecycleConfiguration>
      <Rule>
          <Filter>
              <ObjectSizeGreaterThan>500</ObjectSizeGreaterThan>   
          </Filter>
          <Status>Enabled</Status>
          transition/expiration actions
      </Rule>
  </LifecycleConfiguration>
  ```
**Note**  
The `ObjectSizeGreaterThan` and `ObjectSizeLessThan` filters exclude the specified values. For example, if you create a rule to move objects from 128 KB to 1024 KB in size from the S3 Standard storage class to the S3 Standard-IA storage class, objects that are exactly 128 KB or exactly 1024 KB won't transition to S3 Standard-IA. The rule applies only to objects that are greater than 128 KB and less than 1024 KB in size. 

  If you're specifying an object size range, the `ObjectSizeGreaterThan` integer must be less than the `ObjectSizeLessThan` value. When using more than one filter, you must wrap the filters in an `<And>` element. The following example shows how to specify objects in a range between 500 bytes and 64,000 bytes. 

  ```
  <LifecycleConfiguration>
      <Rule>
          <Filter>
              <And>
                  <Prefix>key-prefix</Prefix>
                  <ObjectSizeGreaterThan>500</ObjectSizeGreaterThan>
                  <ObjectSizeLessThan>64000</ObjectSizeLessThan>
              </And>    
          </Filter>
          <Status>Enabled</Status>
          transition/expiration actions
      </Rule>
  </LifecycleConfiguration>
  ```
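
The exclusive bounds described above can be modeled as strict comparisons. The following Python sketch is illustrative only, not S3's implementation; values are in bytes.

```
def matches_size_filter(size_bytes, greater_than=None, less_than=None):
    """Strict comparisons: objects exactly at either boundary don't match."""
    if greater_than is not None and not size_bytes > greater_than:
        return False
    if less_than is not None and not size_bytes < less_than:
        return False
    return True

print(matches_size_filter(500, greater_than=500, less_than=64000))    # False (boundary excluded)
print(matches_size_filter(501, greater_than=500, less_than=64000))    # True
print(matches_size_filter(64000, greater_than=500, less_than=64000))  # False
```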

## Elements to describe lifecycle actions
<a name="intro-lifecycle-rules-actions"></a>

You can direct Amazon S3 to perform specific actions in an object's lifetime by specifying one or more of the following predefined actions in an S3 Lifecycle rule. The effect of these actions depends on the versioning state of your bucket. 
+ **`Transition` action element** – You specify the `Transition` action to transition objects from one storage class to another. For more information about transitioning objects, see [Supported transitions](lifecycle-transition-general-considerations.md#lifecycle-general-considerations-transition-sc). When a specified date or time period in the object's lifetime is reached, Amazon S3 performs the transition. 

  For a versioned bucket (versioning-enabled or versioning-suspended bucket), the `Transition` action applies to the current object version. To manage noncurrent versions, Amazon S3 defines the `NoncurrentVersionTransition` action (described later in this topic).
+ **`Expiration` action element** – The `Expiration` action expires objects identified in the rule and applies to eligible objects in any of the Amazon S3 storage classes. For more information about storage classes, see [Understanding and managing Amazon S3 storage classes](storage-class-intro.md). Amazon S3 makes all expired objects unavailable. Whether the objects are permanently removed depends on the versioning state of the bucket. 
  + **Nonversioned bucket** – The `Expiration` action results in Amazon S3 permanently removing the object. 
  + **Versioned bucket** – For a versioned bucket (that is, versioning-enabled or versioning-suspended), there are several considerations that guide how Amazon S3 handles the `Expiration` action. For versioning-enabled or versioning-suspended buckets, the following applies:
    + The `Expiration` action applies only to the current version (it has no impact on noncurrent object versions).
    + Amazon S3 doesn't take any action if there are one or more object versions and the delete marker is the current version.
    + If the current object version is the only object version and it is also a delete marker (also referred to as an *expired object delete marker*, where all object versions are deleted and only a delete marker remains), Amazon S3 removes the expired object delete marker. You can also use the expiration action to direct Amazon S3 to remove any expired object delete markers. For an example, see [Removing expired object delete markers in a versioning-enabled bucket](lifecycle-configuration-examples.md#lifecycle-config-conceptual-ex7). 

    For more information, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).

    Also consider the following when setting up Amazon S3 to manage expiration:
    + **Versioning-enabled bucket** 

      If the current object version is not a delete marker, Amazon S3 adds a delete marker with a unique version ID. This makes the current version noncurrent, and the delete marker the current version. 
    + **Versioning-suspended bucket** 

      In a versioning-suspended bucket, the expiration action causes Amazon S3 to create a delete marker with `null` as the version ID. This delete marker replaces any object version with a null version ID in the version hierarchy, which effectively deletes the object. 

In addition, Amazon S3 provides the following actions that you can use to manage noncurrent object versions in a versioned bucket (that is, versioning-enabled and versioning-suspended buckets).
+ **`NoncurrentVersionTransition` action element** – Use this action to specify when Amazon S3 transitions objects to the specified storage class. You can base this transition on a certain number of days since the objects became noncurrent (`<NoncurrentDays>`). In addition to the number of days, you can also specify the number of noncurrent versions (`<NewerNoncurrentVersions>`) to retain (between 1 and 100). This value determines how many newer noncurrent versions must exist before Amazon S3 can transition a given version. Amazon S3 will transition any additional noncurrent versions beyond the specified number to retain. For the transition to occur, both the `<NoncurrentDays>` **and** the `<NewerNoncurrentVersions>` values must be exceeded.

  To specify the number of noncurrent versions to retain, you must also provide a `<Filter>` element. If you don't specify a `<Filter>` element, Amazon S3 generates an `InvalidRequest` error when you specify the number of noncurrent versions to retain.

  For more information about transitioning objects, see [Supported transitions](lifecycle-transition-general-considerations.md#lifecycle-general-considerations-transition-sc). For details about how Amazon S3 calculates the date when you specify the number of days in the `NoncurrentVersionTransition` action, see [Lifecycle rules: Based on an object's age](#intro-lifecycle-rules-number-of-days).
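
  As an illustration of these elements, the following rule sketch (with placeholder ID and prefix values) transitions noncurrent versions to S3 Glacier Flexible Retrieval after they have been noncurrent for 30 days, while always retaining the 5 newest noncurrent versions. The `<Filter>` element is included because it's required when you specify `<NewerNoncurrentVersions>`.

  ```
  <LifecycleConfiguration>
      <Rule>
          <ID>transition-noncurrent-versions</ID>
          <Filter>
             <Prefix>logs/</Prefix>
          </Filter>
          <Status>Enabled</Status>
          <NoncurrentVersionTransition>
              <NoncurrentDays>30</NoncurrentDays>
              <NewerNoncurrentVersions>5</NewerNoncurrentVersions>
              <StorageClass>GLACIER</StorageClass>
          </NoncurrentVersionTransition>
      </Rule>
  </LifecycleConfiguration>
  ```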
+ **`NoncurrentVersionExpiration` action element** – Use this action to direct Amazon S3 to permanently delete noncurrent versions of objects. These deleted objects can't be recovered. You can base this expiration on a certain number of days since the objects became noncurrent (`<NoncurrentDays>`). In addition to the number of days, you can also specify the number of noncurrent versions (`<NewerNoncurrentVersions>`) to retain (between 1 and 100). This value specifies how many newer noncurrent versions must exist before Amazon S3 can expire a given version. Amazon S3 will permanently delete any additional noncurrent versions beyond the specified number to retain. For the deletion to occur, both the `<NoncurrentDays>` **and** the `<NewerNoncurrentVersions>` values must be exceeded.

  To specify the number of noncurrent versions to retain, you must also provide a `<Filter>` element. If you don't specify a `<Filter>` element, Amazon S3 generates an `InvalidRequest` error when you specify the number of noncurrent versions to retain.

  Delayed removal of noncurrent objects can be helpful when you need to correct any accidental deletes or overwrites. For example, you can configure an expiration rule to delete noncurrent versions five days after they become noncurrent. Suppose that on 1/1/2014 at 10:30 AM UTC, you create an object called `photo.gif` (version ID 111111). On 1/2/2014 at 11:30 AM UTC, you accidentally delete `photo.gif` (version ID 111111), which creates a delete marker with a new version ID (such as version ID 4857693). You now have five days to recover the original version of `photo.gif` (version ID 111111) before the deletion is permanent. On 1/8/2014 at 00:00 UTC, the Lifecycle rule for expiration runs and permanently deletes `photo.gif` (version ID 111111), five days after it became a noncurrent version. 

  For details about how Amazon S3 calculates the date when you specify the number of days in a `NoncurrentVersionExpiration` action, see [Lifecycle rules: Based on an object's age](#intro-lifecycle-rules-number-of-days).
**Note**  
Object expiration lifecycle configurations don't remove incomplete multipart uploads. To remove incomplete multipart uploads, you must use the `AbortIncompleteMultipartUpload` Lifecycle configuration action that's described later in this section. 

In addition to the transition and expiration actions, you can use the following Lifecycle configuration actions to direct Amazon S3 to stop incomplete multipart uploads or to remove expired object delete markers: 
+ **`AbortIncompleteMultipartUpload` action element** – Use this element to set a maximum time (in days) that you want to allow multipart uploads to remain in progress. If the applicable multipart uploads (determined by the key name prefix specified in the Lifecycle rule) aren't successfully completed within the predefined time period, Amazon S3 stops the incomplete multipart uploads. For more information, see [Aborting a multipart upload](abort-mpu.md). 
**Note**  
You can't specify this lifecycle action in a rule that has a filter that uses object tags. 
+ **`ExpiredObjectDeleteMarker` action element** – In a versioning-enabled bucket, a delete marker with zero noncurrent versions is referred to as an *expired object delete marker*. You can use this lifecycle action to direct Amazon S3 to remove expired object delete markers. For an example, see [Removing expired object delete markers in a versioning-enabled bucket](lifecycle-configuration-examples.md#lifecycle-config-conceptual-ex7).
**Note**  
You can't specify this lifecycle action in a rule that has a filter that uses object tags.
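
As an illustration of the `AbortIncompleteMultipartUpload` action, the following rule sketch (with a placeholder ID and prefix) stops multipart uploads that remain incomplete 7 days after they were initiated.

```
<LifecycleConfiguration>
    <Rule>
        <ID>abort-incomplete-uploads</ID>
        <Filter>
           <Prefix>uploads/</Prefix>
        </Filter>
        <Status>Enabled</Status>
        <AbortIncompleteMultipartUpload>
            <DaysAfterInitiation>7</DaysAfterInitiation>
        </AbortIncompleteMultipartUpload>
    </Rule>
</LifecycleConfiguration>
```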

### How Amazon S3 calculates how long an object has been noncurrent
<a name="non-current-days-calculations"></a>

In a versioning-enabled bucket, you can have multiple versions of an object. There is always one current version, and zero or more noncurrent versions. Each time you upload an object, the previous current version is retained as a noncurrent version, and the newly added version, the successor, becomes the current version. To determine the number of days an object has been noncurrent, Amazon S3 looks at when its successor was created and uses the number of days since the successor's creation as the number of days the object has been noncurrent.
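
This calculation can be sketched as follows; the helper is illustrative, not an S3 API.

```
from datetime import datetime, timezone

def days_noncurrent(successor_created, now):
    """Days since the successor version was created (i.e., days noncurrent)."""
    return (now - successor_created).days

# A version became noncurrent when its successor was uploaded on Jan 2 at 11:30 UTC.
successor = datetime(2014, 1, 2, 11, 30, tzinfo=timezone.utc)
print(days_noncurrent(successor, datetime(2014, 1, 8, tzinfo=timezone.utc)))  # 5
```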

**Restoring previous versions of an object when using S3 Lifecycle configurations**  
As explained in [Restoring previous versions](RestoringPreviousVersions.md), you can use either of the following two methods to retrieve previous versions of an object:  
**Method 1 – Copy a noncurrent version of the object into the same bucket.** The copied object becomes the current version of that object, and all object versions are preserved.
**Method 2 – Permanently delete the current version of the object.** When you delete the current object version, you, in effect, turn the noncurrent version into the current version of that object. 
When you're using S3 Lifecycle configuration rules with versioning-enabled buckets, we recommend as a best practice that you use Method 1.   
S3 Lifecycle operates under an eventually consistent model. A current version that you permanently deleted might not disappear until the changes propagate to all of the Amazon S3 systems. (Therefore, Amazon S3 might be temporarily unaware of this deletion.) In the meantime, the lifecycle rule that you configured to expire noncurrent objects might permanently remove noncurrent objects, including the one that you want to restore. So, copying the old version, as recommended in Method 1, is the safer alternative.

### Lifecycle actions and bucket versioning state
<a name="lifecycle-actions-bucket-versioning-state"></a>

The following table summarizes the behavior of the S3 Lifecycle configuration rule actions on objects in relation to the versioning state of the bucket that contains the object.


| Action | Nonversioned bucket (versioning not enabled) | Versioning-enabled bucket | Versioning-suspended bucket | 
| --- | --- | --- | --- | 
|  `Transition` When a specified date or time period in the object's lifetime is reached.  | Amazon S3 transitions the object to the specified storage class. | Amazon S3 transitions the current version of the object to the specified storage class. | Same behavior as a versioning-enabled bucket. | 
|  `Expiration` When a specified date or time period in the object's lifetime is reached.  | The Expiration action deletes the object, and the deleted object can't be recovered. | If the current version isn't a delete marker, Amazon S3 creates a delete marker, which becomes the current version, and the existing current version is retained as a noncurrent version. | The lifecycle action creates a delete marker with null version ID, which becomes the current version. If the version ID of the current version of the object is null, the Expiration action permanently deletes this version. Otherwise, the current version is retained as a noncurrent version. | 
|  `NoncurrentVersionTransition` For noncurrent versions in a versioning-enabled or versioning-suspended bucket, S3 Lifecycle transitions an object when the number of days since the object became noncurrent exceeds the value specified under **Days after objects become noncurrent** (`<NoncurrentDays>`) in the rule **and** the number of newer versions exceeds the value specified in **Number of newer versions to retain** (`<NewerNoncurrentVersions>`) in the rule.  | NoncurrentVersionTransition has no effect. |  Amazon S3 transitions the noncurrent object versions to the specified storage class.  | Same behavior as a versioning-enabled bucket. | 
|  `NoncurrentVersionExpiration` For noncurrent versions in a versioning-enabled or versioning-suspended bucket, S3 Lifecycle expires an object when the number of days since the object became noncurrent exceeds the value specified under **Days after objects become noncurrent** (`<NoncurrentDays>`) in the rule **and** the number of newer versions exceeds the value specified in **Number of newer versions to retain** (`<NewerNoncurrentVersions>`) in the rule.  | NoncurrentVersionExpiration has no effect. | The NoncurrentVersionExpiration action permanently deletes the noncurrent version of the object, and the deleted object can't be recovered. | Same behavior as a versioning-enabled bucket. | 

### Lifecycle rules: Based on an object's age
<a name="intro-lifecycle-rules-number-of-days"></a>

You can specify a time period, in the number of days from the creation (or modification) of the object, when Amazon S3 can take the specified action. 

When you specify the number of days in the `Transition` and `Expiration` actions in an S3 Lifecycle configuration, note the following:
+ The value that you specify is the number of days since object creation when the action will occur.
+ Amazon S3 calculates the time by adding the number of days specified in the rule to the object creation time and rounding up the resulting time to the next day at midnight UTC. For example, if an object was created on 1/15/2014 at 10:30 AM UTC and you specify 3 days in a transition rule, then the transition date of the object would be calculated as 1/19/2014 00:00 UTC. 

**Note**  
Amazon S3 maintains only the last modified date for each object. For example, the Amazon S3 console shows the **Last modified** date in the object's **Properties** pane. When you initially create a new object, this date reflects the date that the object is created. If you replace the object, the date changes accordingly. Therefore, the creation date is synonymous with the **Last modified** date. 

When specifying the number of days in the `NoncurrentVersionTransition` and `NoncurrentVersionExpiration` actions in a Lifecycle configuration, note the following:
+ The value that you specify is the number of days from when the version of the object becomes noncurrent (that is, when the object is overwritten or deleted) that Amazon S3 will perform the action on the specified object or objects.
+ Amazon S3 calculates the time by adding the number of days specified in the rule to the time when the new successor version of the object is created and rounding up the resulting time to the next day at midnight UTC. For example, in your bucket, suppose that you have a current version of an object that was created on 1/1/2014 at 10:30 AM UTC. If the new version of the object that replaces the current version is created on 1/15/2014 at 10:30 AM UTC, and you specify 3 days in a transition rule, the transition date of the object is calculated as 1/19/2014 00:00 UTC. 
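
In both cases, the calculation described above follows the same pattern: add the configured number of days to the reference time (object creation, or the successor's creation time for noncurrent actions), then round up to the next midnight UTC. The following Python helper is an illustrative model of that rule, not an S3 API.

```
from datetime import datetime, timedelta, timezone

def action_date(reference_time, days):
    """Add `days`, then round up to the next midnight UTC."""
    t = reference_time + timedelta(days=days)
    midnight = t.replace(hour=0, minute=0, second=0, microsecond=0)
    return midnight if t == midnight else midnight + timedelta(days=1)

# Object (or successor version) created 1/15/2014 at 10:30 AM UTC, 3-day rule:
created = datetime(2014, 1, 15, 10, 30, tzinfo=timezone.utc)
print(action_date(created, 3))  # 2014-01-19 00:00:00+00:00
```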

### Lifecycle rules: Based on a specific date
<a name="intro-lifecycle-rules-date"></a>

When specifying an action in an S3 Lifecycle rule, you can specify a date when you want Amazon S3 to take the action. When the specific date arrives, Amazon S3 applies the action to all qualified objects (based on the filter criteria). 

If you specify an S3 Lifecycle action with a date that is in the past, all qualified objects become immediately eligible for that lifecycle action.

**Important**  
The date-based action is not a one-time action. Amazon S3 continues to apply the date-based action even after the date has passed, as long as the rule status is `Enabled`.  
For example, suppose that you specify a date-based `Expiration` action to delete all objects (assume that no filter is specified in the rule). On the specified date, Amazon S3 expires all the objects in the bucket. Amazon S3 also continues to expire any new objects that you create in the bucket. To stop the lifecycle action, you must either remove the action from the lifecycle rule, disable the rule, or delete the rule from the lifecycle configuration.

The date value must conform to the ISO 8601 format. The time is always midnight UTC. 

**Note**  
You can't create date-based Lifecycle rules by using the Amazon S3 console, but you can view, disable, or delete such rules. 
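
You can still create date-based rules with the AWS CLI or the SDKs. As a sketch, a date-based `Expiration` rule expressed as a Boto3-style dict might look like the following (the rule ID and date are illustrative):

```python
from datetime import datetime, timezone

# A date-based Expiration rule as you might pass it in the Rules list of a
# lifecycle configuration. The Date value must be midnight UTC.
rule = {
    "ID": "expire-on-date",
    "Filter": {},
    "Status": "Enabled",
    "Expiration": {"Date": datetime(2026, 1, 1, tzinfo=timezone.utc)},
}
print(rule["Expiration"]["Date"].isoformat())  # 2026-01-01T00:00:00+00:00
```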

# Adding filters to Lifecycle rules
<a name="intro-lifecycle-filters"></a>

Filters are an optional Lifecycle rule element that you can use to specify which objects the rule applies to.

The following elements can be used to filter objects:

**Key prefix**  
You can filter objects based on a prefix. If you want to apply a lifecycle action to a subset of objects with different prefixes, create separate rules for each action.

**Object tags**  
You can filter objects based on one or more tags. Each tag must match both the key and value exactly. If you specify multiple tags, each tag key must be unique. A filter with multiple object tags applies to the subset of objects that have all of the specified tags. If an object has additional tags, the filter still applies.  
If you specify only a `Key` element and no `Value` element, the rule applies only to objects that match the tag key and that don't have a value specified.

**Minimum or maximum object size**  
You can filter objects based on size. You can specify a minimum size (`ObjectSizeGreaterThan`), a maximum size (`ObjectSizeLessThan`), or a range of object sizes in the same filter. Object size values are in bytes. Maximum filter size is 50 TB. Amazon S3 applies a default minimum object size to lifecycle configurations. For more information, see [Example: Allowing objects smaller than 128 KB to be transitioned](lifecycle-configuration-examples.md#lc-small-objects).

You can combine different filter elements, in which case Amazon S3 combines them by using a logical `AND`.

## Filter examples
<a name="filter-examples"></a>

The following examples show how you can use different filter elements:
+ **Specifying a filter by using key prefixes** – This example shows an S3 Lifecycle rule that applies to a subset of objects based on the key name prefix (`logs/`). For example, the Lifecycle rule applies to the objects `logs/mylog.txt`, `logs/temp1.txt`, and `logs/test.txt`. The rule does not apply to the object `example.jpg`.

  ```
  <LifecycleConfiguration>
      <Rule>
          <Filter>
             <Prefix>logs/</Prefix>
          </Filter>
          transition/expiration actions
           ...
      </Rule>
      ...
  </LifecycleConfiguration>
  ```
**Note**  
 If you have one or more prefixes that start with the same characters, you can include all of those prefixes in your rule by specifying a partial prefix with no trailing slash (`/`) in the filter. For example, suppose that you have these prefixes:  

  ```
  sales1999/
  sales2000/
  sales2001/
  ```
To include all three prefixes in your rule, specify `sales` as the prefix in your lifecycle rule.

  If you want to apply a lifecycle action to a subset of objects based on different key name prefixes, specify separate rules. In each rule, specify a prefix-based filter. For example, to describe a lifecycle action for objects with the key prefixes `projectA/` and `projectB/`, you specify two rules as follows: 

  ```
  <LifecycleConfiguration>
      <Rule>
          <Filter>
             <Prefix>projectA/</Prefix>
          </Filter>
          transition/expiration actions
           ...
      </Rule>
  
      <Rule>
          <Filter>
             <Prefix>projectB/</Prefix>
          </Filter>
          transition/expiration actions
           ...
      </Rule>
  </LifecycleConfiguration>
  ```

  For more information about object keys, see [Naming Amazon S3 objects](object-keys.md). 
+ **Specifying a filter based on object tags** – In the following example, the Lifecycle rule specifies a filter based on a tag (`key`) and value (`value`). The rule then applies only to a subset of objects with the specific tag.

  ```
  <LifecycleConfiguration>
      <Rule>
          <Filter>
             <Tag>
                <Key>key</Key>
                <Value>value</Value>
             </Tag>
          </Filter>
          transition/expiration actions
          ...
      </Rule>
  </LifecycleConfiguration>
  ```

  You can specify a filter based on multiple tags. You must wrap the tags in the `<And>` element, as shown in the following example. The rule directs Amazon S3 to perform lifecycle actions on objects with two tags (with the specific tag key and value).

  ```
  <LifecycleConfiguration>
      <Rule>
        <Filter>
           <And>
              <Tag>
                 <Key>key1</Key>
                 <Value>value1</Value>
              </Tag>
              <Tag>
                 <Key>key2</Key>
                 <Value>value2</Value>
              </Tag>
               ...
            </And>
        </Filter>
        transition/expiration actions
      </Rule>
  </LifecycleConfiguration>
  ```

  The Lifecycle rule applies to objects that have both of the tags specified. Amazon S3 performs a logical `AND`. Note the following:
  + Each tag must match *both* the key and value exactly. If you specify only a `<Key>` element and no `<Value>` element, the rule will apply only to objects that match the tag key and that do *not* have a value specified.
  + The rule applies to a subset of objects that has all the tags specified in the rule. If an object has additional tags specified, the rule will still apply.
**Note**  
When you specify multiple tags in a filter, each tag key must be unique.
+ **Specifying a filter based on both the prefix and one or more tags** – In a Lifecycle rule, you can specify a filter based on both the key prefix and one or more tags. Again, you must wrap all of these filter elements in the `<And>` element, as follows:

  ```
  <LifecycleConfiguration>
      <Rule>
          <Filter>
            <And>
               <Prefix>key-prefix</Prefix>
               <Tag>
                  <Key>key1</Key>
                  <Value>value1</Value>
               </Tag>
               <Tag>
                  <Key>key2</Key>
                  <Value>value2</Value>
               </Tag>
                ...
            </And>
          </Filter>
          <Status>Enabled</Status>
          transition/expiration actions
      </Rule>
  </LifecycleConfiguration>
  ```

  Amazon S3 combines these filters by using a logical `AND`. That is, the rule applies to the subset of objects with the specified key prefix and the specified tags. A filter can have only one prefix, and zero or more tags.
+ **Specifying an empty filter** – You can specify an empty filter, in which case the rule applies to all objects in the bucket.

  ```
  <LifecycleConfiguration>
      <Rule>
          <Filter>
          </Filter>
          <Status>Enabled</Status>
          transition/expiration actions
      </Rule>
  </LifecycleConfiguration>
  ```
+ **Specifying an object size filter** – To filter a rule by object size, you can specify a minimum size (`ObjectSizeGreaterThan`) or a maximum size (`ObjectSizeLessThan`), or you can specify a range of object sizes.

  Object size values are in bytes. Maximum filter size is 50 TB. Some storage classes have minimum object size limitations. For more information, see [Comparing the Amazon S3 storage classes](storage-class-intro.md#sc-compare).

  ```
  <LifecycleConfiguration>
      <Rule>
          <Filter>
              <ObjectSizeGreaterThan>500</ObjectSizeGreaterThan>   
          </Filter>
          <Status>Enabled</Status>
          transition/expiration actions
      </Rule>
  </LifecycleConfiguration>
  ```
**Note**  
The `ObjectSizeGreaterThan` and `ObjectSizeLessThan` filters exclude the specified values. For example, suppose that you create a rule to transition objects sized from 128 KB to 1024 KB from the S3 Standard storage class to the S3 Standard-IA storage class. Objects that are exactly 128 KB or exactly 1024 KB won't transition to S3 Standard-IA. The rule applies only to objects that are greater than 128 KB and less than 1024 KB in size. 
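
The strict comparisons can be sketched as a simple predicate. This is illustrative only; Amazon S3 evaluates the filter server-side:

```python
def size_filter_matches(size, greater_than=None, less_than=None):
    """Both bounds are strict: objects exactly at a limit don't match."""
    if greater_than is not None and not size > greater_than:
        return False
    if less_than is not None and not size < less_than:
        return False
    return True

KB = 1024
print(size_filter_matches(128 * KB, greater_than=128 * KB, less_than=1024 * KB))      # False: exactly at the lower limit
print(size_filter_matches(128 * KB + 1, greater_than=128 * KB, less_than=1024 * KB))  # True
```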

  If you're specifying an object size range, the `ObjectSizeGreaterThan` integer must be less than the `ObjectSizeLessThan` value. When using more than one filter, you must wrap the filters in an `<And>` element. The following example shows how to specify objects in a range between 500 bytes and 64,000 bytes. 

  ```
  <LifecycleConfiguration>
      <Rule>
          <Filter>
              <And>
                  <Prefix>key-prefix</Prefix>
                  <ObjectSizeGreaterThan>500</ObjectSizeGreaterThan>
                  <ObjectSizeLessThan>64000</ObjectSizeLessThan>
              </And>    
          </Filter>
          <Status>Enabled</Status>
          transition/expiration actions
      </Rule>
  </LifecycleConfiguration>
  ```

# How Amazon S3 handles conflicts in lifecycle configurations
<a name="lifecycle-conflicts"></a>

Generally, Amazon S3 Lifecycle optimizes for cost. For example, if two expiration policies overlap, the shorter expiration policy is honored so that data is not stored for longer than expected. Likewise, if two transition policies overlap, S3 Lifecycle transitions your objects to the lower-cost storage class.

In both cases, S3 Lifecycle tries to choose the path that is least expensive for you. An exception to this general rule is with the S3 Intelligent-Tiering storage class. S3 Intelligent-Tiering is favored by S3 Lifecycle over any storage class, aside from the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes.

When you have multiple rules in an S3 Lifecycle configuration, an object can become eligible for multiple S3 Lifecycle actions on the same day. In such cases, Amazon S3 follows these general rules:
+ Permanent deletion takes precedence over transition.
+ Transition takes precedence over creation of [delete markers](DeleteMarker.md).
+ When an object is eligible for both an S3 Glacier Flexible Retrieval and an S3 Standard-IA (or an S3 One Zone-IA) transition, Amazon S3 chooses the S3 Glacier Flexible Retrieval transition.
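
These precedence rules can be summarized in a short sketch. This is an illustration of the documented ordering, not Amazon S3's implementation; the action names are hypothetical labels:

```python
# Highest precedence first, per the rules above.
PRECEDENCE = [
    "permanent-deletion",
    "transition-to-glacier-flexible-retrieval",
    "transition-to-standard-ia",
    "create-delete-marker",
]

def resolve(eligible_actions):
    """Return the single action Amazon S3 performs when an object is
    eligible for several lifecycle actions on the same day."""
    return next(a for a in PRECEDENCE if a in eligible_actions)

print(resolve({"transition-to-standard-ia",
               "transition-to-glacier-flexible-retrieval"}))
# transition-to-glacier-flexible-retrieval
```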

## Examples of overlapping filters and conflicting lifecycle actions
<a name="lifecycle-config-conceptual-ex5"></a>

You might specify an S3 Lifecycle configuration with overlapping prefixes or actions. The following examples show how Amazon S3 resolves potential conflicts.

**Example 1: Overlapping prefixes (no conflict)**  
The following example configuration has two rules that specify overlapping prefixes as follows:  
+ The first rule specifies an empty filter, indicating all objects in the bucket. 
+ The second rule specifies a key name prefix (`logs/`), indicating only a subset of objects.
Rule 1 directs Amazon S3 to delete all objects one year after creation. Rule 2 directs Amazon S3 to transition a subset of objects to the S3 Standard-IA storage class 30 days after creation.  

```
<LifecycleConfiguration>
  <Rule>
    <ID>Rule 1</ID>
    <Filter>
    </Filter>
    <Status>Enabled</Status>
    <Expiration>
      <Days>365</Days>
    </Expiration>
  </Rule>
  <Rule>
    <ID>Rule 2</ID>
    <Filter>
      <Prefix>logs/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <StorageClass>STANDARD_IA</StorageClass>
      <Days>30</Days>
    </Transition>
  </Rule>
</LifecycleConfiguration>
```
Because there is no conflict in this case, Amazon S3 transitions the objects with the `logs/` prefix to the S3 Standard-IA storage class 30 days after creation. When any object reaches one year after creation, it is deleted.

**Example 2: Conflicting lifecycle actions**  
In this example configuration, there are two rules that direct Amazon S3 to perform two different actions on the same set of objects at the same time in the objects' lifetime:  
+ Both rules specify the same key name prefix, so both rules apply to the same set of objects.
+ Both rules specify the same 365 days after object creation when the rules apply.
+ One rule directs Amazon S3 to transition objects to the S3 Standard-IA storage class, and the other rule directs Amazon S3 to expire the objects at the same time.

```
<LifecycleConfiguration>
  <Rule>
    <ID>Rule 1</ID>
    <Filter>
      <Prefix>logs/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Expiration>
      <Days>365</Days>
    </Expiration>        
  </Rule>
  <Rule>
    <ID>Rule 2</ID>
    <Filter>
      <Prefix>logs/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <StorageClass>STANDARD_IA</StorageClass>
      <Days>365</Days>
    </Transition>
   </Rule>
</LifecycleConfiguration>
```
In this case, because you want objects to expire (to be removed), there is no point in changing the storage class, so Amazon S3 chooses the expiration action on these objects.

**Example 3: Overlapping prefixes resulting in conflicting lifecycle actions**  
In this example, the configuration has two rules, which specify overlapping prefixes as follows:  
+ Rule 1 specifies an empty prefix (indicating all objects).
+ Rule 2 specifies a key name prefix (`logs/`) that identifies a subset of all objects.
For the subset of objects with the `logs/` key name prefix, S3 Lifecycle actions in both rules apply. One rule directs Amazon S3 to transition objects 10 days after creation, and another rule directs Amazon S3 to transition objects 365 days after creation.   

```
<LifecycleConfiguration>
  <Rule>
    <ID>Rule 1</ID>
    <Filter>
      <Prefix></Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <StorageClass>STANDARD_IA</StorageClass>
      <Days>10</Days> 
    </Transition>
  </Rule>
  <Rule>
    <ID>Rule 2</ID>
    <Filter>
      <Prefix>logs/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <StorageClass>STANDARD_IA</StorageClass>
      <Days>365</Days> 
    </Transition>
   </Rule>
</LifecycleConfiguration>
```
In this case, because the earlier transition is less expensive, Amazon S3 transitions the objects with the `logs/` key name prefix 10 days after creation. 

**Example 4: Tag-based filtering and resulting conflicting lifecycle actions**  
Suppose that you have the following S3 Lifecycle configuration that has two rules, each specifying a tag filter:  
+ Rule 1 specifies a tag-based filter (`tag1/value1`). This rule directs Amazon S3 to transition objects to the S3 Glacier Flexible Retrieval storage class 365 days after creation.
+ Rule 2 specifies a tag-based filter (`tag2/value2`). This rule directs Amazon S3 to expire objects 14 days after creation.
The S3 Lifecycle configuration is shown in the following example.  

```
<LifecycleConfiguration>
  <Rule>
    <ID>Rule 1</ID>
    <Filter>
      <Tag>
         <Key>tag1</Key>
         <Value>value1</Value>
      </Tag>
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <StorageClass>GLACIER</StorageClass>
      <Days>365</Days> 
    </Transition>
  </Rule>
  <Rule>
    <ID>Rule 2</ID>
    <Filter>
      <Tag>
         <Key>tag2</Key>
         <Value>value2</Value>
      </Tag>
    </Filter>
    <Status>Enabled</Status>
    <Expiration>
      <Days>14</Days> 
    </Expiration>
   </Rule>
</LifecycleConfiguration>
```
If an object has both tags, then Amazon S3 has to decide which rule to follow. In this case, Amazon S3 expires the object 14 days after creation. The object is removed, and therefore the transition action does not apply.





# Examples of S3 Lifecycle configurations
<a name="lifecycle-configuration-examples"></a>

This section provides examples of S3 Lifecycle configurations. Each example shows the XML for the scenario that it describes.

**Topics**
+ [Archiving all objects within one day after creation](#lifecycle-config-ex1)
+ [Disabling Lifecycle rules temporarily](#lifecycle-config-conceptual-ex2)
+ [Tiering down the storage class over an object's lifetime](#lifecycle-config-conceptual-ex3)
+ [Specifying multiple rules](#lifecycle-config-conceptual-ex4)
+ [Specifying a lifecycle rule for a versioning-enabled bucket](#lifecycle-config-conceptual-ex6)
+ [Removing expired object delete markers in a versioning-enabled bucket](#lifecycle-config-conceptual-ex7)
+ [Lifecycle configuration to abort multipart uploads](#lc-expire-mpu)
+ [Expiring noncurrent objects that have no data](#lc-size-rules)
+ [Example: Allowing objects smaller than 128 KB to be transitioned](#lc-small-objects)

## Archiving all objects within one day after creation
<a name="lifecycle-config-ex1"></a>

Each S3 Lifecycle rule includes a filter that you can use to identify a subset of objects in your bucket to which the S3 Lifecycle rule applies. The following S3 Lifecycle configurations show examples of how you can specify a filter.
+ In this S3 Lifecycle configuration rule, the filter specifies a key prefix (`tax/`). Therefore, the rule applies to objects with the key name prefix `tax/`, such as `tax/doc1.txt` and `tax/doc2.txt`.

  The rule specifies two actions that direct Amazon S3 to do the following:
  + Transition objects to the S3 Glacier Flexible Retrieval storage class 365 days (one year) after creation.
  + Delete objects (the `Expiration` action) 3,650 days (10 years) after creation.

  ```
  <LifecycleConfiguration>
    <Rule>
      <ID>Transition and Expiration Rule</ID>
      <Filter>
         <Prefix>tax/</Prefix>
      </Filter>
      <Status>Enabled</Status>
      <Transition>
        <Days>365</Days>
        <StorageClass>GLACIER</StorageClass>
      </Transition>
      <Expiration>
        <Days>3650</Days>
      </Expiration>
    </Rule>
  </LifecycleConfiguration>
  ```

  Instead of specifying the object age in terms of days after creation, you can specify a date for each action. However, you can't use both `Date` and `Days` in the same rule. 
+ If you want the S3 Lifecycle rule to apply to all objects in the bucket, specify an empty prefix. In the following configuration, the rule specifies a `Transition` action that directs Amazon S3 to transition objects to the S3 Glacier Flexible Retrieval storage class 0 days after creation. This rule means that the objects are eligible for archival to S3 Glacier Flexible Retrieval at midnight UTC following creation. For more information about lifecycle constraints, see [Constraints and considerations for transitions](lifecycle-transition-general-considerations.md#lifecycle-configuration-constraints).

  ```
  <LifecycleConfiguration>
    <Rule>
      <ID>Archive all object same-day upon creation</ID>
      <Filter>
        <Prefix></Prefix>
      </Filter>
      <Status>Enabled</Status>
      <Transition>
        <Days>0</Days>
        <StorageClass>GLACIER</StorageClass>
      </Transition>
    </Rule>
  </LifecycleConfiguration>
  ```
+ You can specify zero or one key name prefix and zero or more object tags in a filter. The following example code applies the S3 Lifecycle rule to the subset of objects with the `tax/` key prefix that also have two tags with a specific key and value. When you specify more than one filter element, you must include the `<And>` element as shown (Amazon S3 applies a logical `AND` to combine the specified filter conditions).

  ```
  ...
  <Filter>
     <And>
        <Prefix>tax/</Prefix>
        <Tag>
           <Key>key1</Key>
           <Value>value1</Value>
        </Tag>
        <Tag>
           <Key>key2</Key>
           <Value>value2</Value>
        </Tag>
      </And>
  </Filter>
  ...
  ```

  
+ You can filter objects based only on tags. For example, the following S3 Lifecycle rule applies to objects that have the two specified tags (it does not specify any prefix).

  ```
  ...
  <Filter>
     <And>
        <Tag>
           <Key>key1</Key>
           <Value>value1</Value>
        </Tag>
        <Tag>
           <Key>key2</Key>
           <Value>value2</Value>
        </Tag>
      </And>
  </Filter>
  ...
  ```

  

**Important**  
When you have multiple rules in an S3 Lifecycle configuration, an object can become eligible for multiple S3 Lifecycle actions on the same day. In such cases, Amazon S3 follows these general rules:  
Permanent deletion takes precedence over transition.
Transition takes precedence over creation of [delete markers](DeleteMarker.md).
When an object is eligible for both an S3 Glacier Flexible Retrieval and S3 Standard-IA (or S3 One Zone-IA) transition, Amazon S3 chooses the S3 Glacier Flexible Retrieval transition.
 For examples, see [Examples of overlapping filters and conflicting lifecycle actions](lifecycle-conflicts.md#lifecycle-config-conceptual-ex5). 



## Disabling Lifecycle rules temporarily
<a name="lifecycle-config-conceptual-ex2"></a>

You can temporarily disable an S3 Lifecycle rule by using the `Status` element. Disabling a rule can be useful if you want to test new rules or troubleshoot issues with your configuration without overwriting your existing rules. The following S3 Lifecycle configuration specifies two rules:
+ Rule 1 directs Amazon S3 to transition objects with the `logs/` prefix to the S3 Glacier Flexible Retrieval storage class soon after creation. 
+ Rule 2 directs Amazon S3 to transition objects with the `documents/` prefix to the S3 Glacier Flexible Retrieval storage class soon after creation. 

In the configuration, Rule 1 is enabled and Rule 2 is disabled. Amazon S3 ignores the disabled rule.

```
<LifecycleConfiguration>
  <Rule>
    <ID>Rule1</ID>
    <Filter>
      <Prefix>logs/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <Days>0</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
  </Rule>
  <Rule>
    <ID>Rule2</ID>
    <Filter>
      <Prefix>documents/</Prefix>
    </Filter>
    <Status>Disabled</Status>
    <Transition>
      <Days>0</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
  </Rule>
</LifecycleConfiguration>
```
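
If you manage the configuration programmatically, note that a lifecycle PUT replaces the entire configuration, so you disable one rule by rewriting its `Status` and re-uploading the whole rule set. The following Boto3-style sketch (dict shapes only, no AWS call) shows the idea, using the rule IDs from the example above:

```python
def set_rule_status(config, rule_id, status):
    """Set one rule's Status; you then re-upload the whole configuration,
    because a lifecycle PUT replaces all existing rules."""
    for rule in config["Rules"]:
        if rule["ID"] == rule_id:
            rule["Status"] = status
    return config

config = {"Rules": [{"ID": "Rule1", "Status": "Enabled"},
                    {"ID": "Rule2", "Status": "Enabled"}]}
set_rule_status(config, "Rule2", "Disabled")
print([r["Status"] for r in config["Rules"]])  # ['Enabled', 'Disabled']
```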

## Tiering down the storage class over an object's lifetime
<a name="lifecycle-config-conceptual-ex3"></a>

In this example, you use S3 Lifecycle configuration to tier down the storage class of objects over their lifetime. Tiering down can help reduce storage costs. For more information about pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

The following S3 Lifecycle configuration specifies a rule that applies to objects with the key name prefix `logs/`. The rule specifies the following actions:
+ Two transition actions:
  + Transition objects to the S3 Standard-IA storage class 30 days after creation.
  + Transition objects to the S3 Glacier Flexible Retrieval storage class 90 days after creation.
+ One expiration action that directs Amazon S3 to delete objects a year after creation.

```
<LifecycleConfiguration>
  <Rule>
    <ID>example-id</ID>
    <Filter>
       <Prefix>logs/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <Days>30</Days>
      <StorageClass>STANDARD_IA</StorageClass>
    </Transition>
    <Transition>
      <Days>90</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Days>365</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```

**Note**  
You can use one rule to describe all S3 Lifecycle actions if all actions apply to the same set of objects (identified by the filter). Otherwise, you can add multiple rules with each specifying a different filter.

**Important**  
When you have multiple rules in an S3 Lifecycle configuration, an object can become eligible for multiple S3 Lifecycle actions on the same day. In such cases, Amazon S3 follows these general rules:  
Permanent deletion takes precedence over transition.
Transition takes precedence over creation of [delete markers](DeleteMarker.md).
When an object is eligible for both an S3 Glacier Flexible Retrieval and S3 Standard-IA (or S3 One Zone-IA) transition, Amazon S3 chooses the S3 Glacier Flexible Retrieval transition.
 For examples, see [Examples of overlapping filters and conflicting lifecycle actions](lifecycle-conflicts.md#lifecycle-config-conceptual-ex5). 

## Specifying multiple rules
<a name="lifecycle-config-conceptual-ex4"></a>



You can specify multiple rules if you want different S3 Lifecycle actions for different objects. The following S3 Lifecycle configuration has two rules:
+ Rule 1 applies to objects with the key name prefix `classA/`. It directs Amazon S3 to transition objects to the S3 Glacier Flexible Retrieval storage class one year after creation and expire these objects 10 years after creation.
+ Rule 2 applies to objects with key name prefix `classB/`. It directs Amazon S3 to transition objects to the S3 Standard-IA storage class 90 days after creation and delete them one year after creation.

```
<LifecycleConfiguration>
    <Rule>
        <ID>ClassADocRule</ID>
        <Filter>
           <Prefix>classA/</Prefix>        
        </Filter>
        <Status>Enabled</Status>
        <Transition>        
           <Days>365</Days>        
           <StorageClass>GLACIER</StorageClass>       
        </Transition>    
        <Expiration>
             <Days>3650</Days>
        </Expiration>
    </Rule>
    <Rule>
        <ID>ClassBDocRule</ID>
        <Filter>
            <Prefix>classB/</Prefix>
        </Filter>
        <Status>Enabled</Status>
        <Transition>        
           <Days>90</Days>        
           <StorageClass>STANDARD_IA</StorageClass>       
        </Transition>    
        <Expiration>
             <Days>365</Days>
        </Expiration>
    </Rule>
</LifecycleConfiguration>
```

**Important**  
When you have multiple rules in an S3 Lifecycle configuration, an object can become eligible for multiple S3 Lifecycle actions on the same day. In such cases, Amazon S3 follows these general rules:  
Permanent deletion takes precedence over transition.
Transition takes precedence over creation of [delete markers](DeleteMarker.md).
When an object is eligible for both an S3 Glacier Flexible Retrieval and S3 Standard-IA (or S3 One Zone-IA) transition, Amazon S3 chooses the S3 Glacier Flexible Retrieval transition.
 For examples, see [Examples of overlapping filters and conflicting lifecycle actions](lifecycle-conflicts.md#lifecycle-config-conceptual-ex5). 

## Specifying a lifecycle rule for a versioning-enabled bucket
<a name="lifecycle-config-conceptual-ex6"></a>

Suppose that you have a versioning-enabled bucket, which means that for each object, you have a current version and zero or more noncurrent versions. (For more information about S3 Versioning, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).) 

In the following example, you want to maintain one year's worth of history, and retain 5 noncurrent versions. S3 Lifecycle configurations support keeping 1 to 100 versions of any object. Be aware that more than 5 newer noncurrent versions must exist before Amazon S3 can expire a given version. Amazon S3 will permanently delete any additional noncurrent versions beyond the specified number to retain. For the deletion to occur, both the `NoncurrentDays` and the `NewerNoncurrentVersions` values must be exceeded.

To save storage costs, you want to move noncurrent versions to S3 Glacier Flexible Retrieval 30 days after they become noncurrent (assuming that these noncurrent objects are cold data for which you don't need real-time access). In addition, you expect the frequency of access of the current versions to diminish 90 days after creation, so you might choose to move these objects to the S3 Standard-IA storage class.

```
<LifecycleConfiguration>
    <Rule>
        <ID>sample-rule</ID>
        <Filter>
           <Prefix></Prefix>
        </Filter>
        <Status>Enabled</Status>
        <Transition>
           <Days>90</Days>
           <StorageClass>STANDARD_IA</StorageClass>
        </Transition>
        <NoncurrentVersionTransition>
            <NoncurrentDays>30</NoncurrentDays>
            <StorageClass>GLACIER</StorageClass>
        </NoncurrentVersionTransition>
        <NoncurrentVersionExpiration>
            <NewerNoncurrentVersions>5</NewerNoncurrentVersions>
            <NoncurrentDays>365</NoncurrentDays>
        </NoncurrentVersionExpiration>
    </Rule>
</LifecycleConfiguration>
```
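
The "both values must be exceeded" condition for `NoncurrentVersionExpiration` can be sketched as a predicate. This mirrors the wording above and is illustrative only; Amazon S3 evaluates eligibility server-side:

```python
def noncurrent_version_expires(noncurrent_age_days, newer_noncurrent_versions,
                               noncurrent_days=365, retained_versions=5):
    """A noncurrent version is eligible for expiration only when both the
    age threshold and the newer-versions threshold are exceeded."""
    return (noncurrent_age_days > noncurrent_days
            and newer_noncurrent_versions > retained_versions)

print(noncurrent_version_expires(400, 6))  # True: both thresholds exceeded
print(noncurrent_version_expires(400, 5))  # False: not enough newer versions
print(noncurrent_version_expires(30, 6))   # False: not noncurrent long enough
```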

## Removing expired object delete markers in a versioning-enabled bucket
<a name="lifecycle-config-conceptual-ex7"></a>



A versioning-enabled bucket has one current version and zero or more noncurrent versions for each object. When you delete an object, note the following:
+ If you don't specify a version ID in your delete request, Amazon S3 adds a delete marker instead of deleting the object. The current object version becomes noncurrent, and the delete marker becomes the current version. 
+ If you specify a version ID in your delete request, Amazon S3 deletes the object version permanently (a delete marker isn't created).
+ A delete marker with zero noncurrent versions is referred to as an *expired object delete marker*. 

This example shows a scenario that can create expired object delete markers in your bucket, and how you can use S3 Lifecycle configuration to direct Amazon S3 to remove the expired object delete markers.

Suppose that you write an S3 Lifecycle configuration that uses the `NoncurrentVersionExpiration` action to remove noncurrent versions 30 days after they become noncurrent and to retain 10 noncurrent versions, as shown in the following example. Be aware that more than 10 newer noncurrent versions must exist before Amazon S3 can expire a given version. Amazon S3 will permanently delete any additional noncurrent versions beyond the specified number to retain. For the deletion to occur, both the `NoncurrentDays` and the `NewerNoncurrentVersions` values must be exceeded.

```
<LifecycleConfiguration>
    <Rule>
        ...
        <NoncurrentVersionExpiration>     
            <NewerNoncurrentVersions>10</NewerNoncurrentVersions>
            <NoncurrentDays>30</NoncurrentDays>    
        </NoncurrentVersionExpiration>
    </Rule>
</LifecycleConfiguration>
```

The `NoncurrentVersionExpiration` action doesn't apply to the current object versions. It removes only the noncurrent versions.

For current object versions, you have the following options to manage their lifetime, depending on whether the current object versions follow a well-defined lifecycle: 
+ **The current object versions follow a well-defined lifecycle.**

  In this case, you can use an S3 Lifecycle configuration with the `Expiration` action to direct Amazon S3 to remove the current versions, as shown in the following example.

  ```
  <LifecycleConfiguration>
      <Rule>
          ...
          <Expiration>
             <Days>60</Days>
          </Expiration>
          <NoncurrentVersionExpiration>     
              <NewerNoncurrentVersions>10</NewerNoncurrentVersions>
              <NoncurrentDays>30</NoncurrentDays>    
          </NoncurrentVersionExpiration>
      </Rule>
  </LifecycleConfiguration>
  ```

  In this example, Amazon S3 removes current versions 60 days after they're created by adding a delete marker for each of the current object versions. This process makes the current version noncurrent, and the delete marker becomes the current version. For more information, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md). 
**Note**  
You can't specify both a `Days` and an `ExpiredObjectDeleteMarker` tag on the same rule. When you specify the `Days` tag, Amazon S3 automatically performs `ExpiredObjectDeleteMarker` cleanup when the delete markers are old enough to satisfy the age criteria. To clean up delete markers as soon as they become the only version, create a separate rule with only the `ExpiredObjectDeleteMarker` tag.

  The `NoncurrentVersionExpiration` action in the same S3 Lifecycle configuration removes noncurrent objects 30 days after they become noncurrent. Thus, in this example, all object versions are permanently removed 90 days after object creation. Be aware that in this example, more than 10 newer noncurrent versions must exist before Amazon S3 can expire a given version. Amazon S3 will permanently delete any additional noncurrent versions beyond the specified number to retain. For the deletion to occur, both the `NoncurrentDays` and the `NewerNoncurrentVersions` values must be exceeded. 

  Although expired object delete markers are created during this process, Amazon S3 detects and removes the expired object delete markers for you. 
+ **The current object versions don't have a well-defined lifecycle.** 

  In this case, you might delete the objects manually when you no longer need them. Each manual delete creates a delete marker and leaves one or more noncurrent versions behind. If your S3 Lifecycle configuration with the `NoncurrentVersionExpiration` action removes all the noncurrent versions, you now have expired object delete markers.

  Specifically for this scenario, S3 Lifecycle configuration provides an `Expiration` action that you can use to remove the expired object delete markers.

  

  ```
  <LifecycleConfiguration>
      <Rule>
          <ID>Rule 1</ID>
          <Filter>
            <Prefix>logs/</Prefix>
          </Filter>
          <Status>Enabled</Status>
          <Expiration>
             <ExpiredObjectDeleteMarker>true</ExpiredObjectDeleteMarker>
          </Expiration>
          <NoncurrentVersionExpiration>     
              <NewerNoncurrentVersions>10</NewerNoncurrentVersions>
              <NoncurrentDays>30</NoncurrentDays>    
          </NoncurrentVersionExpiration>
      </Rule>
  </LifecycleConfiguration>
  ```

By setting the `ExpiredObjectDeleteMarker` element to `true` in the `Expiration` action, you direct Amazon S3 to remove the expired object delete markers.

**Note**  
When you use the `ExpiredObjectDeleteMarker` S3 Lifecycle action, the rule cannot specify a tag-based filter.
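As a sketch of how the two noncurrent-version conditions combine, the following hypothetical Python helper (not part of any AWS SDK) models which noncurrent versions a rule with `NoncurrentDays` and `NewerNoncurrentVersions` would permit Amazon S3 to delete:

```python
from datetime import datetime, timedelta, timezone

def eligible_noncurrent_versions(noncurrent_since, noncurrent_days,
                                 newer_noncurrent_versions, now):
    """Return indexes of noncurrent versions eligible for permanent deletion.

    `noncurrent_since` maps each version to the time it became noncurrent,
    ordered newest first. A version is eligible only when BOTH conditions
    hold: it has been noncurrent longer than NoncurrentDays, and at least
    NewerNoncurrentVersions newer noncurrent versions exist (so that many
    are retained).
    """
    eligible = []
    for index, since in enumerate(noncurrent_since):
        old_enough = (now - since) > timedelta(days=noncurrent_days)
        # `index` equals the number of newer noncurrent versions.
        beyond_retained = index >= newer_noncurrent_versions
        if old_enough and beyond_retained:
            eligible.append(index)
    return eligible
```

For example, with 12 versions that all became noncurrent 40 days ago, `NoncurrentDays` of 30, and `NewerNoncurrentVersions` of 10, only the two oldest versions qualify.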

## Lifecycle configuration to abort multipart uploads
<a name="lc-expire-mpu"></a>

You can use the Amazon S3 multipart upload REST API operations to upload large objects in parts. For more information about multipart uploads, see [Uploading and copying objects using multipart upload in Amazon S3](mpuoverview.md). 

By using an S3 Lifecycle configuration, you can direct Amazon S3 to stop incomplete multipart uploads (identified by the key name prefix specified in the rule) if they aren't completed within a specified number of days after initiation. When Amazon S3 aborts a multipart upload, it deletes all the parts associated with the multipart upload. This process helps control your storage costs by ensuring that you don't have incomplete multipart uploads with parts that are stored in Amazon S3. 

**Note**  
When you use the `AbortIncompleteMultipartUpload` S3 Lifecycle action, the rule cannot specify a tag-based filter.

The following is an example S3 Lifecycle configuration that specifies a rule with the `AbortIncompleteMultipartUpload` action. This action directs Amazon S3 to stop incomplete multipart uploads seven days after initiation.

```
<LifecycleConfiguration>
    <Rule>
        <ID>sample-rule</ID>
        <Filter>
           <Prefix>SomeKeyPrefix/</Prefix>
        </Filter>
        <Status>Enabled</Status>
        <AbortIncompleteMultipartUpload>
          <DaysAfterInitiation>7</DaysAfterInitiation>
        </AbortIncompleteMultipartUpload>
    </Rule>
</LifecycleConfiguration>
```
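The aging check behind this rule can be modeled with a short, hypothetical Python helper (the actual abort is performed asynchronously by Amazon S3, so this is only an approximation of eligibility):

```python
from datetime import datetime, timedelta, timezone

def is_abort_eligible(initiated, days_after_initiation, now):
    """True once an incomplete multipart upload has aged past the rule's
    DaysAfterInitiation threshold. A simplified model: the service itself
    runs asynchronously, so the actual abort may lag this moment."""
    return now - initiated >= timedelta(days=days_after_initiation)
```

With `DaysAfterInitiation` set to 7, an upload initiated at noon on June 1 becomes eligible at noon on June 8.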

## Lifecycle rules based on object size
<a name="lc-size-rules"></a>

You can create rules that transition objects based only on their size. You can specify a minimum size (`ObjectSizeGreaterThan`) or a maximum size (`ObjectSizeLessThan`), or you can specify a range of object sizes in bytes. When using more than one filter, such as a prefix and size rule, you must wrap the filters in an `<And>` element.

```
<LifecycleConfiguration>
  <Rule>
    <ID>Transition with a prefix and based on size</ID>
    <Filter>
       <And>
          <Prefix>tax/</Prefix>
          <ObjectSizeGreaterThan>500</ObjectSizeGreaterThan>
       </And>   
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <Days>365</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
  </Rule>
</LifecycleConfiguration>
```

If you're specifying a range by using both the `ObjectSizeGreaterThan` and `ObjectSizeLessThan` elements, the maximum object size must be larger than the minimum object size. The following example shows how to specify objects in a range between 500 bytes and 64,000 bytes. Note that both the `ObjectSizeGreaterThan` and `ObjectSizeLessThan` filters are exclusive: an object must be strictly larger than 500 bytes and strictly smaller than 64,000 bytes to match. For more information, see [Filter element](intro-lifecycle-rules.md#intro-lifecycle-rules-filter).

```
<LifecycleConfiguration>
    <Rule>
        ...
        <Filter>
           <And>
              <ObjectSizeGreaterThan>500</ObjectSizeGreaterThan>
              <ObjectSizeLessThan>64000</ObjectSizeLessThan>
           </And>
        </Filter>
    </Rule>
</LifecycleConfiguration>
```
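To make the exclusive bounds concrete, here is a hypothetical Python predicate modeling the size-range filter above; the boundary values 500 and 64,000 themselves do not match:

```python
def matches_size_range(object_size, greater_than=500, less_than=64000):
    """Model of the ObjectSizeGreaterThan/ObjectSizeLessThan pair.
    Both bounds are exclusive: the named byte values do not match."""
    return greater_than < object_size < less_than
```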

You can also create rules to specifically expire noncurrent objects that have no data, including noncurrent delete marker objects created in a versioning-enabled bucket. The following example uses the `NoncurrentVersionExpiration` action to remove noncurrent versions 30 days after they become noncurrent and to retain 10 noncurrent versions. This example also uses the `ObjectSizeLessThan` element to filter only objects with no data. 

Be aware that more than 10 newer noncurrent versions must exist before Amazon S3 can expire a given version. Amazon S3 will permanently delete any additional noncurrent versions beyond the specified number to retain. For the deletion to occur, both the `NoncurrentDays` and the `NewerNoncurrentVersions` values must be exceeded. 

```
<LifecycleConfiguration>
  <Rule>
    <ID>Expire noncurrent with size less than 1 byte</ID>
    <Filter>
       <ObjectSizeLessThan>1</ObjectSizeLessThan>
    </Filter>
    <Status>Enabled</Status>
    <NoncurrentVersionExpiration>     
       <NewerNoncurrentVersions>10</NewerNoncurrentVersions>
       <NoncurrentDays>30</NoncurrentDays>
    </NoncurrentVersionExpiration>
  </Rule>
</LifecycleConfiguration>
```

## Example: Allowing objects smaller than 128 KB to be transitioned
<a name="lc-small-objects"></a>

Amazon S3 applies a default behavior to your Lifecycle configuration that prevents objects smaller than 128 KB from being transitioned to any storage class. To allow smaller objects to transition, add a minimum size (`ObjectSizeGreaterThan`) or maximum size (`ObjectSizeLessThan`) filter that specifies a size below 128 KB. The following example allows any object larger than 1 byte, including objects smaller than 128 KB, to transition to the S3 Glacier Instant Retrieval storage class:

```
<LifecycleConfiguration>
  <Rule>
    <ID>Allow small object transitions</ID>
    <Filter>
          <ObjectSizeGreaterThan>1</ObjectSizeGreaterThan>
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <Days>365</Days>
      <StorageClass>GLACIER_IR</StorageClass>
    </Transition>
  </Rule>
</LifecycleConfiguration>
```
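The interaction between the 128 KB default and a rule's own size filter can be sketched with a hypothetical Python helper (an illustration, not an AWS API):

```python
DEFAULT_MIN_TRANSITION_BYTES = 128 * 1024  # default: no transition below 128 KB

def transition_allowed(object_size, object_size_greater_than=None):
    """Sketch of how a rule's own ObjectSizeGreaterThan filter takes
    precedence over the 128 KB default minimum transition size."""
    if object_size_greater_than is not None:
        # A custom size filter replaces the default threshold.
        return object_size > object_size_greater_than
    return object_size >= DEFAULT_MIN_TRANSITION_BYTES
```

Without a filter, a 64 KB object is blocked from transitioning; with `ObjectSizeGreaterThan` set to 1, it qualifies.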

**Note**  
In September 2024, Amazon S3 updated the default transition behavior for small objects, as follows:  
**Updated default transition behavior** — Starting September 2024, the default behavior prevents objects smaller than 128 KB from being transitioned to any storage class.
**Previous default transition behavior** — Before September 2024, the default behavior allowed objects smaller than 128 KB to be transitioned only to the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes.
Configurations created before September 2024 retain the previous transition behavior unless you modify them. That is, if you create, edit, or delete rules, the default transition behavior for your configuration changes to the updated behavior. If your use case requires it, you can change the default transition behavior so that objects smaller than 128 KB transition to S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive. To do this, use the optional `x-amz-transition-object-size-minimum-default` header in a [PutBucketLifecycleConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycleConfiguration.html) request.

The following example shows how to use the `x-amz-transition-object-size-minimum-default` header in a [PutBucketLifecycleConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycleConfiguration.html) request to apply the `varies_by_storage_class` default transition behavior to an S3 Lifecycle configuration. This behavior allows objects smaller than 128 KB to transition to the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes. By default, all other storage classes still prevent transitions of objects smaller than 128 KB. You can still use custom filters to change the minimum transition size for any storage class. Custom filters always take precedence over the default transition behavior:

```
PUT /?lifecycle HTTP/1.1
Host: amzn-s3-demo-bucket.s3.amazonaws.com
x-amz-transition-object-size-minimum-default: varies_by_storage_class
<?xml version="1.0" encoding="UTF-8"?>
...
```

# Troubleshooting Amazon S3 Lifecycle issues
<a name="troubleshoot-lifecycle"></a>

The following information can help you troubleshoot common issues with Amazon S3 Lifecycle rules.

**Topics**
+ [I ran a list operation on my bucket and saw objects that I thought were expired or transitioned by a lifecycle rule.](#troubleshoot-lifecycle-1)
+ [How do I monitor the actions taken by my lifecycle rules?](#troubleshoot-lifecycle-2)
+ [My S3 object count still increases, even after setting up lifecycle rules on a versioning-enabled bucket.](#troubleshoot-lifecycle-3)
+ [How do I empty my S3 bucket by using lifecycle rules?](#troubleshoot-lifecycle-4)
+ [My Amazon S3 bill increased after transitioning objects to a lower-cost storage class.](#troubleshoot-lifecycle-5)
+ [I’ve updated my bucket policy, but my S3 objects are still being deleted by expired lifecycle rules.](#troubleshoot-lifecycle-6)
+ [Can I recover S3 objects that are expired by S3 Lifecycle rules?](#troubleshoot-lifecycle-7)
+ [Why are my expiration and transition lifecycle actions not occurring?](#troubleshoot-lifecycle-failures)
+ [How can I exclude a prefix from my lifecycle rule?](#troubleshoot-lifecycle-8)
+ [How can I include multiple prefixes in my lifecycle rule?](#troubleshoot-lifecycle-9)

## I ran a list operation on my bucket and saw objects that I thought were expired or transitioned by a lifecycle rule.
<a name="troubleshoot-lifecycle-1"></a>

S3 Lifecycle [object transitions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html) and [object expirations](https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-expire-general-considerations.html) are asynchronous operations. Therefore, there might be a delay between the time that the objects are eligible for expiration or transition and the time that they are actually transitioned or expired. Changes in billing are applied as soon as the lifecycle rule is satisfied, even if the action isn't complete. The exception to this behavior is if you have a lifecycle rule set to transition to the S3 Intelligent-Tiering storage class. In that case, billing changes don't occur until the object has transitioned to S3 Intelligent-Tiering. For more information about changes in billing, see [Setting lifecycle configuration on a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-set-lifecycle-configuration-intro.html).

**Note**  
Amazon S3 doesn’t transition objects that are smaller than 128 KB from the S3 Standard or S3 Standard-IA storage class to the S3 Intelligent-Tiering, S3 Standard-IA, or S3 One Zone-IA storage class.

## How do I monitor the actions taken by my lifecycle rules?
<a name="troubleshoot-lifecycle-2"></a>

To monitor actions taken by lifecycle rules, you can use the following features: 
+ **S3 Event Notifications** – You can set up [S3 Event Notifications](https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-configure-notification.html) so that you're notified of any S3 Lifecycle expiration or transition events.
+ **S3 server access logs** – You can enable server access logs for your S3 buckets to capture S3 Lifecycle actions, such as object transitions to another storage class or object expirations. For more information, see [Lifecycle and logging](https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-and-other-bucket-config.html#lifecycle-general-considerations-logging).

To view the changes in your storage caused by lifecycle actions on a daily basis, we recommend using [S3 Storage Lens dashboards](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens_basics_metrics_recommendations.html#storage_lens_basics_dashboards) instead of using Amazon CloudWatch metrics. In your Storage Lens dashboard, you can view the following metrics, which monitor the object count or size:
+ **Current version bytes**
+ **Current version object count**
+ **Noncurrent version bytes**
+ **Noncurrent version object count**
+ **Delete marker object count**
+ **Delete marker storage bytes**
+ **Incomplete multipart upload bytes**
+ **Incomplete multipart upload object count**

## My S3 object count still increases, even after setting up lifecycle rules on a versioning-enabled bucket.
<a name="troubleshoot-lifecycle-3"></a>

In a [versioning-enabled bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html#versioning-states), when an object is expired, the object isn't completely deleted from the bucket. Instead, a [delete marker](https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeleteMarker.html) is created as the newest version of the object. Delete markers are still counted as objects. Therefore, if a lifecycle rule is created to expire only the current versions, then the object count in the S3 bucket actually increases instead of going down.

For example, let's say an S3 bucket is versioning-enabled with 100 objects, and a lifecycle rule is set to expire current versions of the object after 7 days. After the seventh day, the object count increases to 200 because 100 delete markers are created in addition to the 100 original objects, which are now the noncurrent versions. For more information about S3 Lifecycle configuration rule actions for versioning-enabled buckets, see [Setting lifecycle configuration on a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-set-lifecycle-configuration-intro.html).
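The arithmetic in this example can be sketched as a tiny Python function (hypothetical, for illustration only):

```python
def object_count_after_current_version_expiry(current_objects):
    """In a versioning-enabled bucket, expiring a current version adds a
    delete marker rather than removing data, so each expired object now
    contributes two entries: a delete marker plus a noncurrent version."""
    delete_markers = current_objects
    noncurrent_versions = current_objects
    return delete_markers + noncurrent_versions
```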

To permanently remove objects, add an additional lifecycle configuration to delete the previous versions of the objects, expired delete markers, and incomplete multipart uploads. For instructions on how to create new lifecycle rules, see [Setting lifecycle configuration on a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-set-lifecycle-configuration-intro.html).

**Note**  
Amazon S3 rounds the transition or expiration date of an object to midnight UTC the next day.   
When evaluating objects for lifecycle actions, Amazon S3 uses the object creation time in UTC. For example, consider a nonversioned bucket with a lifecycle rule that's configured to expire objects after one day. Suppose that an object was created on January 1 at 17:05 Pacific Daylight Time (PDT), which corresponds to January 2 at 00:05 UTC. The object becomes one day old at 00:05 UTC on January 3, which makes it eligible for expiration when S3 Lifecycle evaluates objects at 00:00 UTC on January 4.  
Because Amazon S3 lifecycle actions occur asynchronously, there might be some delay between the date specified in the lifecycle rule and the actual physical transition of the object. For more information, see [Transition or expiration delay](how-to-set-lifecycle-configuration-intro.md#lifecycle-action-delay).  
For more information, see [Lifecycle rules: Based on an object's age](https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html#intro-lifecycle-rules-number-of-days).
For S3 objects that are protected by Object Lock, current versions are not permanently deleted. Instead, a delete marker is added to the objects, making them noncurrent. Noncurrent versions are then preserved and are not permanently expired.
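The rounding described in the note above can be modeled with a hypothetical Python helper; under the stated assumptions it reproduces the January example:

```python
from datetime import datetime, timedelta, timezone

def expiration_eligibility_midnight_utc(created_utc, days):
    """Approximate the midnight-UTC time at which an object becomes
    eligible for expiration: the object must first be `days` old, and
    S3 rounds the date to midnight UTC of the following day."""
    aged_at = created_utc + timedelta(days=days)
    # Round up to the next midnight UTC after the object reaches the age.
    return (aged_at + timedelta(days=1)).replace(
        hour=0, minute=0, second=0, microsecond=0)
```

For an object created on January 2 at 00:05 UTC with a one-day expiration rule, the helper returns midnight UTC on January 4, matching the example above.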

## How do I empty my S3 bucket by using lifecycle rules?
<a name="troubleshoot-lifecycle-4"></a>

S3 Lifecycle rules are an effective tool to [empty an S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/empty-bucket.html) with millions of objects. To delete a large number of objects from your S3 bucket, make sure to use these two pairs of lifecycle rules:
+ **Expire current versions of objects** and **Permanently delete previous versions of objects**
+ **Delete expired delete markers** and **Delete incomplete multipart uploads**

For steps on how to create a lifecycle configuration rule, see [Setting lifecycle configuration on a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-set-lifecycle-configuration-intro.html).

**Note**  
For S3 objects that are protected by Object Lock, current versions are not permanently deleted. Instead, a delete marker is added to the objects, making them noncurrent. Noncurrent versions are then preserved and are not permanently expired.

## My Amazon S3 bill increased after transitioning objects to a lower-cost storage class.
<a name="troubleshoot-lifecycle-5"></a>

There are several reasons that your bill might increase after transitioning objects to a lower-cost storage class: 
+ S3 Glacier overhead charges for small objects

  For each object that is transitioned to S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive, a total of 40 KB of chargeable overhead is added. Of that 40 KB, 8 KB is used to store the name of the object and metadata, and is charged at S3 Standard rates. The remaining 32 KB is used for indexing and related metadata, and is charged at S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive rates.

  Therefore, if you're storing many small objects, we don't recommend using lifecycle transitions. Instead, to reduce overhead charges, consider aggregating many small objects into a smaller number of large objects before storing them in Amazon S3. For more information about cost considerations, see [Transitioning to the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes (object archival)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html#before-deciding-to-archive-objects).
+ Minimum storage charges

  Some S3 storage classes have minimum storage-duration requirements. Objects that are deleted, overwritten, or transitioned from those classes before the minimum duration is satisfied are charged a prorated early transition or deletion fee. These minimum storage-duration requirements are as follows: 
  + S3 Standard-IA and S3 One Zone-IA – 30 days
  + S3 Glacier Flexible Retrieval and S3 Glacier Instant Retrieval – 90 days
  + S3 Glacier Deep Archive – 180 days

  For more information about these requirements, see the *Constraints* section of [Transitioning objects using S3 Lifecycle](https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html). For general S3 pricing information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/) and the [AWS Pricing Calculator](https://calculator.aws/#/addService/S3).
+ Lifecycle transition costs

  Each time an object is transitioned to a different storage class by a lifecycle rule, Amazon S3 counts that transition as one transition request. The costs for these transition requests are in addition to the costs of these storage classes. If you plan to transition a large number of objects, consider the request costs when transitioning to a lower tier. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).
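As a rough illustration of the small-object overhead charges described above, the following hypothetical Python helper computes the extra billable storage (in GB) added by the per-object 8 KB and 32 KB components; multiply the results by your per-GB rates from the S3 pricing page:

```python
def archive_overhead_gb(object_count):
    """Chargeable overhead added when objects are archived to S3 Glacier
    Flexible Retrieval or S3 Glacier Deep Archive: per object, 8 KB is
    billed at S3 Standard rates and 32 KB at archive rates.
    Returns (standard_gb, archive_gb)."""
    kb_per_gb = 1024 * 1024
    standard_gb = object_count * 8 / kb_per_gb
    archive_gb = object_count * 32 / kb_per_gb
    return standard_gb, archive_gb
```

For example, archiving one million small objects adds roughly 7.6 GB billed at S3 Standard rates and 30.5 GB billed at archive rates, regardless of the objects' own sizes.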

## I’ve updated my bucket policy, but my S3 objects are still being deleted by expired lifecycle rules.
<a name="troubleshoot-lifecycle-6"></a>

`Deny` statements in a bucket policy don't prevent the expiration of the objects defined in a lifecycle rule. Lifecycle actions (such as transitions or expirations) don't use the S3 `DeleteObject` operation. Instead, S3 Lifecycle actions are performed by using internal S3 endpoints. (For more information, see [Lifecycle and logging](https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-and-other-bucket-config.html#lifecycle-general-considerations-logging).) 

To prevent your lifecycle rule from taking any action, you must edit, delete, or [disable the rule](https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-configuration-examples.html#lifecycle-config-conceptual-ex2).

## Can I recover S3 objects that are expired by S3 Lifecycle rules?
<a name="troubleshoot-lifecycle-7"></a>

The only way to recover objects that are expired by S3 Lifecycle is through versioning, which must be in place before the objects become eligible for expiration. You cannot undo the expiration operations that are performed by lifecycle rules. If objects are permanently deleted by the S3 Lifecycle rules that are in place, you cannot recover these objects. To enable versioning on a bucket, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).

If you have applied versioning to the bucket and the noncurrent versions of the objects are still intact, you can [restore previous versions of the expired objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/RestoringPreviousVersions.html). For more information about the behavior of S3 Lifecycle rule actions and versioning states, see the *Lifecycle actions and bucket versioning state* table in [Elements to describe lifecycle actions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/intro-lifecycle-rules.html#non-current-days-calculations).

**Note**  
If the S3 bucket is protected by [AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html) or [S3 Replication](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html), you might also be able to use these features to recover your expired objects.

## Why are my expiration and transition lifecycle actions not occurring?
<a name="troubleshoot-lifecycle-failures"></a>

For a versioning-enabled or versioning-suspended bucket, the following considerations guide how Amazon S3 handles the Expiration action:
+ Object expiration applies only to an object's current version (it has no impact on noncurrent object versions).
+ Amazon S3 doesn't take any action if there are one or more object versions and the delete marker is the current version.
+ Amazon S3 doesn't take any action on noncurrent versions of objects that have S3 Object Lock applied.
+ For objects with a `PENDING` or `FAILED` replication status, Amazon S3 doesn't take any action on current or noncurrent versions of objects.

Lifecycle storage class transitions have the following constraints:
+ By default, objects smaller than 128 KB won't transition to any storage class.
+ Objects must be stored for at least 30 days before transitioning to S3 Standard-IA or S3 One Zone-IA.
+ For versioning enabled or versioning suspended buckets, objects with a `PENDING` or `FAILED` replication status can't be transitioned.

## How can I exclude a prefix from my lifecycle rule?
<a name="troubleshoot-lifecycle-8"></a>

S3 Lifecycle doesn't support excluding prefixes in your rules. Instead, use tags to tag all of the objects that you want to include in the rule. For more information about using tags in your lifecycle rules, see [Archiving all objects within one day after creation](lifecycle-configuration-examples.md#lifecycle-config-ex1).

## How can I include multiple prefixes in my lifecycle rule?
<a name="troubleshoot-lifecycle-9"></a>

S3 Lifecycle doesn't support including multiple prefixes in your rules. Instead, use tags to tag all of the objects that you want to include in the rule. For more information about using tags in your lifecycle rules, see [Archiving all objects within one day after creation](lifecycle-configuration-examples.md#lifecycle-config-ex1).

However, if you have one or more prefixes that start with the same characters, you can include all of those prefixes in your rule by specifying a partial prefix with no trailing slash (`/`) in the filter. For example, suppose that you have these prefixes:

```
sales1999/
sales2000/
sales2001/
```

To include all three prefixes in your rule, specify `<Prefix>sales</Prefix>` in your lifecycle rule. 
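Because prefix matching is a simple leading-string comparison, the behavior can be sketched with a hypothetical one-line Python predicate:

```python
def rule_applies(key, rule_prefix="sales"):
    """Lifecycle prefix matching is a plain string-prefix test, so the
    partial prefix `sales` (no trailing slash) covers sales1999/,
    sales2000/, and sales2001/."""
    return key.startswith(rule_prefix)
```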