

# Understanding and managing Amazon S3 storage classes
<a name="storage-class-intro"></a>

Each object in Amazon S3 has a storage class associated with it. By default, objects in S3 are stored in the S3 Standard storage class. However, Amazon S3 offers a range of other storage classes for the objects that you store. You choose a class depending on your use case scenario and performance access requirements. Choosing a storage class designed for your use case lets you optimize storage costs, performance, and availability for your objects. All of these storage classes offer high durability.

The following sections provide details of the various storage classes and how to set the storage class for your objects.

**Topics**
+ [Storage classes for frequently accessed objects](#sc-freq-data-access)
+ [Storage class for automatically optimizing data with changing or unknown access patterns](#sc-dynamic-data-access)
+ [Storage classes for infrequently accessed objects](#sc-infreq-data-access)
+ [Storage classes for rarely accessed objects](#sc-glacier)
+ [Storage class for Amazon S3 on Outposts](#s3-outposts)
+ [Comparing the Amazon S3 storage classes](#sc-compare)
+ [Setting the storage class of an object](sc-howtoset.md)
+ [Amazon S3 analytics – Storage Class Analysis](analytics-storage-class.md)
+ [Managing storage costs with Amazon S3 Intelligent-Tiering](intelligent-tiering.md)
+ [Understanding S3 Glacier storage classes for long-term data storage](glacier-storage-classes.md)
+ [Working with archived objects](archived-objects.md)

## Storage classes for frequently accessed objects
<a name="sc-freq-data-access"></a>

For performance-sensitive use cases (those that require millisecond access time) and frequently accessed data, Amazon S3 provides the following storage classes:
+ **S3 Standard** (`STANDARD`) – The default storage class. If you don't specify the storage class when you upload an object, Amazon S3 assigns the S3 Standard storage class. To help you optimize costs between S3 Standard and S3 Standard-IA, you can use [Amazon S3 analytics – Storage Class Analysis](analytics-storage-class.md).
+ **S3 Express One Zone** (`EXPRESS_ONEZONE`) – Amazon S3 Express One Zone is a high-performance, single-zone Amazon S3 storage class that is purpose-built to deliver consistent, single-digit millisecond data access for your most latency-sensitive applications. S3 Express One Zone is the lowest latency cloud object storage class available today, with data access speed up to 10x faster and with request costs 50 percent lower than S3 Standard. With S3 Express One Zone, your data is redundantly stored on multiple devices within a single Availability Zone. For more information, see [S3 Express One Zone](directory-bucket-high-performance.md#s3-express-one-zone).
+ **Reduced Redundancy Storage** (`REDUCED_REDUNDANCY`) – The Reduced Redundancy Storage (RRS) class is designed for noncritical, reproducible data that can be stored with less redundancy than the S3 Standard storage class.
**Important**  
We recommend not using this storage class. The S3 Standard storage class is more cost-effective. 

  For durability, RRS objects have an average annual expected loss of 0.01 percent of objects. If an RRS object is lost, when requests are made to that object, Amazon S3 returns a 405 error.

## Storage class for automatically optimizing data with changing or unknown access patterns
<a name="sc-dynamic-data-access"></a>

**S3 Intelligent-Tiering** (`INTELLIGENT_TIERING`) is an Amazon S3 storage class that's designed to optimize storage costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead. S3 Intelligent-Tiering is the only cloud storage class that delivers automatic cost savings by moving data on a granular object level between access tiers when access patterns change. S3 Intelligent-Tiering is the ideal storage class when you want to optimize storage costs for data that has unknown or changing access patterns. There are no retrieval fees for S3 Intelligent-Tiering. 

For a small monthly object monitoring and automation fee, S3 Intelligent-Tiering monitors access patterns and automatically moves objects that have not been accessed to lower-cost access tiers. S3 Intelligent-Tiering delivers automatic storage cost savings in three low-latency and high-throughput access tiers. For data that can be accessed asynchronously, you can choose to activate automatic archiving capabilities within the S3 Intelligent-Tiering storage class. S3 Intelligent-Tiering is designed for 99.9% availability and 99.999999999% durability.

S3 Intelligent-Tiering automatically stores objects in three access tiers: 
+ **Frequent Access** – Objects that are uploaded or transitioned to S3 Intelligent-Tiering are automatically stored in the Frequent Access tier.
+ **Infrequent Access** – S3 Intelligent-Tiering moves objects that have not been accessed in 30 consecutive days to the Infrequent Access tier.
+ **Archive Instant Access** – With S3 Intelligent-Tiering, any existing objects that have not been accessed for 90 consecutive days are automatically moved to the Archive Instant Access tier. 

In addition to these three tiers, S3 Intelligent-Tiering offers two optional archive access tiers: 
+ **Archive Access** – S3 Intelligent-Tiering provides you with the option to activate the Archive Access tier for data that can be accessed asynchronously. After activation, the Archive Access tier automatically archives objects that have not been accessed for a minimum of 90 consecutive days.
+ **Deep Archive Access** – S3 Intelligent-Tiering provides you with the option to activate the Deep Archive Access tier for data that can be accessed asynchronously. After activation, the Deep Archive Access tier automatically archives objects that have not been accessed for a minimum of 180 consecutive days.

**Note**  
Activate the Archive Access tier at its 90-day minimum only if you want to bypass the Archive Instant Access tier. The Archive Access tier delivers slightly lower-cost storage with minute-to-hour retrieval times, whereas the Archive Instant Access tier delivers millisecond access and high-throughput performance.
Activate the Archive Access and Deep Archive Access tiers only if your application can access objects asynchronously. If the object that you're retrieving is stored in the Archive Access or Deep Archive Access tier, first restore the object by using `RestoreObject`.

You can [move newly created data to S3 Intelligent-Tiering](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-intelligent-tiering.html#moving-data-to-int-tiering), setting it as your default storage class. You can also choose to activate one or both of the archive access tiers by using the [PutBucketIntelligentTieringConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketIntelligentTieringConfiguration.html) API operation, the AWS CLI, or the Amazon S3 console. For more information about using S3 Intelligent-Tiering and activating the archive access tiers, see [Using S3 Intelligent-Tiering](using-intelligent-tiering.md).
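As a sketch of the AWS CLI approach, the following command activates both optional archive tiers for a bucket. The bucket name and configuration ID are illustrative; 90 and 180 days are the minimum thresholds for each tier:

```
# Activate the Archive Access and Deep Archive Access tiers (illustrative names).
# Objects not accessed for 90 days move to Archive Access; after 180 days,
# they move to Deep Archive Access.
aws s3api put-bucket-intelligent-tiering-configuration \
  --bucket amzn-s3-demo-bucket1 \
  --id ExampleTieringConfig \
  --intelligent-tiering-configuration '{
    "Id": "ExampleTieringConfig",
    "Status": "Enabled",
    "Tierings": [
      {"Days": 90,  "AccessTier": "ARCHIVE_ACCESS"},
      {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"}
    ]
  }'
```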

To access objects in the Archive Access or Deep Archive Access tiers, you first need to restore them. For more information, see [Restoring objects from the S3 Intelligent-Tiering Archive Access and Deep Archive Access tiers](intelligent-tiering-managing.md#restore-data-from-int-tier-archive).

**Note**  
If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects are always stored in the Frequent Access tier. For more information about S3 Intelligent-Tiering, see [S3 Intelligent-Tiering access tiers](intelligent-tiering-overview.md#intel-tiering-tier-definition).

## Storage classes for infrequently accessed objects
<a name="sc-infreq-data-access"></a>

The **S3 Standard-IA** and **S3 One Zone-IA** storage classes are designed for long-lived and infrequently accessed data. (IA stands for *infrequent access*.) S3 Standard-IA and S3 One Zone-IA objects are available for millisecond access (similar to the S3 Standard storage class). Amazon S3 charges a retrieval fee for these objects, so they are most suitable for infrequently accessed data. For pricing information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

For example, you might choose the S3 Standard-IA and S3 One Zone-IA storage classes to do the following:
+ Store backups. 
+ Store older data that is accessed infrequently but that still requires millisecond access. For example, when you upload data, you might choose the S3 Standard storage class, and use lifecycle configuration to tell Amazon S3 to transition the objects to the S3 Standard-IA or S3 One Zone-IA class.

  For more information about lifecycle management, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md).
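As a sketch of such a lifecycle configuration, the following AWS CLI command (the bucket name, rule ID, and prefix are illustrative) transitions objects under a `logs/` prefix to S3 Standard-IA 30 days after creation:

```
# Transition objects under logs/ to S3 Standard-IA after the 30-day minimum
# (bucket, rule ID, and prefix are illustrative).
aws s3api put-bucket-lifecycle-configuration \
  --bucket amzn-s3-demo-bucket1 \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "MoveLogsToStandardIA",
        "Filter": {"Prefix": "logs/"},
        "Status": "Enabled",
        "Transitions": [
          {"Days": 30, "StorageClass": "STANDARD_IA"}
        ]
      }
    ]
  }'
```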

**Note**  
The S3 Standard-IA and S3 One Zone-IA storage classes are suitable for objects larger than 128 KB that you plan to store for at least 30 days. If an object is less than 128 KB, Amazon S3 charges you for 128 KB. If you delete an object before the end of the 30-day minimum storage duration period, you are charged for 30 days. Objects that are deleted, overwritten, or transitioned to a different storage class before 30 days will incur the normal storage usage charge plus a pro-rated charge for the remainder of the 30-day minimum. For pricing information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

These storage classes differ as follows:
+ **S3 Standard-IA** (`STANDARD_IA`) – Amazon S3 stores the object data redundantly across multiple geographically separated Availability Zones (similar to the S3 Standard storage class). S3 Standard-IA objects are resilient to the loss of an Availability Zone. This storage class offers greater availability and resiliency than the S3 One Zone-IA class. To help you optimize costs between S3 Standard and S3 Standard-IA, you can use [Amazon S3 analytics – Storage Class Analysis](analytics-storage-class.md).
+ **S3 One Zone-IA** (`ONEZONE_IA`) – Amazon S3 stores the object data in only one Availability Zone, which makes it less expensive than S3 Standard-IA. However, the data is not resilient to the physical loss of the Availability Zone resulting from disasters, such as earthquakes and floods. The S3 One Zone-IA storage class is as durable as S3 Standard-IA, but it is less available and less resilient. For a comparison of storage class durability and availability, see [Comparing the Amazon S3 storage classes](#sc-compare) at the end of this section. For pricing information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). For data residency and isolation use cases, you can create directory buckets in AWS Local Zones and use the S3 Express One Zone (`EXPRESS_ONEZONE`) and S3 One Zone-IA (`ONEZONE_IA`) storage classes. For more information about directory buckets in Local Zones, see [Data residency workloads](directory-bucket-data-residency.md). 

We recommend the following:
+ **S3 Standard-IA** (`STANDARD_IA`) – Use for your primary or only copy of data that can't be re-created. 
+ **S3 One Zone-IA** (`ONEZONE_IA`) – Use if you can re-create the data if the Availability Zone fails, or for object replicas when configuring S3 Cross-Region Replication (CRR). Also, for data residency and isolation use cases, you can create directory buckets in AWS Local Zones and use the S3 One Zone-IA storage class.

## Storage classes for rarely accessed objects
<a name="sc-glacier"></a>

The **S3 Glacier Instant Retrieval** (`GLACIER_IR`), **S3 Glacier Flexible Retrieval** (`GLACIER`), and **S3 Glacier Deep Archive** (`DEEP_ARCHIVE`) storage classes are designed for low-cost, long-term data storage and data archiving. These storage classes require minimum storage durations and retrieval fees, making them most effective for rarely accessed data. For more information about S3 Glacier storage classes, see [Understanding S3 Glacier storage classes for long-term data storage](glacier-storage-classes.md).

Amazon S3 provides the following S3 Glacier storage classes:
+ **S3 Glacier Instant Retrieval** (`GLACIER_IR`) – Use for long-term data that's rarely accessed and requires millisecond retrieval. Data in this storage class is available for real-time access.
+ **S3 Glacier Flexible Retrieval** (`GLACIER`) – Use for archives where portions of the data might need to be retrieved in minutes. Data in this storage class is archived, and not available for real-time access.
+ **S3 Glacier Deep Archive** (`DEEP_ARCHIVE`) – Use for archiving data that rarely needs to be accessed. Data in this storage class is archived, and not available for real-time access.

### Retrieving archived objects
<a name="sc-glacier-restore"></a>

You can set the storage class of an object to S3 Glacier Flexible Retrieval (`GLACIER`) or S3 Glacier Deep Archive (`DEEP_ARCHIVE`) in the same ways that you do for the other storage classes as described in the section [Setting the storage class of an object](sc-howtoset.md). However, S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive objects are archived, and not available for real-time access. For more information, see [Understanding archival storage in S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive](archival-storage.md).

**Note**  
When you use S3 Glacier storage classes, your objects remain in Amazon S3. You can't access them directly through the separate Amazon S3 Glacier service. For information about the Amazon S3 Glacier service, see the [Amazon S3 Glacier Developer Guide](https://docs.aws.amazon.com/amazonglacier/latest/dev/).
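As a hedged example, initiating a restore of an S3 Glacier Flexible Retrieval object with the AWS CLI might look like the following. The bucket and key are illustrative; `Days` controls how long the temporary restored copy remains available, and `Tier` selects the retrieval speed (`Expedited`, `Standard`, or `Bulk`):

```
# Request a temporary restored copy of an archived object
# (bucket and key are illustrative).
aws s3api restore-object \
  --bucket amzn-s3-demo-bucket1 \
  --key dir-1/my_images.tar.bz2 \
  --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}}'
```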

## Storage class for Amazon S3 on Outposts
<a name="s3-outposts"></a>

With Amazon S3 on Outposts, you can create S3 buckets on your AWS Outposts resources and store and retrieve objects on-premises for applications that require local data access, local data processing, and data residency. You can use the same API operations and features on AWS Outposts as you do on Amazon S3, including access policies, encryption, and tagging. You can use S3 on Outposts through the AWS Management Console, AWS CLI, AWS SDKs, or REST API.

S3 on Outposts provides a new storage class, S3 Outposts (`OUTPOSTS`). The S3 Outposts storage class is available only for objects stored in buckets on Outposts. If you try to use this storage class with an S3 bucket in an AWS Region, an `InvalidStorageClass` error occurs. In addition, if you try to use other S3 storage classes with objects stored in S3 on Outposts buckets, the same error occurs. 

Objects stored in the S3 Outposts (`OUTPOSTS`) storage class are always encrypted by using server-side encryption with Amazon S3 managed encryption keys (SSE-S3). For more information, see [Using server-side encryption with Amazon S3 managed keys (SSE-S3)](UsingServerSideEncryption.md). 

You can also explicitly choose to encrypt objects stored in the S3 Outposts storage class by using server-side encryption with customer-provided encryption keys (SSE-C). For more information, see [Using server-side encryption with customer-provided keys (SSE-C)](ServerSideEncryptionCustomerKeys.md). 

**Note**  
S3 on Outposts doesn't support server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS).

For more information about S3 on Outposts, see [What is S3 on Outposts](https://docs.aws.amazon.com/AmazonS3/latest/s3-outposts/S3onOutposts.html) in the *Amazon S3 on Outposts User Guide*.

## Comparing the Amazon S3 storage classes
<a name="sc-compare"></a>

The following table compares the storage classes, including their availability, durability, minimum storage duration, and other considerations.



| Storage class | Designed for | Durability (designed for) | Availability (designed for) | Availability Zones | Min storage duration | Min billable object size | Other considerations  | 
| --- | --- | --- | --- | --- | --- | --- | --- | 
|  S3 Standard (`STANDARD`)  |  Frequently accessed data (more than once a month) with millisecond access  |  99.999999999%   |  99.99%  |  >= 3  |  None  |  None  |  None  | 
|  S3 Standard-IA (`STANDARD_IA`)  |  Long-lived, infrequently accessed data (once a month) with millisecond access  |  99.999999999%   |  99.9%  |  >= 3  |  30 days  |  128 KB  |  Per-GB retrieval fees apply.   | 
|  S3 Intelligent-Tiering (`INTELLIGENT_TIERING`)  |  Data with unknown, changing, or unpredictable access patterns  |  99.999999999%  |  99.9%  |  >= 3  |  None  |  None  |  Monitoring and automation fees per object apply. No retrieval fees. Objects smaller than 128 KB are not monitored and are always stored in the Frequent Access tier. For more information, see [How S3 Intelligent-Tiering works](intelligent-tiering-overview.md).  | 
|  S3 One Zone-IA (`ONEZONE_IA`)  |  Recreatable, infrequently accessed data (once a month) with millisecond access  |  99.999999999%   |  99.5%  |  1  |  30 days  |  128 KB  |  Per-GB retrieval fees apply. Not resilient to the loss of the Availability Zone.  | 
|  S3 Express One Zone (`EXPRESS_ONEZONE`)  |  Single-digit millisecond data access for latency-sensitive applications within a single AWS Availability Zone  |  99.999999999%   |  99.95%  |  1  |  None  |  None  |  S3 Express One Zone (`EXPRESS_ONEZONE`) objects are stored in a single AWS Availability Zone that you choose.   | 
|  S3 Glacier Instant Retrieval (`GLACIER_IR`)  | Long-lived, archive data accessed once a quarter with millisecond access | 99.999999999%  |  99.9%  |  >= 3  |  90 days  |  128 KB  | Per-GB retrieval fees apply. | 
|  S3 Glacier Flexible Retrieval (`GLACIER`)  | Long-lived archive data accessed once a year with retrieval times of minutes to hours | 99.999999999%  |  99.99% (after you restore objects)  |  >= 3  |  90 days  |  NA¹  | Per-GB retrieval fees apply. You must first restore archived objects before you can access them. For information, see [Restoring an archived object](restoring-objects.md). | 
|  S3 Glacier Deep Archive (`DEEP_ARCHIVE`)  | Long-lived archive data accessed less than once a year with retrieval times of hours | 99.999999999%  |  99.99% (after you restore objects)  |  >= 3  |  180 days  |  NA²  | Per-GB retrieval fees apply. You must first restore archived objects before you can access them. For information, see [Restoring an archived object](restoring-objects.md). | 
|  Reduced Redundancy Storage (`REDUCED_REDUNDANCY`) Not recommended  |  Noncritical, frequently accessed data with millisecond access  |  99.99%   |  99.99%  |  >= 3  |  None  |  None  |  None  | 

¹ S3 Glacier Flexible Retrieval requires 40 KB of additional metadata for each archived object. This includes 32 KB of metadata charged at the S3 Glacier Flexible Retrieval rate (required to identify and retrieve your data), and an additional 8 KB of data charged at the S3 Standard rate. The S3 Standard rate is required to maintain the user-defined name and metadata for objects archived to S3 Glacier Flexible Retrieval. For more information about storage classes, see [Amazon S3 storage classes](https://aws.amazon.com/s3/storage-classes/).

² S3 Glacier Deep Archive requires 40 KB of additional metadata for each archived object. This includes 32 KB of metadata charged at the S3 Glacier Deep Archive rate (required to identify and retrieve your data), and an additional 8 KB of data charged at the S3 Standard rate. The S3 Standard rate is required to maintain the user-defined name and metadata for objects archived to S3 Glacier Deep Archive. For more information about storage classes, see [Amazon S3 storage classes](https://aws.amazon.com/s3/storage-classes/).

Be aware that all of the storage classes except for S3 One Zone-IA (`ONEZONE_IA`) and S3 Express One Zone (`EXPRESS_ONEZONE`) are designed to be resilient to the physical loss of an Availability Zone resulting from disasters. Also, consider costs, in addition to the performance requirements of your application scenario. For storage class pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

# Setting the storage class of an object
<a name="sc-howtoset"></a>

You can specify a storage class for an object when you upload it. If you don't, Amazon S3 uses the default Amazon S3 Standard storage class for objects in general purpose buckets. You can also change the storage class of an object that's already stored in an Amazon S3 general purpose bucket to any other storage class using the Amazon S3 console, AWS SDKs, or the AWS Command Line Interface (AWS CLI). All of these approaches use Amazon S3 API operations to send requests to Amazon S3.

**Note**  
You can't change the storage class of objects stored in directory buckets.

You can direct Amazon S3 to change the storage class of objects automatically by adding an S3 Lifecycle configuration to a bucket. For more information, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md).

When setting up an S3 Replication configuration, you can set the storage class for replicated objects to any other storage class. However, you can't replicate objects that are stored in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes. For more information, see [Replication configuration file elements](replication-add-config.md).

When setting the storage class programmatically, you provide the value of the storage class. The following is a list of console names for storage classes with their corresponding API values:
+ **Reduced Redundancy Storage** – `REDUCED_REDUNDANCY`
+ **S3 Express One Zone** – `EXPRESS_ONEZONE`
+ **S3 Glacier Deep Archive** – `DEEP_ARCHIVE`
+ **S3 Glacier Flexible Retrieval** – `GLACIER`
+ **S3 Glacier Instant Retrieval** – `GLACIER_IR`
+ **S3 Intelligent-Tiering** – `INTELLIGENT_TIERING`
+ **S3 One Zone-IA** – `ONEZONE_IA`
+ **S3 Standard** – `STANDARD`
+ **S3 Standard-IA** – `STANDARD_IA`

## Setting the storage class on a new object
<a name="setting-storage-class"></a>

To set the storage class when you upload an object, you can use the following methods.

### Using the S3 console
<a name="setting-storage-class-console"></a>

To set the storage class when uploading a new object in the console:

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to upload your folders or files to.

1. Choose **Upload**.

1. In the **Upload** window, choose **Properties**.

1. Under **Storage class**, choose a storage class for the files that you're uploading.

1. (Optional) Configure any additional properties for the files that you're uploading. For more information, see [Uploading objects](upload-objects.md).

1. In the **Upload** window, do one of the following:
   + Drag files and folders to the Upload window. 
   + Choose **Add file** or **Add folder**, choose the files or folders to upload, and choose **Open**.

1. At the bottom of the page, choose **Upload**.

### Using the REST API
<a name="setting-storage-class-rest"></a>

To specify the storage class for an object when you create it with the `PutObject`, `POST Object`, or `CreateMultipartUpload` API operations, add the `x-amz-storage-class` request header. If you don't add this header, Amazon S3 uses the default S3 Standard (`STANDARD`) storage class.

This example request uses the [`PutObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html) operation to set the storage class of a new object to S3 Intelligent-Tiering:

```
PUT /my-image.jpg HTTP/1.1
Host: amzn-s3-demo-bucket1.s3.Region.amazonaws.com 
Date: Wed, 12 Oct 2009 17:50:00 GMT 
Authorization: authorization string 
Content-Type: image/jpeg 
Content-Length: 11434 
Expect: 100-continue 
x-amz-storage-class: INTELLIGENT_TIERING
```

### Using the AWS CLI
<a name="setting-storage-class-cli"></a>

This example uses the `put-object` command to upload the file *my\_images.tar.bz2* to **amzn-s3-demo-bucket1** in the `GLACIER` storage class:

```
aws s3api put-object --bucket amzn-s3-demo-bucket1 --key dir-1/my_images.tar.bz2 --storage-class GLACIER --body my_images.tar.bz2
```

If the object size is more than 5 GB, use the following command to set the storage class:

```
aws s3 cp large_test_file s3://amzn-s3-demo-bucket1 --storage-class GLACIER
```
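To confirm which storage class was applied, you can retrieve the object's metadata with `head-object`; for example:

```
# Returns the object's metadata, including a StorageClass field
# for non-STANDARD objects (key is illustrative).
aws s3api head-object --bucket amzn-s3-demo-bucket1 --key dir-1/my_images.tar.bz2
```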

## Changing the storage class for an existing object
<a name="changing-storage-class"></a>

To change the storage class of an object that's already stored in Amazon S3, you can use the following methods.

### Using the S3 console
<a name="changing-storage-class-console"></a>

You can change an object's storage class by using the Amazon S3 console if the object size is less than 5 GB. If the object is larger, we recommend adding an S3 Lifecycle configuration to change its storage class.

To change the storage class of an object in the console:

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket containing the objects you want to change.

1. Select the check box to the left of the names of the objects you want to change.

1. On the **Actions** menu, choose **Edit storage class**.

1. Select from the storage classes available for your object.

1. Under **Additional copy settings**, choose whether you want to **Copy source settings**, **Don’t specify settings**, or **Specify settings**. **Copy source settings** is the default option. If you only want to copy the object without the source settings attributes, choose **Don’t specify settings**. Choose **Specify settings** to specify settings for storage class, ACLs, object tags, metadata, server-side encryption, and additional checksums.

1. Choose **Save changes** in the bottom-right corner. Amazon S3 saves your changes.

### Using the REST API
<a name="changing-storage-class-rest"></a>

To change the storage class of an existing object with the REST API, rewrite the object and specify the new storage class.

This example request uses the [`PutObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html) operation to set the storage class of an existing object to S3 Intelligent-Tiering:

```
PUT /my-image.jpg HTTP/1.1
Host: amzn-s3-demo-bucket1.s3.Region.amazonaws.com 
Date: Wed, 12 Oct 2009 17:50:00 GMT 
Authorization: authorization string 
Content-Type: image/jpeg 
Content-Length: 11434 
Expect: 100-continue 
x-amz-storage-class: INTELLIGENT_TIERING
```

### Using the AWS CLI
<a name="changing-storage-class-cli"></a>

This example uses the `cp` command to change the storage class of an existing object to the `DEEP_ARCHIVE` storage class by copying the object over itself:

```
aws s3 cp object_S3_URI object_S3_URI --storage-class DEEP_ARCHIVE
```

## Restricting access policy permissions to a specific storage class
<a name="restricting-storage-class"></a>

When you grant access policy permissions for Amazon S3 operations, you can use the `s3:x-amz-storage-class` condition key to restrict which storage class to use when storing uploaded objects. For example, when you grant the `s3:PutObject` permission, you can restrict object uploads to a specific storage class. For an example policy, see [Example: Restricting object uploads to objects with a specific storage class](security_iam_service-with-iam.md#example-storage-class-condition-key). 
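As a sketch of this pattern (the bucket name and statement ID are illustrative), the following bucket policy denies `s3:PutObject` requests unless they specify the `STANDARD_IA` storage class:

```
# Deny uploads that don't specify the STANDARD_IA storage class
# (bucket name and Sid are illustrative).
aws s3api put-bucket-policy \
  --bucket amzn-s3-demo-bucket1 \
  --policy '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowOnlyStandardIAUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*",
        "Condition": {
          "StringNotEquals": {"s3:x-amz-storage-class": "STANDARD_IA"}
        }
      }
    ]
  }'
```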

For more information about using conditions in policies and a complete list of Amazon S3 condition keys, see the following topics:
+ [ Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*

  For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).
+ [Bucket policy examples using condition keys](amazon-s3-policy-keys.md)

# Amazon S3 analytics – Storage Class Analysis
<a name="analytics-storage-class"></a>

By using Amazon S3 analytics *Storage Class Analysis*, you can analyze storage access patterns to help you decide when to transition the right data to the right storage class. This Amazon S3 analytics feature observes data access patterns to help you determine when to transition less frequently accessed STANDARD storage to the STANDARD_IA (IA, for infrequent access) storage class. For more information about storage classes, see [Understanding and managing Amazon S3 storage classes](storage-class-intro.md). 

After storage class analysis observes the infrequent access patterns of a filtered set of data over a period of time, you can use the analysis results to help you improve your lifecycle configurations. You can configure storage class analysis to analyze all the objects in a bucket. Or, you can configure filters to group objects together for analysis by common prefix (that is, objects that have names that begin with a common string), by object tags, or by both prefix and tags. You'll most likely find that filtering by object groups is the best way to benefit from storage class analysis. 

**Important**  
Storage class analysis provides recommendations only for transitions from the S3 Standard storage class to the S3 Standard-IA storage class.

You can have up to 1,000 storage class analysis filters per bucket, and you receive a separate analysis for each filter. Multiple filter configurations allow you to analyze specific groups of objects to improve your lifecycle configurations that transition objects to STANDARD_IA. 

Storage class analysis provides storage usage visualizations in the Amazon S3 console that are updated daily. You can also export this daily usage data to an S3 bucket and view it in a spreadsheet application or with business intelligence tools, such as Amazon QuickSight.

There are costs associated with storage class analysis. For pricing information, see the *Management and insights* section of [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

**Topics**
+ [How do I set up storage class analysis?](#analytics-storage-class-how-to-set-up)
+ [How do I use storage class analysis?](#analytics-storage-class-contents)
+ [How can I export storage class analysis data?](#analytics-storage-class-export-to-file)
+ [Configuring storage class analysis](configure-analytics-storage-class.md)

## How do I set up storage class analysis?
<a name="analytics-storage-class-how-to-set-up"></a>

You set up storage class analysis by configuring what object data you want to analyze. You can configure storage class analysis to do the following:
+ **Analyze the entire contents of a bucket.**

  You'll receive an analysis for all the objects in the bucket.
+ **Analyze objects grouped together by prefix and tags.**

  You can configure filters that group objects together for analysis by prefix, or by object tags, or by a combination of prefix and tags. You receive a separate analysis for each filter you configure. You can have multiple filter configurations per bucket, up to 1,000. 
+ **Export analysis data.** 

  When you configure storage class analysis for a bucket or filter, you can choose to have the analysis data exported to a file each day. The analysis for the day is added to the file to form a historic analysis log for the configured filter. The file is updated daily at the destination of your choice. When selecting data to export, you specify a destination bucket and optional destination prefix where the file is written.

You can use the Amazon S3 console, the REST API, or the AWS CLI or AWS SDKs to configure storage class analysis.
+ For information about how to configure storage class analysis in the Amazon S3 console, see [Configuring storage class analysis](configure-analytics-storage-class.md).
+ To use the Amazon S3 API, use the [PutBucketAnalyticsConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTAnalyticsConfig.html) REST API, or the equivalent, from the AWS CLI or AWS SDKs. 
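
As a sketch, a filter-plus-export configuration passed to the AWS CLI's `put-bucket-analytics-configuration` command might look like the following JSON. The filter name, prefix, tag, and bucket names are placeholder values; substitute your own.

```json
{
  "Id": "docs-analysis",
  "Filter": {
    "And": {
      "Prefix": "documents/",
      "Tags": [{ "Key": "priority", "Value": "high" }]
    }
  },
  "StorageClassAnalysis": {
    "DataExport": {
      "OutputSchemaVersion": "V_1",
      "Destination": {
        "S3BucketDestination": {
          "Format": "CSV",
          "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
          "Prefix": "analysis-exports/"
        }
      }
    }
  }
}
```

You would then apply it with a command such as `aws s3api put-bucket-analytics-configuration --bucket amzn-s3-demo-bucket --id docs-analysis --analytics-configuration file://config.json`, where the bucket name is again a placeholder.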

## How do I use storage class analysis?
<a name="analytics-storage-class-contents"></a>

You use storage class analysis to observe your data access patterns over time to gather information to help you improve the lifecycle management of your STANDARD_IA storage. After you configure a filter, you'll start seeing data analysis based on the filter in the Amazon S3 console in 24 to 48 hours. However, storage class analysis observes the access patterns of a filtered data set for 30 days or longer to gather information for analysis before giving a result. The analysis continues to run after the initial result and updates the result as the access patterns change.

When you first configure a filter, the Amazon S3 console may take a moment to analyze your data.

Storage class analysis observes the access patterns of a filtered object data set for 30 days or longer to gather enough information for the analysis. After storage class analysis has gathered sufficient information, you'll see a message in the Amazon S3 console that analysis is complete.

When performing the analysis for infrequently accessed objects, storage class analysis looks at the filtered set of objects grouped together based on age since they were uploaded to Amazon S3. Storage class analysis determines if the age group is infrequently accessed by looking at the following factors for the filtered data set:
+ Objects in the STANDARD storage class that are larger than 128 KB.
+ How much average total storage you have per age group.
+ Average number of bytes transferred out (not frequency) per age group.
+ Analytics export data only includes requests with data relevant to storage class analysis. This might cause differences in the number of requests, and the total upload and request bytes, compared to what is shown in storage metrics or tracked by your own internal systems.
+ Failed GET and PUT requests are not counted for the analysis. However, you will see failed requests in storage metrics. 

**How Much of My Storage did I Retrieve?**

The Amazon S3 console graphs how much of the storage in the filtered data set has been retrieved for the observation period.

**What Percentage of My Storage did I Retrieve?**

The Amazon S3 console also graphs what percentage of the storage in the filtered data set has been retrieved for the observation period.

As stated earlier in this topic, when you are performing the analysis for infrequently accessed objects, storage class analysis looks at the filtered set of objects grouped together based on the age since they were uploaded to Amazon S3. The storage class analysis uses the following predefined object age groups: 
+ Amazon S3 Objects less than 15 days old
+ Amazon S3 Objects 15-29 days old
+ Amazon S3 Objects 30-44 days old
+ Amazon S3 Objects 45-59 days old
+ Amazon S3 Objects 60-74 days old
+ Amazon S3 Objects 75-89 days old
+ Amazon S3 Objects 90-119 days old
+ Amazon S3 Objects 120-149 days old
+ Amazon S3 Objects 150-179 days old
+ Amazon S3 Objects 180-364 days old
+ Amazon S3 Objects 365-729 days old
+ Amazon S3 Objects 730 days and older
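
The predefined age-group boundaries above can be expressed as a small lookup. The following is an illustrative sketch, not part of any AWS SDK:

```python
# Map an object's age in days to the predefined storage class analysis
# age group. The boundaries mirror the list above; this helper is an
# illustration only.
AGE_GROUP_BOUNDARIES = [
    (15, "0-14 days"),
    (30, "15-29 days"),
    (45, "30-44 days"),
    (60, "45-59 days"),
    (75, "60-74 days"),
    (90, "75-89 days"),
    (120, "90-119 days"),
    (150, "120-149 days"),
    (180, "150-179 days"),
    (365, "180-364 days"),
    (730, "365-729 days"),
]

def age_group(age_in_days: int) -> str:
    """Return the predefined age-group label for an object's age."""
    for upper_bound, label in AGE_GROUP_BOUNDARIES:
        if age_in_days < upper_bound:
            return label
    return "730 days and older"
```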

Usually it takes about 30 days of observing access patterns to gather enough information for an analysis result. It might take longer than 30 days, depending on the unique access pattern of your data. However, after you configure a filter you'll start seeing data analysis based on the filter in the Amazon S3 console in 24 to 48 hours. You can see analysis on a daily basis of object access broken down by object age group in the Amazon S3 console. 

**How Much of My Storage is Infrequently Accessed?**

The Amazon S3 console shows the access patterns grouped by the predefined object age groups. The **Frequently accessed** or **Infrequently accessed** text shown is meant as a visual aid to help you in the lifecycle creation process.

## How can I export storage class analysis data?
<a name="analytics-storage-class-export-to-file"></a>

You can choose to have storage class analysis export analysis reports to a comma-separated values (CSV) flat file. Reports are updated daily and are based on the object age group filters you configure. When using the Amazon S3 console, you can choose the export report option when you create a filter. When selecting data export, you specify a destination bucket and an optional destination prefix where the file is written. You can export the data to a destination bucket in a different account. The destination bucket must be in the same AWS Region as the bucket that you configure to be analyzed.

You must create a bucket policy on the destination bucket to grant permissions to Amazon S3 to verify what AWS account owns the bucket and to write objects to the bucket in the defined location. For an example policy, see [Grant permissions for S3 Inventory and S3 analytics](example-bucket-policies.md#example-bucket-policies-s3-inventory-1).

After you configure storage class analysis reports, you start receiving the exported report daily after the first 24 hours. After that, Amazon S3 continues monitoring and providing daily exports. 

You can open the CSV file in a spreadsheet application or import it into other applications such as [Amazon QuickSight](https://docs.aws.amazon.com/quicksight/latest/user/welcome.html). For information about using Amazon S3 objects with Amazon QuickSight, see the [Amazon QuickSight User Guide](https://docs.aws.amazon.com/quicksight/latest/user/create-a-data-set-s3.html).

Data in the exported file is sorted by date within object age group, as shown in the following examples. If the storage class is STANDARD, the row also contains data for the columns `ObjectAgeForSIATransition` and `RecommendedObjectAgeForSIATransition`.

![\[Screen shot of exported storage class analysis data sorted by date within object age group.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/storage-class-analysis-export-file1.png)


At the end of the report the object age group is given as ALL. The ALL rows contain cumulative totals, including objects smaller than 128 KB, for all the age groups for that day.

![\[Screen shot of exported storage class analysis data with ALL rows containing cumulative totals.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/storage-class-analysis-export-file3.png)


The next section describes the columns used in the report.

### Exported file layout
<a name="analytics-storage-class-export-file-layout"></a>

The following table describes the Amazon S3 storage class analysis export file layout.



| Column name | Dimension/Metric | DataType | Description | 
| --- | --- | --- | --- | 
| Date  | Dimension | String  | Date when the record was processed. Format is MM-DD-YYYY. | 
| ConfigId  | Dimension | String  | Value entered as the filter name when adding the filter configuration.  | 
| Filter | Dimension | String  | The `Filter` field is intentionally set to an empty value. | 
| StorageClass | Dimension | String  | Storage class of the data. | 
| ObjectAge | Dimension | String  | Age group for the objects in the filter. In addition to the 12 different age groups (0-14 days, 15-29 days, 30-44 days, 45-59 days, 60-74 days, 75-89 days, 90-119 days, 120-149 days, 150-179 days, 180-364 days, 365-729 days, 730 days+) for objects 128 KB or larger, there is one extra value, `ALL`, which represents all age groups. | 
| ObjectCount  | Metric  |  Integer  | Total number of objects counted per storage class for the day. This value is only populated for the `AgeGroup='ALL'` and shows the total object count for all the age groups for the day. | 
| DataUploaded_MB  | Metric | Number | Total data in MB uploaded per storage class for the day. This value is only populated for `AgeGroup='ALL'` and shows the total upload count in MB for all the age groups for the day. (Note that you will not see multipart object upload activity in your export data because multipart upload requests do not currently have storage class information.) | 
| Storage_MB  | Metric | Number  | Total storage in MB per storage class for the day in the age group. For `AgeGroup='ALL'`, the value is the overall storage count in MB for all the age groups for the day. | 
| DataRetrieved_MB | Metric | Number | Data transferred out in MB per storage class with GET requests for the day in the age group. For `AgeGroup='ALL'`, the value is the overall data transferred out in MB with GET requests for all the age groups for the day. | 
| GetRequestCount | Metric | Integer | Number of GET and PUT requests made per storage class for the day in the age group. For `AgeGroup='ALL'`, the value represents the overall GET and PUT request count for all the age groups for the day. (Although the column is labeled `GetRequestCount`, it also includes PUT requests.) | 
| CumulativeAccessRatio | Metric | Number | Cumulative access ratio. This ratio is used to represent the usage/byte heat on any given age group to help determine if an age group is eligible for transition to STANDARD_IA.  | 
| ObjectAgeForSIATransition | Metric | Integer In Days  | This value exists only where `AgeGroup='ALL'` and storage class = STANDARD. It represents the observed age for transition to STANDARD_IA. | 
| RecommendedObjectAgeForSIATransition  | Metric | Integer In Days  | This value exists only where `AgeGroup='ALL'` and storage class = STANDARD. It represents the object age in days to consider for transition to STANDARD_IA after the `ObjectAgeForSIATransition` stabilizes. | 
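
Because the export is a plain CSV file, you can preprocess it with standard tooling before loading it into a BI application. The following sketch uses Python's `csv` module to pick out the cumulative `ALL` row for the STANDARD storage class; the sample rows are invented for illustration and use only a subset of the columns described above.

```python
import csv
import io

# Invented sample rows using a subset of the export columns described above.
sample_export = """\
Date,ConfigId,StorageClass,ObjectAge,ObjectCount,Storage_MB,ObjectAgeForSIATransition,RecommendedObjectAgeForSIATransition
09-25-2024,my-filter,STANDARD,0-14 days,,1024,,
09-25-2024,my-filter,STANDARD,ALL,5000,40960,45,60
"""

def standard_summary(csv_text: str) -> dict:
    """Return the cumulative ('ALL') row for the STANDARD storage class."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["StorageClass"] == "STANDARD" and row["ObjectAge"] == "ALL":
            return row
    raise ValueError("no STANDARD/ALL summary row found")

summary = standard_summary(sample_export)
```

The `ObjectAgeForSIATransition` and `RecommendedObjectAgeForSIATransition` values in the summary row are the ones most useful when tuning a lifecycle transition age.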

# Configuring storage class analysis
<a name="configure-analytics-storage-class"></a>

By using the Amazon S3 analytics storage class analysis tool, you can analyze storage access patterns to help you decide when to transition the right data to the right storage class. Storage class analysis observes data access patterns to help you determine when to transition less frequently accessed STANDARD storage to the STANDARD_IA (IA, for infrequent access) storage class. For more information about STANDARD_IA, see the [Amazon S3 FAQ](https://aws.amazon.com/s3/faqs/#sia) and [Understanding and managing Amazon S3 storage classes](storage-class-intro.md).

You set up storage class analysis by configuring what object data you want to analyze. You can configure storage class analysis to do the following:
+ **Analyze the entire contents of a bucket.**

  You'll receive an analysis for all the objects in the bucket.
+ **Analyze objects grouped together by prefix and tags.**

  You can configure filters that group objects together for analysis by prefix, or by object tags, or by a combination of prefix and tags. You receive a separate analysis for each filter you configure. You can have multiple filter configurations per bucket, up to 1,000. 
+ **Export analysis data.** 

  When you configure storage class analysis for a bucket or filter, you can choose to have the analysis data exported to a file each day. The analysis for the day is added to the file to form a historic analysis log for the configured filter. The file is updated daily at the destination of your choice. When selecting data to export, you specify a destination bucket and optional destination prefix where the file is written.

You can use the Amazon S3 console, the REST API, or the AWS CLI or AWS SDKs to configure storage class analysis.

**Important**  
Storage class analysis does not give recommendations for transitions to the ONEZONE_IA or S3 Glacier Flexible Retrieval storage classes.  
If you want to configure storage class analysis to export your findings as a .csv file and the destination bucket uses default bucket encryption with an AWS KMS key, you must update the AWS KMS key policy to grant Amazon S3 permission to encrypt the .csv file. For instructions, see [Granting Amazon S3 permission to use your customer managed key for encryption](configure-inventory.md#configure-inventory-kms-key-policy).

For more information about analytics, see [Amazon S3 analytics – Storage Class Analysis](analytics-storage-class.md).

## Using the S3 console
<a name="storage-class-analysis-console"></a>

**To configure storage class analysis**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets** or **Directory buckets**.

1. In the buckets list, choose the name of the bucket for which you want to configure storage class analysis.

1. Choose the **Metrics** tab.

1. Under **Storage Class Analysis**, choose **Create analytics configuration**.

1. Type a name for the filter. If you want to analyze the whole bucket, leave the **Prefix** field empty.

1. In the **Prefix** field, type text for the prefix for the objects that you want to analyze.

1. To add a tag, choose **Add tag**. Enter a key and value for the tag. You can enter one prefix and multiple tags.

1. Optionally, you can choose **Enable** under **Export CSV** to export analysis reports to a comma-separated values (.csv) flat file. Choose a destination bucket where the file can be stored. You can type a prefix for the destination bucket. The destination bucket must be in the same AWS Region as the bucket for which you are setting up the analysis. The destination bucket can be in a different AWS account. 

   If the destination bucket for the .csv file uses default bucket encryption with a KMS key, you must update the AWS KMS key policy to grant Amazon S3 permission to encrypt the .csv file. For instructions, see [Granting Amazon S3 permission to use your customer managed key for encryption](configure-inventory.md#configure-inventory-kms-key-policy).

1. Choose **Create Configuration**.

 Amazon S3 creates a bucket policy on the destination bucket that grants Amazon S3 write permission, so that it can write the export data to the bucket. 

 If an error occurs when you try to create the bucket policy, you'll be given instructions on how to fix it. For example, if you chose a destination bucket in another AWS account and don't have permissions to read and write to the bucket policy, you'll see a message that displays the bucket policy to add. You must have the destination bucket owner add the displayed bucket policy to the destination bucket. If the policy isn't added to the destination bucket, you won't get the export data because Amazon S3 doesn't have permission to write to the destination bucket. If the source bucket is owned by a different account than that of the current user, then the correct account ID of the source bucket must be substituted in the policy.

For information about the exported data and how the filter works, see [Amazon S3 analytics – Storage Class Analysis](analytics-storage-class.md).

## Using the REST API
<a name="storage-class-apis"></a>

To configure Storage Class Analysis using the REST API, use the [PutBucketAnalyticsConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTAnalyticsConfig.html). You can also use the equivalent operation with the AWS CLI or AWS SDKs. 

You can use the following REST APIs to work with Storage Class Analysis:
+  [ DELETE Bucket Analytics configuration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETEAnalyticsConfiguration.html) 
+  [ GET Bucket Analytics configuration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETAnalyticsConfig.html) 
+  [ List Bucket Analytics Configuration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketListAnalyticsConfigs.html) 

# Managing storage costs with Amazon S3 Intelligent-Tiering
<a name="intelligent-tiering"></a>

The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective access tier when access patterns change, without operational overhead or impact on performance. For a small monthly object monitoring and automation charge, S3 Intelligent-Tiering monitors access patterns and automatically moves objects that have not been accessed to lower-cost access tiers.

S3 Intelligent-Tiering delivers automatic storage cost savings in three low latency and high throughput access tiers. For data that can be accessed asynchronously, you can choose to activate automatic archiving capabilities within the S3 Intelligent-Tiering storage class. There are no retrieval charges in S3 Intelligent-Tiering. If an object in the Infrequent Access tier or Archive Instant Access tier is accessed later, it is automatically moved back to the Frequent Access tier. No additional tiering charges apply when objects are moved between access tiers within the S3 Intelligent-Tiering storage class.

S3 Intelligent-Tiering is the recommended storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period, such as data lakes, data analytics, and new applications.

 The S3 Intelligent-Tiering storage class supports all Amazon S3 features, including the following:
+ S3 Inventory, for verifying the access tier of objects
+ S3 Replication, for replicating data to any AWS Region
+ S3 Storage Lens, for viewing storage usage and activity metrics
+ Server-side encryption, for protecting object data
+ S3 Object Lock, for preventing accidental deletion of data
+ AWS PrivateLink, for accessing Amazon S3 through a private endpoint in a virtual private cloud (VPC)

For information about using S3 Intelligent-Tiering, see the following sections:

**Topics**
+ [

# How S3 Intelligent-Tiering works
](intelligent-tiering-overview.md)
+ [

# Using S3 Intelligent-Tiering
](using-intelligent-tiering.md)
+ [

# Managing S3 Intelligent-Tiering
](intelligent-tiering-managing.md)

# How S3 Intelligent-Tiering works
<a name="intelligent-tiering-overview"></a>

The Amazon S3 Intelligent-Tiering storage class automatically stores objects in three access tiers. One tier is optimized for frequent access, one lower-cost tier is optimized for infrequent access, and another very low-cost tier is optimized for rarely accessed data. For a low monthly object monitoring and automation charge, S3 Intelligent-Tiering monitors access patterns and automatically moves objects to the Infrequent Access tier when they haven't been accessed for 30 consecutive days. After 90 days of no access, the objects are moved to the Archive Instant Access tier without performance impact or operational overhead.

To get the lowest storage cost for data that can be accessed in minutes to hours, activate archiving capabilities to add two additional access tiers. You can tier down objects to the Archive Access tier, the Deep Archive Access tier, or both. With Archive Access, S3 Intelligent-Tiering moves objects that have not been accessed for a minimum of 90 consecutive days to the Archive Access tier. With Deep Archive Access, S3 Intelligent-Tiering moves objects to the Deep Archive Access tier after a minimum of 180 consecutive days of no access. For both tiers, you can configure the number of days of no access based on your needs.

The following actions constitute access that prevents tiering your objects down to the Archive Access tier or the Deep Archive Access tier:
+ Downloading or copying an object through the Amazon S3 console.
+ Invoking [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) or [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), or replicating objects with S3 Batch Replication. In these cases, the source objects of the copy or replication operations are tiered up.
+ Invoking [GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html), [PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html), [RestoreObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html), [CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html), or [SelectObjectContent](https://docs.aws.amazon.com/AmazonS3/latest/API/API_SelectObjectContent.html).

For example, if your objects are accessed through `SelectObjectContent` before your specified number of days of no access (for example, 180 days), that action resets the timer. Your objects won't move to the Archive Access tier or the Deep Archive Access tier until the time after the last `SelectObjectContent` request reaches your specified number of days.

If an object in the Infrequent Access tier or Archive Instant Access tier is accessed later, it is automatically moved back to the Frequent Access tier.

The following actions constitute access that automatically moves objects from the Infrequent Access tier or the Archive Instant Access tier back to the Frequent Access tier:
+ Downloading or copying an object through the Amazon S3 console.
+ Invoking [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) or [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), or replicating objects with Batch Replication. In these cases, the source objects of the copy or replication operations are tiered up.
+ Invoking [GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html), [PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html), [RestoreObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html), or [CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html).

Other actions **don't** constitute access that automatically moves objects from the Infrequent Access tier or the Archive Instant Access tier back to the Frequent Access tier. The following is a sample, not a definitive list, of such actions:
+ Invoking [HeadObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html), [GetObjectTagging](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectTagging.html), [PutObjectTagging](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html), [ListObjects](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html), [ListObjectsV2](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html), [ListObjectVersions](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions), and [UpdateObjectEncryption](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateObjectEncryption.html).
+ Invoking [SelectObjectContent](https://docs.aws.amazon.com/AmazonS3/latest/API/API_SelectObjectContent.html) doesn't constitute access that tiers objects up to the Frequent Access tier. In addition, it doesn't prevent tiering objects down from the Frequent Access tier to the Infrequent Access tier, and then to the Archive Instant Access tier.

You can use S3 Intelligent-Tiering as your default storage class for newly created data by specifying `INTELLIGENT-TIERING` in the [x-amz-storage-class request header](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#AmazonS3-PutObject-request-header-StorageClass) when calling the `PutObject`, `CopyObject`, or `CreateMultipartUpload` operations. S3 Intelligent-Tiering is designed for 99.9% availability and 99.999999999% durability.

**Note**  
If the size of an object is less than 128 KB, it is not monitored and is not eligible for automatic tiering. Smaller objects are always stored in the Frequent Access tier.

## S3 Intelligent-Tiering access tiers
<a name="intel-tiering-tier-definition"></a>

The following section explains the different automatic and optional access tiers. When objects move between access tiers, the storage class remains the same (S3 Intelligent-Tiering).

Frequent Access tier (automatic)  
This is the default access tier that any object created or transitioned to S3 Intelligent-Tiering begins its lifecycle in. An object remains in this tier as long as it is being accessed. The Frequent Access tier provides low latency and high-throughput performance.

Infrequent Access tier (automatic)  
If an object is not accessed for 30 consecutive days, the object moves to the Infrequent Access tier. The Infrequent Access tier provides low latency and high-throughput performance.

Archive Instant Access tier (automatic)  
If an object is not accessed for 90 consecutive days, the object moves to the Archive Instant Access tier. The Archive Instant Access tier provides low latency and high-throughput performance.

Archive Access tier (optional)  
S3 Intelligent-Tiering provides you with the option to activate the Archive Access tier for data that can be accessed asynchronously. After activation, the Archive Access tier automatically archives objects that have not been accessed for a minimum of 90 consecutive days. You can extend the last access time for archiving to a maximum of 730 days. The Archive Access tier has the same performance as the [S3 Glacier Flexible Retrieval](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html#sc-glacier) storage class.   
Standard retrieval times for this access tier can range from 3–5 hours. If you initiate your restore request by using S3 Batch Operations, your restore starts within minutes. For more information about retrieval options and times, see [Restoring objects from the S3 Intelligent-Tiering Archive Access and Deep Archive Access tiers](intelligent-tiering-managing.md#restore-data-from-int-tier-archive).  
Activate the Archive Access tier at the minimum of 90 days only if you want to bypass the Archive Instant Access tier. The Archive Access tier delivers slightly lower storage costs, with minute-to-hour retrieval times. The Archive Instant Access tier delivers millisecond access and high-throughput performance.

Deep Archive Access tier (optional)  
S3 Intelligent-Tiering provides you with the option to activate the Deep Archive Access tier for data that can be accessed asynchronously. After activation, the Deep Archive Access tier automatically archives objects that have not been accessed for a minimum of 180 consecutive days. You can extend the last access time for archiving to a maximum of 730 days. The Deep Archive Access tier has the same performance as the [S3 Glacier Deep Archive](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html#sc-glacier) storage class.   
Standard retrieval of objects in this access tier occurs within 12 hours. If you initiate your restore request by using S3 Batch Operations, your restore starts within 9 hours. For more information about retrieval options and times, see [Restoring objects from the S3 Intelligent-Tiering Archive Access and Deep Archive Access tiers](intelligent-tiering-managing.md#restore-data-from-int-tier-archive).

**Note**  
Activate the Archive Access and Deep Archive Access tiers only if your objects can be accessed asynchronously by your application. If the object that you are retrieving is stored in the Archive Access or Deep Archive Access tiers, you must first restore the object by using the `RestoreObject` operation.  
You can restore archived objects with up to 1,000 transactions per second (TPS) of object restore requests per account per AWS Region from S3 Intelligent-Tiering Archive Access, and S3 Intelligent-Tiering Deep Archive Access.
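
The automatic and optional tiering rules described above can be summarized in a small model. The thresholds come from this section; the function itself is an illustration, not an AWS API, and it ignores the 730-day upper limit on configurable thresholds:

```python
from typing import Optional

# Illustrative model of S3 Intelligent-Tiering's tiering rules.
# Objects smaller than 128 KB stay in Frequent Access; 30 days of no
# access moves an object to Infrequent Access; 90 days moves it to
# Archive Instant Access. The optional Archive Access and Deep Archive
# Access tiers (minimum 90 and 180 days) apply only when activated.
def intelligent_tiering_tier(
    size_bytes: int,
    days_since_access: int,
    archive_access_days: Optional[int] = None,       # opt-in tier threshold
    deep_archive_access_days: Optional[int] = None,  # opt-in tier threshold
) -> str:
    if size_bytes < 128 * 1024:
        return "Frequent Access"  # small objects are never tiered down
    if deep_archive_access_days is not None and days_since_access >= deep_archive_access_days:
        return "Deep Archive Access"
    if archive_access_days is not None and days_since_access >= archive_access_days:
        return "Archive Access"
    if days_since_access >= 90:
        return "Archive Instant Access"
    if days_since_access >= 30:
        return "Infrequent Access"
    return "Frequent Access"
```

For example, with the Archive Access tier activated at its 90-day minimum, an object unaccessed for 100 days lands in Archive Access rather than Archive Instant Access, which is why the note above recommends activating it at 90 days only to bypass Archive Instant Access.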

# Using S3 Intelligent-Tiering
<a name="using-intelligent-tiering"></a>

You can use the S3 Intelligent-Tiering storage class to automatically optimize storage costs. S3 Intelligent-Tiering delivers automatic cost savings by moving data on a granular object level between access tiers when access patterns change. For data that can be accessed asynchronously, you can choose to enable automatic archiving within the S3 Intelligent-Tiering storage class using the AWS Management Console, AWS CLI, or Amazon S3 API.

## Moving data to S3 Intelligent-Tiering
<a name="moving-data-to-int-tiering"></a>

There are two ways to move data into S3 Intelligent-Tiering. You can upload objects directly into S3 Intelligent-Tiering from the console or programmatically using a `PUT` operation. For more information, see [Setting the storage class of an object](sc-howtoset.md). You can also configure S3 Lifecycle configurations to transition objects from S3 Standard or S3 Standard-Infrequent Access to S3 Intelligent-Tiering.

### Uploading data to S3 Intelligent-Tiering using Direct PUT
<a name="moving-data-to-int-tiering-directPUT"></a>

When you upload an object to the S3 Intelligent-Tiering storage class using the [PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html) API operation, you specify S3 Intelligent-Tiering in the [x-amz-storage-class](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#API_PutObject_RequestSyntax) request header.

The following request stores the image, `my-image.jpg`, in the `myBucket` bucket. The request uses the `x-amz-storage-class` header to request that the object is stored using the S3 Intelligent-Tiering storage class. 

**Example**  

```
PUT /my-image.jpg HTTP/1.1
Host: myBucket.s3.<Region>.amazonaws.com
Date: Wed, 1 Sep 2021 17:50:00 GMT
Authorization: authorization string
Content-Type: image/jpeg
Content-Length: 11434
Expect: 100-continue
x-amz-storage-class: INTELLIGENT_TIERING
```

### Transitioning data to S3 Intelligent-Tiering from S3 Standard or S3 Standard-Infrequent Access using S3 Lifecycle
<a name="moving-data-to-int-tiering-lifecycle"></a>

You can add rules to an S3 Lifecycle configuration to tell Amazon S3 to transition objects from one storage class to another. For information on supported transitions and related constraints, see [Transitioning objects using S3 Lifecycle](https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html). 

You can specify S3 Lifecycle configurations at the bucket or prefix level. In this S3 Lifecycle configuration rule, the filter specifies a key prefix (`documents/`). Therefore, the rule applies to objects with key name prefix `documents/`, such as `documents/doc1.txt` and `documents/doc2.txt`. The rule specifies a `Transition` action directing Amazon S3 to transition objects to the S3 Intelligent-Tiering storage class 0 days after creation. In this case, objects are eligible for transition to S3 Intelligent-Tiering at midnight UTC following creation.

**Example**  

```
<LifecycleConfiguration>
  <Rule>
    <ID>ExampleRule</ID>
    <Filter>
       <Prefix>documents/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <Days>0</Days>
      <StorageClass>INTELLIGENT_TIERING</StorageClass>
    </Transition>
 </Rule>
</LifecycleConfiguration>
```
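If you apply Lifecycle rules programmatically, the same rule can be expressed in the JSON-style structure that the AWS SDKs accept. The following Python sketch builds that structure; the boto3 call is shown commented out, and the bucket name is a placeholder:

```python
# Equivalent of the XML Lifecycle rule above, expressed as the structure
# that the AWS SDKs accept. The bucket name below is a placeholder.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "ExampleRule",
            "Filter": {"Prefix": "documents/"},
            "Status": "Enabled",
            "Transitions": [
                {
                    # Transition 0 days after creation, as in the XML example.
                    "Days": 0,
                    "StorageClass": "INTELLIGENT_TIERING",
                }
            ],
        }
    ]
}

# With boto3 installed and AWS credentials configured, the rule could be
# applied like this:
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="amzn-s3-demo-bucket",
#     LifecycleConfiguration=lifecycle_configuration,
# )
```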

A versioning-enabled bucket maintains one current object version, and zero or more noncurrent object versions. You can define separate Lifecycle rules for current and noncurrent object versions.

For more information, see [Lifecycle configuration elements](intro-lifecycle-rules.md).

## Enabling S3 Intelligent-Tiering Archive Access and Deep Archive Access tiers
<a name="enable-auto-archiving-int-tiering"></a>

To get the lowest storage cost on data that can be accessed in minutes to hours, you can activate one or both of the archive access tiers by creating a configuration at the bucket, prefix, or object tag level using the AWS Management Console, AWS CLI, or Amazon S3 API. 

### Using the S3 console
<a name="enable-auto-archiving-int-tiering-console"></a>

**To enable S3 Intelligent-Tiering automatic archiving**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the **Buckets** list, choose the name of the bucket that you want.

1. Choose **Properties**.

1. Navigate to the **S3 Intelligent-Tiering Archive configurations** section and choose **Create configuration**.

1. In the **Archive configuration settings** section, specify a descriptive configuration name for your S3 Intelligent-Tiering Archive configuration.

1. Under **Choose a configuration scope**, choose a configuration scope to use. Optionally, you can limit the configuration scope to specified objects within a bucket using a shared prefix, object tag, or combination of the two.

   1. To limit the scope of the configuration, select **Limit the scope of this configuration using one or more filters**.

   1. To limit the scope of the configuration using a single prefix, enter the prefix under **Prefix**. 

   1. To limit the scope of the configuration using object tags, choose **Add tag**, and then enter a key and value for the tag.

1. Under **Status**, select **Enable**.

1. In the **Archive settings** section, select one or both of the Archive Access tiers to enable.

1. Choose **Create**.

### Using the AWS CLI
<a name="enable-auto-archiving-int-tiering-cli"></a>

You can use the following AWS CLI commands to manage S3 Intelligent-Tiering configurations:
+ [delete-bucket-intelligent-tiering-configuration](https://docs.aws.amazon.com/cli/latest/reference/s3api/delete-bucket-intelligent-tiering-configuration.html)
+ [get-bucket-intelligent-tiering-configuration](https://docs.aws.amazon.com/cli/latest/reference/s3api/get-bucket-intelligent-tiering-configuration.html)
+ [list-bucket-intelligent-tiering-configurations](https://docs.aws.amazon.com/cli/latest/reference/s3api/list-bucket-intelligent-tiering-configurations.html)
+ [put-bucket-intelligent-tiering-configuration](https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-intelligent-tiering-configuration.html)

For instructions on setting up the AWS CLI, see [Developing with Amazon S3 using the AWS CLI](https://docs.aws.amazon.com/AmazonS3/latest/API/setup-aws-cli.html) in the *Amazon S3 API Reference*.

When you use the AWS CLI, you can't specify the configuration as an XML file; you must specify it in JSON instead. The following examples show a JSON S3 Intelligent-Tiering configuration that you can specify in an AWS CLI command, followed by the equivalent XML request syntax.

The following example puts an S3 Intelligent-Tiering configuration to the specified bucket.

**Example [put-bucket-intelligent-tiering-configuration](https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-intelligent-tiering-configuration.html)**  

```
{
  "Id": "string",
  "Filter": {
    "Prefix": "string",
    "Tag": {
      "Key": "string",
      "Value": "string"
    },
    "And": {
      "Prefix": "string",
      "Tags": [
        {
          "Key": "string",
          "Value": "string"
        }
        ...
      ]
    }
  },
  "Status": "Enabled"|"Disabled",
  "Tierings": [
    {
      "Days": integer,
      "AccessTier": "ARCHIVE_ACCESS"|"DEEP_ARCHIVE_ACCESS"
    }
    ...
  ]
}
```

```
PUT /?intelligent-tiering&id=Id HTTP/1.1
Host: Bucket.s3.amazonaws.com
<?xml version="1.0" encoding="UTF-8"?>
<IntelligentTieringConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
   <Id>string</Id>
   <Filter>
      <And>
         <Prefix>string</Prefix>
         <Tag>
            <Key>string</Key>
            <Value>string</Value>
         </Tag>
         ...
      </And>
      <Prefix>string</Prefix>
      <Tag>
         <Key>string</Key>
         <Value>string</Value>
      </Tag>
   </Filter>
   <Status>string</Status>
   <Tiering>
      <AccessTier>string</AccessTier>
      <Days>integer</Days>
   </Tiering>
   ...
</IntelligentTieringConfiguration>
```
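Before sending a configuration like the one above, you can sanity-check it locally. The following Python sketch validates the `Tierings` entries; the 90- and 180-day minimums used here reflect the documented lower bounds for each archive tier, but treat them as assumptions to verify against the current API reference:

```python
# Local sanity check for an S3 Intelligent-Tiering configuration before
# calling put-bucket-intelligent-tiering-configuration. The minimum-days
# values are assumptions based on the documented lower bounds per tier.
MIN_DAYS = {"ARCHIVE_ACCESS": 90, "DEEP_ARCHIVE_ACCESS": 180}

def validate_tiering_config(config):
    """Return a list of problems found in the configuration (empty if OK)."""
    problems = []
    if config.get("Status") not in ("Enabled", "Disabled"):
        problems.append("Status must be 'Enabled' or 'Disabled'")
    for tiering in config.get("Tierings", []):
        tier = tiering.get("AccessTier")
        if tier not in MIN_DAYS:
            problems.append(f"Unknown AccessTier: {tier}")
        elif tiering.get("Days", 0) < MIN_DAYS[tier]:
            problems.append(f"{tier} requires at least {MIN_DAYS[tier]} days")
    return problems

config = {
    "Id": "ExampleConfig",
    "Filter": {"Prefix": "documents/"},
    "Status": "Enabled",
    "Tierings": [
        {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
        {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
    ],
}
```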

### Using the PUT API operation
<a name="enable-auto-archiving-int-tiering-api"></a>

You can use the [PutBucketIntelligentTieringConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketIntelligentTieringConfiguration.html) operation to add an S3 Intelligent-Tiering configuration to a bucket. Each bucket supports up to 1,000 S3 Intelligent-Tiering configurations. You can define which objects within a bucket are eligible for the archive access tiers using a shared prefix or object tag. Using a shared prefix or object tag allows you to align to specific business applications, workflows, or internal organizations. You also have the flexibility to activate the Archive Access tier, the Deep Archive Access tier, or both.

## Getting started with S3 Intelligent-Tiering
<a name="intelligent-tiering-tutorial"></a>

To learn more about how to use S3 Intelligent-Tiering, see [Tutorial: Getting started using S3 Intelligent-Tiering](https://aws.amazon.com/getting-started/hands-on/getting-started-using-amazon-s3-intelligent-tiering/?ref=docs_gateway/amazons3/using-intelligent-tiering.html).

# Managing S3 Intelligent-Tiering
<a name="intelligent-tiering-managing"></a>

The S3 Intelligent-Tiering storage class delivers automatic storage cost savings in three low-latency and high-throughput access tiers. It also offers optional archive capabilities to help you get the lowest storage costs in the cloud for data that can be accessed in minutes to hours. 

## Identifying which S3 Intelligent-Tiering access tier objects are stored in
<a name="identify-intelligent-tiering-access-tier"></a>

To get a list of your objects and their corresponding metadata, including their S3 Intelligent-Tiering access tier, you can use [Amazon S3 Inventory](storage-inventory.md). S3 Inventory provides CSV, ORC, or Parquet output files that list your objects and their corresponding metadata. You can receive these inventory reports on either a daily or weekly basis for an Amazon S3 bucket or a shared prefix. (*Shared prefix* refers to objects that have names that begin with a common string.) 
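As a sketch of how you might consume such a report, the following Python snippet tallies objects by access tier from inventory CSV rows. Inventory CSV files have no header row, so the column order used here (bucket, key, size, access tier) is an assumption taken from a hypothetical inventory configuration:

```python
import csv
import io
from collections import Counter

def count_access_tiers(csv_text, tier_column=3):
    """Tally objects per S3 Intelligent-Tiering access tier.

    The tier column index is an assumption; match it to the fields you
    selected in your own inventory configuration.
    """
    counts = Counter()
    for row in csv.reader(io.StringIO(csv_text)):
        counts[row[tier_column]] += 1
    return counts

# Hypothetical inventory rows: bucket, key, size, access tier.
sample = (
    '"amzn-s3-demo-bucket","photos/a.jpg","1024","FREQUENT"\n'
    '"amzn-s3-demo-bucket","photos/b.jpg","2048","ARCHIVE_ACCESS"\n'
    '"amzn-s3-demo-bucket","logs/app.log","512","FREQUENT"\n'
)
```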

## Viewing the archive status of an object within S3 Intelligent-Tiering
<a name="identify-archive-status"></a>

To receive notice when an object within the S3 Intelligent-Tiering storage class has moved to either the Archive Access tier or the Deep Archive Access tier, you can set up S3 Event Notifications. For more information, see [Enabling event notifications](how-to-enable-disable-notification-intro.md).

Amazon S3 can publish event notifications to an Amazon Simple Notification Service (Amazon SNS) topic, an Amazon Simple Queue Service (Amazon SQS) queue, or an AWS Lambda function. For more information, see [Amazon S3 Event Notifications](EventNotifications.md).

The following is an example of a message that Amazon S3 sends to publish an `s3:IntelligentTiering` event. For more information, see [Event message structure](notification-content-structure.md).

```
{
   "Records":[
      {
         "eventVersion":"2.3",
         "eventSource":"aws:s3",
         "awsRegion":"us-west-2",
         "eventTime":"1970-01-01T00:00:00.000Z",
         "eventName":"IntelligentTiering",
         "userIdentity":{
            "principalId":"s3.amazonaws.com"
         },
         "requestParameters":{
            "sourceIPAddress":"s3.amazonaws.com"
         },
         "responseElements":{
            "x-amz-request-id":"C3D13FE58DE4C810",
            "x-amz-id-2":"FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
         },
         "s3":{
            "s3SchemaVersion":"1.0",
            "configurationId":"testConfigRule",
            "bucket":{
               "name":"amzn-s3-demo-bucket",
               "ownerIdentity":{
                  "principalId":"A3NL1KOZZKExample"
               },
               "arn":"arn:aws:s3:::amzn-s3-demo-bucket"
            },
            "object":{
               "key":"HappyFace.jpg",
               "size":1024,
               "eTag":"d41d8cd98f00b204e9800998ecf8427e"
            }
         },
         "intelligentTieringEventData":{
            "destinationAccessTier": "ARCHIVE_ACCESS"
         }
      }
   ]
}
```
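An event consumer, such as a Lambda function subscribed to these notifications, might extract the object key and destination tier as follows (a minimal sketch based on the message structure above):

```python
def summarize_tiering_events(event):
    """Return (key, destination tier) pairs from an s3:IntelligentTiering event."""
    summaries = []
    for record in event.get("Records", []):
        if record.get("eventName") != "IntelligentTiering":
            continue  # ignore unrelated event types
        key = record["s3"]["object"]["key"]
        tier = record["intelligentTieringEventData"]["destinationAccessTier"]
        summaries.append((key, tier))
    return summaries

# Abbreviated form of the sample event message above.
event = {
    "Records": [
        {
            "eventName": "IntelligentTiering",
            "s3": {"object": {"key": "HappyFace.jpg"}},
            "intelligentTieringEventData": {"destinationAccessTier": "ARCHIVE_ACCESS"},
        }
    ]
}
```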

You can also use a [`HEAD` object request](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html) to view an object's archive status. If an object is stored in the S3 Intelligent-Tiering storage class and is in one of the archive tiers, the `HEAD` object response shows the current archive tier by using the [`x-amz-archive-status`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html#API_HeadObject_ResponseElements) response header. 

The following `HEAD` object request returns the metadata of an object (in this case, `my-image.jpg`).

**Example**  

```
HEAD /my-image.jpg HTTP/1.1
Host: bucket.s3.<Region>.amazonaws.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:02236Q3V0RonhpaBX5sCYVf1bNRuU=
```

You can also use `HEAD` object requests to monitor the status of a `restore-object` request. If an archive restoration is in progress, the `HEAD` object response includes the [`x-amz-restore`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html#API_HeadObject_ResponseElements) response header. 

The following sample `HEAD` object response shows an object archived by using S3 Intelligent-Tiering with a restore request in progress.

**Example**  

```
HTTP/1.1 200 OK
x-amz-id-2: FSVaTMjrmBp3Izs1NnwBZeu7M19iI8UbxMbi0A8AirHANJBo+hEftBuiESACOMJp
x-amz-request-id: E5CEFCB143EB505A
Date: Fri, 13 Nov 2020 00:28:38 GMT
Last-Modified: Mon, 15 Oct 2012 21:58:07 GMT
ETag: "1accb31fcf202eba0c0f41fa2f09b4d7"
x-amz-storage-class: INTELLIGENT_TIERING
x-amz-archive-status: ARCHIVE_ACCESS
x-amz-restore: ongoing-request="true"
x-amz-restore-request-date: Fri, 13 Nov 2020 00:20:00 GMT
Accept-Ranges: bytes
Content-Type: binary/octet-stream
Content-Length: 300
Server: AmazonS3
```
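If you inspect these headers programmatically, you can parse the `x-amz-restore` value to decide whether the temporary copy is ready. The following Python sketch assumes the `ongoing-request`/`expiry-date` format shown above:

```python
import re

def parse_restore_header(value):
    """Parse an x-amz-restore header value.

    Handles both forms: 'ongoing-request="true"' while a restore is in
    progress, and 'ongoing-request="false", expiry-date="..."' once the
    temporary copy is available.
    """
    ongoing = re.search(r'ongoing-request="(true|false)"', value)
    expiry = re.search(r'expiry-date="([^"]+)"', value)
    return {
        "ongoing": ongoing.group(1) == "true" if ongoing else None,
        "expiry_date": expiry.group(1) if expiry else None,
    }
```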

## Restoring objects from the S3 Intelligent-Tiering Archive Access and Deep Archive Access tiers
<a name="restore-data-from-int-tier-archive"></a>

To access objects in the S3 Intelligent-Tiering Archive Access and Deep Archive Access tiers, you must initiate a [restore request](https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html), and then wait until the object is moved into the Frequent Access tier. For more information about archived objects, see [Working with archived objects](archived-objects.md).

When you restore an object from the Archive Access tier or Deep Archive Access tier, the object moves back into the Frequent Access tier. Afterwards, if the object isn't accessed for 30 consecutive days, it automatically moves into the Infrequent Access tier. Then, after a minimum of 90 consecutive days of no access, the object moves into the Archive Access tier. After a minimum of 180 consecutive days of no access, the object moves into the Deep Archive Access tier. For more information, see [How S3 Intelligent-Tiering works](intelligent-tiering-overview.md).
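The tier movement described above can be summarized as a simple timeline. This sketch assumes both archive tiers are enabled with their default thresholds and maps consecutive days without access to the expected tier:

```python
def expected_tier(days_without_access):
    """Return the S3 Intelligent-Tiering access tier an object would occupy.

    Assumes both optional archive tiers are enabled with the default
    90-day (Archive Access) and 180-day (Deep Archive Access) settings.
    """
    if days_without_access >= 180:
        return "DEEP_ARCHIVE_ACCESS"
    if days_without_access >= 90:
        return "ARCHIVE_ACCESS"
    if days_without_access >= 30:
        return "INFREQUENT_ACCESS"
    return "FREQUENT_ACCESS"  # includes freshly restored objects
```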

You can restore an archived object by using the Amazon S3 console, S3 Batch Operations, the Amazon S3 REST API, the AWS SDKs, or the AWS Command Line Interface (AWS CLI). For more information, see [Working with archived objects](archived-objects.md).

# Understanding S3 Glacier storage classes for long-term data storage
<a name="glacier-storage-classes"></a>

You can use the Amazon S3 Glacier storage classes as cost-effective solutions for storing long-term data that isn't accessed often. The S3 Glacier storage classes are as follows:
+ S3 Glacier Instant Retrieval
+ S3 Glacier Flexible Retrieval
+ S3 Glacier Deep Archive

You choose one of these storage classes based on how often you access your data and how fast you need to retrieve it. Each of these storage classes offers the same durability and resiliency as the S3 Standard storage class, but at lower storage costs. For more information about the S3 Glacier storage classes, see [https://aws.amazon.com/s3/storage-classes/glacier/](https://aws.amazon.com/s3/storage-classes/glacier/).

**Topics**
+ [

## Comparing the S3 Glacier storage classes
](#glacier-class-compare)
+ [

## S3 Glacier Instant Retrieval
](#GIR)
+ [

## S3 Glacier Flexible Retrieval
](#GFR)
+ [

## S3 Glacier Deep Archive
](#GDA)
+ [

# Understanding archival storage in S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive
](archival-storage.md)
+ [

## How these storage classes differ from the Amazon Glacier service
](#glacier-storage-vs-service)

## Comparing the S3 Glacier storage classes
<a name="glacier-class-compare"></a>

Each S3 Glacier storage class has a minimum storage duration for all objects. If you delete, overwrite, or transition an object to a different storage class before its minimum storage duration has elapsed, you are charged for the remainder of that duration.

Some S3 Glacier storage classes are archival, which means the objects stored in those classes are archived and not available for real-time access. For more information, see [Understanding archival storage in S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive](archival-storage.md).

Storage classes designed for less frequent access patterns with longer retrieval times offer lower storage costs. For pricing information, see [https://aws.amazon.com/s3/pricing/](https://aws.amazon.com/s3/pricing/).

The following table summarizes the key points to consider when choosing an S3 Glacier storage class:


| S3 Glacier storage class | Minimum storage duration | Recommended access frequency | Average retrieval times | Archival? | 
| --- | --- | --- | --- | --- | 
| S3 Glacier Instant Retrieval | 90 days | Quarterly | Milliseconds | No | 
| S3 Glacier Flexible Retrieval | 90 days | Semi-annually | Minutes to 12 hours | Yes | 
| S3 Glacier Deep Archive | 180 days | Annually | 9 to 48 hours | Yes | 

## S3 Glacier Instant Retrieval
<a name="GIR"></a>

We recommend using S3 Glacier Instant Retrieval for long-term data that's accessed once per quarter and requires millisecond retrieval times. This storage class is ideal for performance-sensitive use cases such as image hosting, file-sharing applications, and storing medical records for access during appointments.

The S3 Glacier Instant Retrieval storage class offers real-time access to your objects with the same latency and throughput performance as the S3 Standard-IA storage class. Compared to S3 Standard-IA, S3 Glacier Instant Retrieval has lower storage costs but higher data access costs.

There is a minimum object size of 128 KB for data stored in the S3 Glacier Instant Retrieval storage class. This storage class also has a minimum storage duration period of 90 days. 

## S3 Glacier Flexible Retrieval
<a name="GFR"></a>

We recommend using S3 Glacier Flexible Retrieval for archive data that's accessed one to two times a year and doesn't require immediate access. S3 Glacier Flexible Retrieval offers flexible retrieval times to help you balance costs, with access times ranging from a few minutes to hours, and free bulk retrievals. This storage class is ideal for backup and disaster recovery.

Objects stored in S3 Glacier Flexible Retrieval are archived and not available for real-time access. For more information, see [Understanding archival storage in S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive](archival-storage.md). To access these objects, you must first initiate a restore request, which creates a temporary copy of the object that you can access when the request completes. For more information, see [Working with archived objects](archived-objects.md). When you restore an object, you can choose a retrieval tier to match your use case, with lower costs for longer restore times.

The following retrieval tiers are available for S3 Glacier Flexible Retrieval:
+ **Expedited retrieval** – Typically restores the object in 1–5 minutes. Expedited retrievals are subject to demand, so to make sure you have reliable and predictable restore times, we recommend that you purchase provisioned retrieval capacity. For more information, see [Provisioned capacity](restoring-objects-retrieval-options.md#restoring-objects-expedited-capacity).
+ **Standard retrieval** – Typically restores the object in 3–5 hours, or within 1 minute to 5 hours when you use S3 Batch Operations. For more information, see [Restore objects with Batch Operations](batch-ops-initiate-restore-object.md).
+ **Bulk retrieval** – Typically restores the object within 5–12 hours. Bulk retrievals are free.

For any retrieval option, restores of objects larger than 5 TB typically finish within 48 hours, with up to 300 megabytes per second (MBps) of retrieval throughput. For more information, see [Understanding archive retrieval options](restoring-objects-retrieval-options.md).

The minimum storage duration for objects in S3 Glacier Flexible Retrieval storage class is 90 days. 

S3 Glacier Flexible Retrieval requires 40 KB of additional metadata for each object. This includes 32 KB of metadata required to identify and retrieve your data, which is charged at the default rate for S3 Glacier Flexible Retrieval. An additional 8 KB of data is required to maintain the user-defined name and metadata for archived objects, and is charged at the S3 Standard rate.

## S3 Glacier Deep Archive
<a name="GDA"></a>

We recommend using S3 Glacier Deep Archive for archive data that's accessed less than once a year. This storage class is designed for retaining data sets for multiple years to meet compliance requirements and can also be used for backup or disaster recovery or any infrequently accessed data that you can wait up to 72 hours to retrieve. S3 Glacier Deep Archive is the lowest-cost storage option in AWS.

Objects stored in S3 Glacier Deep Archive are archived and not available for real-time access. For more information, see [Understanding archival storage in S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive](archival-storage.md). To access these objects, you must first initiate a restore request, which creates a temporary copy of the object that you can access when the request completes. For more information, see [Working with archived objects](archived-objects.md). When you restore an object, you can choose a retrieval tier to match your use case, with lower costs for longer restore times.

The following retrieval tiers are available for S3 Glacier Deep Archive:
+ **Standard retrieval** – Typically restores the object within 12 hours, or within 9–12 hours when you use S3 Batch Operations. For more information, see [Restore objects with Batch Operations](batch-ops-initiate-restore-object.md).
+ **Bulk retrieval** – Typically restores the object within 48 hours at a fraction of the cost of the Standard retrieval tier.

The minimum storage duration for objects in S3 Glacier Deep Archive storage class is 180 days. 

S3 Glacier Deep Archive requires 40 KB of additional metadata for each object. This includes 32 KB of metadata required to identify and retrieve your data, which is charged at the default rate for S3 Glacier Deep Archive. An additional 8 KB of data is required to maintain the user-defined name and metadata for archived objects, and is charged at the S3 Standard rate.

# Understanding archival storage in S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive
<a name="archival-storage"></a>

S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive are archival storage classes. This means that when you store an object in these storage classes, that object is archived and cannot be accessed directly. To access an archived object, you submit a restore request for it, and then wait for the service to restore the object. The restore request creates a temporary copy of the object, and that copy is deleted when the duration that you specified in the request expires. For more information, see [Working with archived objects](archived-objects.md).

The transition of objects to the S3 Glacier Deep Archive storage class can go only one way.

If you want to change the storage class of an archived object to another storage class, you must first use the restore operation to make a temporary copy of the object. Then use the copy operation to overwrite the object, specifying S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 One Zone-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, or Reduced Redundancy Storage as the storage class.

**Note**  
The Copy operation for restored objects isn't supported in the Amazon S3 console for objects in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes. For this type of Copy operation, use the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the REST API.

You can restore archived objects in these storage classes with up to 1,000 transactions per second (TPS) of [object restore requests](https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html) per account per AWS Region.

## Cost considerations
<a name="before-deciding-to-archive-objects"></a>

If you are planning to archive infrequently accessed data for a period of months or years, the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes can reduce your storage costs. However, to ensure that the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage class is appropriate for you, consider the following:
+ **Storage overhead charges** – Each archived object requires 40 KB of additional metadata. This includes 32 KB of metadata required to identify and retrieve your data, which is charged at the default rate for that storage class. An additional 8 KB of data is required to maintain the user-defined name and metadata for archived objects, and is charged at the S3 Standard rate.

  If you are archiving small objects, consider these storage charges. Also consider aggregating many small objects into a smaller number of large objects to reduce overhead costs.
+ **Multipart upload pricing** – Objects in the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes are billed at S3 Standard storage class rates when you upload them using multipart uploads. For more information, see [Multipart upload and pricing](mpuoverview.md#mpuploadpricing).
+ **Minimum storage duration charges** – S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive are long-term archival solutions. The minimum storage duration is 90 days for the S3 Glacier Flexible Retrieval storage class and 180 days for S3 Glacier Deep Archive. Deleting data that is archived to these storage classes doesn't incur charges if the objects that you delete have been archived for longer than the minimum storage duration. If you delete or overwrite an archived object within the minimum duration, Amazon S3 charges you for the remainder of that duration. 
+ **Data retrieval charges** – When you restore archived objects from S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive, there are per-request data retrieval charges. These charges vary based on the retrieval tier that you choose when you initiate a restore. For pricing information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). 
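To get a feel for the storage overhead charges described above, the following sketch computes the monthly cost of the 40 KB per-object metadata overhead. The per-GB rates are placeholders, not actual AWS prices; substitute current rates from the pricing page:

```python
# Rough illustration of the 40 KB per-object metadata overhead when
# archiving many small objects. Both per-GB monthly rates below are
# hypothetical placeholders, not actual AWS pricing.
GLACIER_RATE_PER_GB = 0.0036   # placeholder rate for the archive storage class
STANDARD_RATE_PER_GB = 0.023   # placeholder S3 Standard rate
KB_PER_GB = 1024 * 1024

def monthly_overhead_cost(object_count):
    """Monthly cost of metadata overhead for the given number of archived objects."""
    glacier_kb = 32 * object_count   # 32 KB billed at the archive class rate
    standard_kb = 8 * object_count   # 8 KB billed at the S3 Standard rate
    return (glacier_kb / KB_PER_GB) * GLACIER_RATE_PER_GB + \
           (standard_kb / KB_PER_GB) * STANDARD_RATE_PER_GB
```

For example, one million archived objects carry roughly 38 GB of metadata overhead in total (about 30.5 GB at the archive rate plus 7.6 GB at the Standard rate), which is why aggregating small objects before archiving reduces costs.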

## Restoring archived objects
<a name="restore-glacier-objects-concepts"></a>

Archived objects aren't accessible in real time. You must first initiate a restore request and then wait until a temporary copy of the object is available for the duration that you specify in the request. After you receive a temporary copy of the restored object, the object's storage class remains S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive. (A [HeadObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html) or [GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html) API operation request will return S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive as the storage class.) 

**Note**  
When you restore an archive, you are paying for both the archive (S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive rate) and a copy that you restored temporarily (S3 Standard storage rate). For information about pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). 

You can restore an object copy programmatically or by using the Amazon S3 console. Amazon S3 processes only one restore request at a time per object. For more information, see [Restoring an archived object](restoring-objects.md).

## How these storage classes differ from the Amazon Glacier service
<a name="glacier-storage-vs-service"></a>

The S3 Glacier storage classes are part of the Amazon S3 service and store data as objects in S3 buckets. You can manage objects in these storage classes using the S3 console or programmatically using the S3 APIs or SDKs. When you store objects in S3 Glacier storage classes, you can use S3 features such as advanced encryption, object tagging, and S3 Lifecycle configurations to help manage data accessibility and cost. 

**Important**  
We recommend using the S3 Glacier storage classes within the Amazon S3 service for all of your long-term data.

The Amazon Glacier service is a separate service that stores data as archives within vaults. This service doesn't support Amazon S3 features and doesn't provide console support for data upload and download operations. We don't recommend using the Amazon Glacier service for your long-term data, and data stored in this service isn't accessible from the Amazon S3 service. If you are looking for information about the Amazon Glacier service, see the [Amazon Glacier Developer Guide](https://docs.aws.amazon.com/amazonglacier/latest/dev/introduction.html). To transfer data from the Amazon Glacier service to a storage class in Amazon S3, see [Data Transfer from Amazon Glacier Vaults to Amazon S3](https://aws.amazon.com/solutions/implementations/data-transfer-from-amazon-s3-glacier-vaults-to-amazon-s3/) in the AWS Solutions Library.

# Working with archived objects
<a name="archived-objects"></a>

To reduce your storage costs for infrequently accessed objects, you can *archive* those objects. When you archive an object, it is moved into low-cost storage, which means that you can't access it in real time. 

Although archived objects are not accessible in real time, you can restore them within minutes or hours, depending on the storage class. You can restore an archived object by using the Amazon S3 console, S3 Batch Operations, the REST API, the AWS SDKs, or the AWS Command Line Interface (AWS CLI). For instructions, see [Restoring an archived object](restoring-objects.md).

Amazon S3 objects in the following storage classes or tiers are archived and are not accessible in real time: 
+ The S3 Glacier Flexible Retrieval storage class
+ The S3 Glacier Deep Archive storage class
+ The S3 Intelligent-Tiering Archive Access tier
+ The S3 Intelligent-Tiering Deep Archive Access tier

To restore archived objects, you must do the following:
+ For objects in the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes, you must initiate the restore request and wait until a temporary copy of the object is available. When a temporary copy of the restored object is created, the object's storage class remains the same. (A [HeadObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html) or [GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html) API operation request returns S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive as the storage class.) 
+ For objects in the S3 Intelligent-Tiering Archive Access and S3 Intelligent-Tiering Deep Archive Access tiers, you must initiate the restore request and wait until the object is moved into the Frequent Access tier. 

For more information about how all Amazon S3 storage classes compare, see [Understanding and managing Amazon S3 storage classes](storage-class-intro.md). For more information about S3 Intelligent-Tiering, see [How S3 Intelligent-Tiering works](intelligent-tiering-overview.md).

The time it takes a restore job to finish depends on which archive storage class or storage tier you use and which retrieval option you specify: Expedited (only available for S3 Glacier Flexible Retrieval and S3 Intelligent-Tiering Archive Access), Standard, or Bulk. For more information, see [Understanding archive retrieval options](restoring-objects-retrieval-options.md).

You can be notified when your restore is complete by using Amazon S3 Event Notifications. For more information, see [Amazon S3 Event Notifications](EventNotifications.md).

## Restoring objects from the S3 Glacier storage classes
<a name="archived-objects-glacier"></a>

When you use S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive, Amazon S3 restores a temporary copy of the object only for the specified duration. After that, it deletes the restored object copy. You can modify the expiration period of a restored copy by reissuing a restore request. In this case, Amazon S3 updates the expiration period relative to the current time. 

**Note**  
When you restore an archived object from S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive, you pay for both the archived object and the copy that you restored temporarily. For information about pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

Amazon S3 calculates the expiration time of the restored object copy by adding the number of days specified in the restoration request to the time when the requested restoration is completed. Amazon S3 then rounds the resulting time to the next day at midnight Coordinated Universal Time (UTC). For example, suppose that a restored object copy was created on October 15, 2012, at 10:30 AM UTC, and the restoration period was specified as 3 days. In this case, the restored copy expires on October 19, 2012, at 00:00 UTC, at which time Amazon S3 deletes the object copy. 
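The rounding rule above can be sketched in Python. This is an illustration of the calculation only, not an AWS API call, and the function name is hypothetical:

```python
from datetime import datetime, timedelta, timezone

def restored_copy_expiry(completed_at: datetime, days: int) -> datetime:
    """Add the requested restore period, then round up to the next
    midnight UTC, which is when Amazon S3 deletes the restored copy."""
    expires = completed_at + timedelta(days=days)
    # Round to the next day at 00:00 UTC.
    return (expires + timedelta(days=1)).replace(
        hour=0, minute=0, second=0, microsecond=0)

# The example from the text: restoration completed October 15, 2012,
# at 10:30 AM UTC, with a 3-day restoration period.
done = datetime(2012, 10, 15, 10, 30, tzinfo=timezone.utc)
print(restored_copy_expiry(done, 3))  # 2012-10-19 00:00:00+00:00
```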

## Restoring objects from S3 Intelligent-Tiering
<a name="archived-objects-int"></a>

When you restore an object from the S3 Intelligent-Tiering Archive Access tier or S3 Intelligent-Tiering Deep Archive Access tier, the object moves back into the S3 Intelligent-Tiering Frequent Access tier. If the object is not accessed for 30 consecutive days, it automatically moves into the Infrequent Access tier. After a minimum of 90 consecutive days of no access, the object moves into the S3 Intelligent-Tiering Archive Access tier. After a minimum of 180 consecutive days of no access, the object moves into the Deep Archive Access tier.

**Note**  
Unlike in the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes, restore requests for S3 Intelligent-Tiering objects don't accept the `Days` value. 
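The tier transitions described above can be summarized in a short Python sketch. This is illustrative only; the function name is hypothetical, and it assumes the optional Archive Access and Deep Archive Access tiers are activated with their default minimums:

```python
def intelligent_tiering_tier(days_without_access: int) -> str:
    """Map consecutive days without access to the S3 Intelligent-Tiering
    access tier an object would occupy."""
    if days_without_access >= 180:
        return "Deep Archive Access"
    if days_without_access >= 90:
        return "Archive Access"
    if days_without_access >= 30:
        return "Infrequent Access"
    return "Frequent Access"

print(intelligent_tiering_tier(0))    # Frequent Access
print(intelligent_tiering_tier(45))   # Infrequent Access
print(intelligent_tiering_tier(200))  # Deep Archive Access
```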

## Using S3 Batch Operations with restore requests
<a name="using-batch-ops-with-restore-requests"></a>

To restore more than one Amazon S3 object with a single request, you can use S3 Batch Operations. You provide S3 Batch Operations with a list of objects to operate on. S3 Batch Operations calls the respective API operation to perform the specified operation. A single Batch Operations job can perform the specified operation on billions of objects containing exabytes of data. 

**Topics**
+ [

## Restoring objects from the S3 Glacier storage classes
](#archived-objects-glacier)
+ [

## Restoring objects from S3 Intelligent-Tiering
](#archived-objects-int)
+ [

## Using S3 Batch Operations with restore requests
](#using-batch-ops-with-restore-requests)
+ [

# Understanding archive retrieval options
](restoring-objects-retrieval-options.md)
+ [

# Restoring an archived object
](restoring-objects.md)

# Understanding archive retrieval options
<a name="restoring-objects-retrieval-options"></a>

Amazon S3 has three archival storage classes: S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive. While objects stored in the S3 Glacier Instant Retrieval storage class are immediately available by using `GET`, to access data stored in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes, you must first retrieve the data by using the [RestoreObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOSTrestore.html) REST API operation. Restoring datasets made up of tens of millions of objects or hundreds of terabytes of data can take longer than typical restore times and needs special consideration. For more information, see [Restoring large datasets](#restoring-objects-large-datasets).

You can choose from three retrieval access options to restore your archived objects based on your desired retrieval speed – Expedited, Standard, and Bulk.
+ **Expedited retrieval** – Quickly access your data stored in the S3 Glacier Flexible Retrieval storage class or S3 Intelligent-Tiering Archive Access tier. You can use this option for occasional urgent requests for up to hundreds of objects. Objects under 250 megabytes in size are typically made available within 1–5 minutes, and objects 250 megabytes or larger in size are typically retrieved with up to 300 megabytes per second of retrieval throughput. In addition, you have the option to purchase Provisioned Capacity for Expedited retrievals. Provisioned Capacity helps ensure that Expedited retrieval capacity is available when you need it. For more information, see [Provisioned capacity](#restoring-objects-expedited-capacity).
**Note**  
Expedited retrievals are a premium feature and are charged at the Expedited request and retrieval rate. For information about Amazon S3 pricing, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).
+ **Standard retrieval** – Access your data within several hours. Standard is the default option for requests that do not specify the retrieval option. Standard retrievals typically finish within 3–5 hours for the S3 Glacier Flexible Retrieval storage class or S3 Intelligent-Tiering Archive Access tier. Standard retrievals typically finish within 12 hours for the S3 Glacier Deep Archive storage class or S3 Intelligent-Tiering Deep Archive Access tier. Standard retrievals are free for objects stored in the S3 Intelligent-Tiering storage class.
**Note**  
For objects stored in the S3 Glacier Flexible Retrieval storage class or the S3 Intelligent-Tiering Archive Access tier, Standard retrievals initiated by using the S3 Batch Operations restore operation typically start within minutes and finish within 3–5 hours at a throughput of up to 1–2 petabytes per day.
For objects in the S3 Glacier Deep Archive storage class or the S3 Intelligent-Tiering Deep Archive Access tier, Standard retrievals initiated by using Batch Operations typically start to complete within 9 hours at a throughput of up to 1–2 petabytes per day.
+ **Bulk retrieval** – Access your data by using the lowest-cost retrieval option in S3 Glacier storage classes. With Bulk retrievals, you can retrieve large amounts of data inexpensively. For objects stored in the S3 Glacier Flexible Retrieval storage class or the S3 Intelligent-Tiering Archive Access tier, Bulk retrievals typically finish within 5–12 hours. For objects stored in the S3 Glacier Deep Archive storage class or the S3 Intelligent-Tiering Deep Archive Access tier, these retrievals typically finish within 48 hours. Bulk retrievals are free for objects that are stored in the S3 Glacier Flexible Retrieval or S3 Intelligent-Tiering storage classes.

The following table summarizes the archive retrieval options. For information about pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).


| Storage class or tier | Expedited | Standard (with Batch Operations) | Standard (without Batch Operations) | Bulk | 
| --- | --- | --- | --- | --- | 
|  S3 Glacier Flexible Retrieval or S3 Intelligent-Tiering Archive Access  |  1–5 minutes  |  Minutes–5 hours  |  3–5 hours  |  5–12 hours  | 
|  S3 Glacier Deep Archive or S3 Intelligent-Tiering Deep Archive Access  |  Not available  |  9–12 hours  |  Within 12 hours  |  Within 48 hours  | 

To make an `Expedited`, `Standard`, or `Bulk` retrieval, set the `Tier` request element in the [RestoreObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOSTrestore.html) REST API operation request to the option that you want, or use the equivalent in the AWS Command Line Interface (AWS CLI) or AWS SDKs. If you purchased provisioned capacity, all Expedited retrievals are automatically served through your provisioned capacity. 

## Restoring large datasets
<a name="restoring-objects-large-datasets"></a>

Restoring datasets made up of tens of millions of objects or hundreds of terabytes of data could take longer than typical restore times for any retrieval tier due to retrieval limits.

When you initiate restore requests for objects that are stored in the S3 Glacier Flexible Retrieval, S3 Glacier Deep Archive, or S3 Intelligent-Tiering storage classes, a retrieval-requests quota is applied to your AWS account. S3 Glacier supports restore requests at a rate of 1,000 transactions per second. If this rate is exceeded, otherwise-valid requests are throttled or rejected, and Amazon S3 returns a `ThrottlingException` error. You can use S3 Batch Operations to retrieve many objects with a single request, which fully utilizes the restore request rate available in your account. For more information, see [Performing object operations in bulk with Batch Operations](batch-ops.md).
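When a restore submission is throttled, the usual remedy is to retry with exponential backoff. A minimal sketch, assuming a hypothetical `submit` callable and a locally defined stand-in `ThrottlingException` (not an AWS SDK class):

```python
import random
import time

class ThrottlingException(Exception):
    """Stand-in for the throttling error described above; the real
    error is returned by Amazon S3, not defined locally like this."""

def restore_with_backoff(submit, max_attempts=5, base_delay=0.5):
    """Retry a restore submission with exponential backoff and jitter.
    `submit` is any zero-argument callable that raises
    ThrottlingException when the request rate is exceeded."""
    for attempt in range(max_attempts):
        try:
            return submit()
        except ThrottlingException:
            if attempt == max_attempts - 1:
                raise
            # Sleep 0.5s, 1s, 2s, ... scaled by random jitter.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.0))
```

For large jobs, prefer a single S3 Batch Operations job over client-side retry loops, as the text notes.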

After you initiate restore requests, S3 Glacier supports restoring large datasets at a throughput of up to 1–2 petabytes per day per customer account. For any retrieval option, objects larger than 5 terabytes take longer to restore, with up to 300 megabytes per second of retrieval throughput. For example, restoring a 50-terabyte S3 Glacier Flexible Retrieval object could take up to 48 hours to complete. If you require increased restoration limits, contact AWS Support.
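The throughput bound above is simple arithmetic. A sketch (illustrative; the function name is hypothetical, and it gives only the lower bound implied by per-object retrieval throughput):

```python
def min_restore_hours(object_bytes: int, mb_per_second: float = 300.0) -> float:
    """Lower bound on restore time implied by the per-object retrieval
    throughput alone; actual times also depend on the retrieval option
    and on service conditions."""
    return object_bytes / (mb_per_second * 10**6) / 3600

TB = 10**12  # decimal terabyte
print(round(min_restore_hours(50 * TB), 1))  # roughly 46.3 hours at 300 MB/s
```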

## Provisioned capacity
<a name="restoring-objects-expedited-capacity"></a>

Provisioned capacity helps ensure that your retrieval capacity for Expedited retrievals from S3 Glacier Flexible Retrieval is available when you need it. Each unit of capacity ensures that at least three Expedited retrievals can be performed every 5 minutes, and provides up to 300 megabytes per second of retrieval throughput.

Without provisioned capacity, Expedited retrievals might not be accepted during periods of high demand. For predictable, fast access to more of your data, consider using the [S3 Glacier Instant Retrieval](https://aws.amazon.com/s3/storage-classes/glacier/instant-retrieval/) storage class.

Provisioned capacity units are allocated to an AWS account. Thus, the provisioned capacity unit should be purchased by the requester of the Expedited data retrieval, not by the bucket owner.

You can purchase provisioned capacity by using the Amazon S3 console, the Amazon S3 Glacier console, the [Purchase Provisioned Capacity](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-PurchaseProvisionedCapacity.html) REST API operation, the AWS SDKs, or the AWS CLI. For provisioned capacity pricing information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

# Restoring an archived object
<a name="restoring-objects"></a>

Amazon S3 objects in the following storage classes or tiers are archived and are not accessible in real time: 
+ The S3 Glacier Flexible Retrieval storage class
+ The S3 Glacier Deep Archive storage class
+ The S3 Intelligent-Tiering Archive Access tier
+ The S3 Intelligent-Tiering Deep Archive Access tier

Amazon S3 objects that are stored in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes are not immediately accessible. To access an object in these storage classes, you must restore a temporary copy of the object to its S3 bucket for a specified duration (number of days). If you want a permanent copy of the object, restore the object, and then create a copy of it in your Amazon S3 bucket. Copying restored objects isn't supported in the Amazon S3 console. For this type of copy operation, use the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the REST API. Unless you make a copy and change its storage class, the object will still be stored in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes. For information about using these storage classes, see [Storage classes for rarely accessed objects](storage-class-intro.md#sc-glacier).

To access objects in the S3 Intelligent-Tiering Archive Access and Deep Archive Access tiers, you must initiate a restore request and wait until the object is moved into the Frequent Access tier. When you restore an object from the Archive Access tier or Deep Archive Access tier, the object moves back into the Frequent Access tier. For information about using these storage classes, see [Storage class for automatically optimizing data with changing or unknown access patterns](storage-class-intro.md#sc-dynamic-data-access).

For general information about archived objects, see [Working with archived objects](archived-objects.md).

**Note**  
When you restore an archived object from the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes, you pay for both the archived object and the copy that you restored temporarily. 
When you restore an object from S3 Intelligent-Tiering, there are no retrieval charges for Standard or Bulk retrievals. 
Subsequent restore requests called on archived objects that have already been restored are billed as `GET` requests. For information about pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). 

## Restoring an archived object
<a name="restore-archived-objects"></a>

You can restore an archived object by using the Amazon S3 console, the Amazon S3 REST API, the AWS SDKs, the AWS Command Line Interface (AWS CLI), or S3 Batch Operations. 

### Using the S3 console
<a name="restoring-objects-console"></a>

**Restore objects using the Amazon S3 console**  
Use the following procedure to restore an object that has been archived to the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage class, or to the S3 Intelligent-Tiering Archive Access or Deep Archive Access storage tier.

**To restore an archived object**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that contains the objects that you want to restore.

1. In the **Objects** list, select the object or objects that you want to restore, choose **Actions**, and then choose **Initiate restore**.

1. If you're restoring from S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive, enter the number of days that you want your archived data to be accessible in the **Number of days that the restored copy is available** box. 

1. For **Retrieval tier**, do one of the following:
   + Choose **Bulk retrieval** or **Standard retrieval**, and then choose **Initiate restore**. 
   + Choose **Expedited retrieval** (available only for S3 Glacier Flexible Retrieval or S3 Intelligent-Tiering Archive Access). If you're restoring an object in S3 Glacier Flexible Retrieval, you can choose whether to buy provisioned capacity for your Expedited retrieval. If you want to purchase provisioned capacity, proceed to the next step. If you don't, choose **Initiate restore**.
**Note**  
Objects from the S3 Intelligent-Tiering Archive Access and Deep Archive Access tiers are automatically restored to the Frequent Access tier.

1. (Optional) If you're restoring an object in S3 Glacier Flexible Retrieval and you chose **Expedited retrieval**, you can choose whether to buy provisioned capacity. Provisioned capacity is available only for objects in S3 Glacier Flexible Retrieval. If you already have provisioned capacity, choose **Initiate restore** to start a provisioned retrieval. 

   If you have provisioned capacity, all of your Expedited retrievals are served by your provisioned capacity. For more information, see [Provisioned capacity](restoring-objects-retrieval-options.md#restoring-objects-expedited-capacity). 
   + If you don't have provisioned capacity and you don't want to buy it, choose **Initiate restore**. 
   + If you don't have provisioned capacity, but you want to buy provisioned capacity units (PCUs), choose **Purchase PCUs**. In the **Purchase PCUs** dialog box, choose how many PCUs you want to buy, confirm your purchase, and then choose **Purchase PCUs**. When you get the **Purchase succeeded** message, choose **Initiate restore** to start provisioned retrieval.

### Using the AWS CLI
<a name="restoring-objects-cli"></a>

**Restore objects from S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive**  
The following example uses the `restore-object` command to restore the object *`dir1/example.obj`* in the bucket `amzn-s3-demo-bucket` for 25 days.

```
aws s3api restore-object --bucket amzn-s3-demo-bucket --key dir1/example.obj --restore-request '{"Days":25,"GlacierJobParameters":{"Tier":"Standard"}}'
```

If the JSON syntax used in the example results in an error on a Windows client, replace the restore request with the following syntax:

```
--restore-request Days=25,GlacierJobParameters={"Tier"="Standard"}
```

**Restore objects from S3 Intelligent-Tiering Archive Access and Deep Archive Access**  
The following example uses the `restore-object` command to restore the object *`dir1/example.obj`* in the bucket `amzn-s3-demo-bucket` to the Frequent Access tier.

```
aws s3api restore-object --bucket amzn-s3-demo-bucket --key dir1/example.obj --restore-request '{}'
```

**Note**  
Unlike in the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes, restore requests for S3 Intelligent-Tiering objects don't accept the `Days` value.

**Monitor restore status**  
To monitor the status of your `restore-object` request, use the following `head-object` command:

```
aws s3api head-object --bucket amzn-s3-demo-bucket --key dir1/example.obj
```

For more information, see [restore-object](https://docs.aws.amazon.com//cli/latest/reference/s3api/restore-object.html) in the *AWS CLI Command Reference*.

### Using the REST API
<a name="restoring-objects-rest"></a>

Amazon S3 provides an API operation for you to initiate the restoration of an archived object. For more information, see [RestoreObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOSTrestore.html) in the *Amazon Simple Storage Service API Reference*.

### Using the AWS SDKs
<a name="restoring-objects-sdks"></a>

For examples of how to restore archived objects in S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive with the AWS SDKs, see [Code examples](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_RestoreObject_section.html) in the *Amazon S3 API Reference*.

### Using S3 Batch Operations
<a name="restoring-int-tier-archive-objects-batch-ops"></a>

To restore more than one archived object with a single request, you can use S3 Batch Operations. You provide S3 Batch Operations with a list of objects to operate on. S3 Batch Operations calls the respective API operation to perform the specified operation. A single Batch Operations job can perform the specified operation on billions of objects containing exabytes of data. 

To create a Batch Operations job, you must have a manifest that contains only the objects that you want to restore. You can create a manifest by using S3 Inventory, or you can supply a CSV file with the necessary information. For more information, see [Specifying a manifest](batch-ops-create-job.md#specify-batchjob-manifest).

Before creating and running S3 Batch Operations jobs, you must grant permissions to Amazon S3 to perform S3 Batch Operations on your behalf. For the required permissions, see [Granting permissions for Batch Operations](batch-ops-iam-role-policies.md).

**Note**  
Batch Operations jobs can operate either on S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage class objects *or* on S3 Intelligent-Tiering Archive Access and Deep Archive Access storage tier objects. Batch Operations can't operate on both types of archived objects in the same job. To restore objects of both types, you *must* create separate Batch Operations jobs.  
For more information about using Batch Operations to restore archive objects, see [Restore objects with Batch Operations](batch-ops-initiate-restore-object.md).

**To create an S3 Initiate Restore Object Batch Operations job**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Batch Operations**.

1. Choose **Create job**.

1. For **AWS Region**, choose the Region where you want to create your job.

1. Under **Manifest format**, choose the type of manifest to use.
   + If you choose **S3 inventory report**, enter the path to the `manifest.json` object that Amazon S3 generated as part of the CSV-formatted inventory report. If you want to use a manifest version other than the most recent, enter the version ID for the `manifest.json` object.
   + If you choose **CSV**, enter the path to a CSV-formatted manifest object. The manifest object must follow the format described in the console. If you want to use a version other than the most recent, you can optionally include the version ID for the manifest object.

1. Choose **Next**.

1. In the **Operation** section, choose **Restore**.

1. In the **Restore** section, for **Restore source**, choose either **Glacier Flexible Retrieval or Glacier Deep Archive** or **Intelligent-Tiering Archive Access tier or Deep Archive Access tier**. 

   If you chose **Glacier Flexible Retrieval or Glacier Deep Archive**, enter a number for **Number of days that the restored copy is available**. 

   For **Retrieval tier**, choose the tier that you want to use.

1. Choose **Next**.

1. On the **Configure additional options** page, fill out the following sections: 
   + In the **Additional options** section, provide a description for the job and specify a priority number for the job. Higher numbers indicate a higher priority. For more information, see [Assigning job priority](batch-ops-job-priority.md).
   + In the **Completion report** section, select whether Batch Operations should create a completion report. For more information about completion reports, see [Completion reports](batch-ops-job-status.md#batch-ops-completion-report).
   + In the **Permissions** section, you must grant permissions to Amazon S3 to perform Batch Operations on your behalf. For the required permissions, see [Granting permissions for Batch Operations](batch-ops-iam-role-policies.md).
   + (Optional) In the **Job tags** section, add tags in key-value pairs. For more information, see [Controlling access and labeling jobs using tags](batch-ops-job-tags.md).

   When you're finished, choose **Next**.

1. On the **Review** page, verify the settings. If you need to make changes, choose **Previous**. Otherwise, choose **Create job**.

For more information about Batch Operations, see [Restore objects with Batch Operations](batch-ops-initiate-restore-object.md) and [Creating an S3 Batch Operations job](batch-ops-create-job.md).

## Checking the restore status and expiration date
<a name="restore-archived-objects-status"></a>

You can check the status of a restore request or the expiration date by using the Amazon S3 console, Amazon S3 Event Notifications, the AWS CLI, or the Amazon S3 REST API.

**Note**  
Objects restored from the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes are stored only for the number of days that you specify. The following procedures return the expiration date for these copies.   
Objects restored from the S3 Intelligent-Tiering Archive Access and Deep Archive Access storage tiers don't have expiration dates and instead are moved back to the Frequent Access tier.

### Using the S3 console
<a name="restore-archived-objects-status-console"></a>

**To check an object's restore status and expiration date in the Amazon S3 console**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that contains the object that you are restoring.

1. In the **Objects** list, select the object that you are restoring. The object's details page appears. 
   + If the restoration isn't finished, at the top of the page, you see a section that says **Restoration in progress**.
   + If the restoration is finished, at the top of the page, you see a section that says **Restoration complete**. If you're restoring from S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive, this section also displays the **Restoration expiry date**. Amazon S3 will remove the restored copy of your archived object on this date.

### Using Amazon S3 Event Notifications
<a name="restore-archived-objects-status-event-notifications"></a>

You can be notified of object restoration completion by using the `s3:ObjectRestore:Completed` action with the Amazon S3 Event Notifications feature. For more information about enabling event notifications, see [Enabling notifications by using Amazon SQS, Amazon SNS, and AWS Lambda](how-to-enable-disable-notification-intro.md). For more information about the various `ObjectRestore` event types, see [Supported event types for SQS, SNS, and Lambda](notification-how-to-event-types-and-destinations.md#supported-notification-event-types).

### Using the AWS CLI
<a name="restore-archived-objects-status-cli"></a>

**Check an object's restore status and expiration date with the AWS CLI**  
The following example uses the `head-object` command to view the metadata for the object *`dir1/example.obj`* in the bucket `amzn-s3-demo-bucket`. When you run this command on an object that's being restored, Amazon S3 returns whether the restore is ongoing and, if applicable, the expiration date.

```
aws s3api head-object --bucket amzn-s3-demo-bucket --key dir1/example.obj
```

Expected output (restore ongoing):

```
{
    "Restore": "ongoing-request=\"true\"",
    "LastModified": "2020-06-16T21:55:22+00:00",
    "ContentLength": 405,
    "ETag": "\"b662d79adeb7c8d787ea7eafb9ef6207\"",
    "VersionId": "wbYaE2vtOV0iIBXrOqGAJt3fP1cHB8Wi",
    "ContentType": "binary/octet-stream",
    "ServerSideEncryption": "AES256",
    "Metadata": {},
    "StorageClass": "GLACIER"
}
```

Expected output (restore finished):

```
{
    "Restore": "ongoing-request=\"false\", expiry-date=\"Wed, 12 Aug 2020 00:00:00 GMT\"",
    "LastModified": "2020-06-16T21:55:22+00:00",
    "ContentLength": 405,
    "ETag": "\"b662d79adeb7c8d787ea7eafb9ef6207\"",
    "VersionId": "wbYaE2vtOV0iIBXrOqGAJt3fP1cHB8Wi",
    "ContentType": "binary/octet-stream",
    "ServerSideEncryption": "AES256",
    "Metadata": {},
    "StorageClass": "GLACIER"
}
```
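In both outputs, the `Restore` value is a single formatted string. A small Python sketch for pulling out its fields (illustrative; `parse_restore_header` is a hypothetical helper, not part of an AWS SDK):

```python
import re

def parse_restore_header(value: str) -> dict:
    """Split the Restore string returned by head-object into key-value
    pairs, for example {'ongoing-request': 'false', 'expiry-date': ...}."""
    return dict(re.findall(r'([\w-]+)="([^"]*)"', value))

status = parse_restore_header(
    'ongoing-request="false", expiry-date="Wed, 12 Aug 2020 00:00:00 GMT"')
print(status["ongoing-request"])  # false
print(status["expiry-date"])      # Wed, 12 Aug 2020 00:00:00 GMT
```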

For more information about `head-object`, see [head-object](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/head-object.html) in the *AWS CLI Command Reference*.

### Using the REST API
<a name="restore-archived-objects-status-api"></a>

Amazon S3 provides an API operation for you to retrieve object metadata. To check the restoration status and expiration date of an archived object by using the REST API, see [HeadObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html) in the *Amazon Simple Storage Service API Reference*.

## Upgrading the speed of an in-progress restore
<a name="restore-archived-objects-upgrade"></a>

You can upgrade the speed of your restoration while it is in progress.

**To upgrade an in-progress restore to a faster tier**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Buckets** list, choose the name of the bucket that contains the objects that you want to restore.

1. In the **Objects** list, select the object that you are restoring. The object's details page appears. On the object's details page, choose **Upgrade retrieval tier**. For information about checking the restoration status of an object, see [Checking the restore status and expiration date](#restore-archived-objects-status). 

1. Choose the tier that you want to upgrade to, and then choose **Initiate restore**. 