

# Data protection in Amazon S3
<a name="data-protection"></a>

In addition to the resilience offered by the AWS global infrastructure, Amazon S3 offers a number of features to help protect your data against accidental deletions or Regional failures. 

**S3 Replication**  
You can use live replication to enable automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts. You can replicate objects to a single destination bucket or to multiple destination buckets. The destination buckets can be in different AWS Regions or within the same Region as the source bucket. To enable failover controls, you can configure replication to be two-way (bidirectional) so that your source and destination buckets can be kept in sync during a Regional failure. For more information, see [Replicating objects within and across Regions](replication.md).

**Multi-Region Access Points and failover controls**  
Amazon S3 Multi-Region Access Points provide a global endpoint that applications can use to fulfill requests from S3 buckets that are located in multiple AWS Regions. You can use Multi-Region Access Points to build multi-Region applications with the same architecture that's used in a single Region, and then run those applications anywhere in the world. Instead of sending requests over the congested public internet, Multi-Region Access Points provide built-in network resilience with acceleration of internet-based requests to Amazon S3. Application requests made to a Multi-Region Access Point global endpoint use [AWS Global Accelerator](https://docs.aws.amazon.com/global-accelerator/latest/dg/) to automatically route over the AWS global network to the closest-proximity S3 bucket with an active routing status. For more information about Multi-Region Access Points, see [Managing multi-Region traffic with Multi-Region Access Points](MultiRegionAccessPoints.md).  
With Amazon S3 Multi-Region Access Point failover controls, you can maintain business continuity during Regional traffic disruptions, while also giving your applications a multi-Region architecture to fulfill compliance and redundancy needs. If your Regional traffic gets disrupted, you can use Multi-Region Access Point failover controls to select which AWS Regions behind an Amazon S3 Multi-Region Access Point will process data-access and storage requests.   
To support failover, you can set up your Multi-Region Access Point in an active-passive configuration, with traffic flowing to the active Region during normal conditions, and a passive Region on standby for failover. If you have S3 Cross-Region Replication (CRR) enabled with two-way replication rules, you can keep your buckets synchronized during a failover. For more information about failover controls, see [Amazon S3 Multi-Region Access Points failover controls](MrapFailover.md).

**S3 Versioning**  
Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures. For more information, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).

**S3 Object Lock**  
You can use S3 Object Lock to store objects using a *write once, read many* (WORM) model. Using S3 Object Lock, you can prevent an object from being deleted or overwritten for a fixed amount of time or indefinitely. S3 Object Lock enables you to meet regulatory requirements that require WORM storage or simply to add an additional layer of protection against object changes and deletion. For more information, see [Locking objects with Object Lock](object-lock.md).

**AWS Backup**  
Amazon S3 is natively integrated with AWS Backup, a fully managed, policy-based service that you can use to centrally define backup policies to protect your data in Amazon S3. After you define your backup policies and assign Amazon S3 resources to the policies, AWS Backup automates the creation of Amazon S3 backups and securely stores the backups in an encrypted backup vault that you designate in your backup plan. For more information, see [Backing up your Amazon S3 data](backup-for-s3.md).

For a tutorial on using some of these features together to protect your data, see [Tutorial: Protecting data on Amazon S3 against accidental deletion or application bugs using S3 Versioning, S3 Object Lock, and S3 Replication](https://aws.amazon.com/getting-started/hands-on/protect-data-on-amazon-s3/?ref=docs_gateway/amazons3/DataDurability.html).

**Important**  
In addition to using the preceding features to protect your data, we recommend reviewing the recommendations in [Security best practices for Amazon S3](security-best-practices.md). 

**Topics**
+ [Replicating objects within and across Regions](replication.md)
+ [Managing multi-Region traffic with Multi-Region Access Points](MultiRegionAccessPoints.md)
+ [Retaining multiple versions of objects with S3 Versioning](Versioning.md)
+ [Locking objects with Object Lock](object-lock.md)
+ [Backing up your Amazon S3 data](backup-for-s3.md)

# Replicating objects within and across Regions
<a name="replication"></a>

You can use replication to enable automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts. You can replicate objects to a single destination bucket or to multiple destination buckets. The destination buckets can be in different AWS Regions or within the same Region as the source bucket.

There are two types of replication: *live replication* and *on-demand replication*.
+ **Live replication** – **To automatically replicate new and updated objects** as they are written to the source bucket, use live replication. Live replication doesn't replicate any objects that existed in the bucket before you set up replication. To replicate objects that existed before you set up replication, use on-demand replication.
+ **On-demand replication** – **To replicate existing objects** from the source bucket to one or more destination buckets on demand, use S3 Batch Replication. For more information about replicating existing objects, see [When to use S3 Batch Replication](#batch-replication-scenario).

There are two forms of live replication: *Cross-Region Replication (CRR)* and *Same-Region Replication (SRR)*.
+ **Cross-Region Replication (CRR)** – You can use CRR to replicate objects across Amazon S3 buckets in different AWS Regions. For more information about CRR, see [When to use Cross-Region Replication](#crr-scenario).
+ **Same-Region Replication (SRR)** – You can use SRR to copy objects across Amazon S3 buckets in the same AWS Region. For more information about SRR, see [When to use Same-Region Replication](#srr-scenario).

**Topics**
+ [Why use replication?](#replication-scenario)
+ [When to use Cross-Region Replication](#crr-scenario)
+ [When to use Same-Region Replication](#srr-scenario)
+ [When to use two-way replication (bi-directional replication)](#two-way-replication-scenario)
+ [When to use S3 Batch Replication](#batch-replication-scenario)
+ [Workload requirements and live replication](#replication-workload-requirements)
+ [What does Amazon S3 replicate?](replication-what-is-isnot-replicated.md)
+ [Requirements and considerations for replication](replication-requirements.md)
+ [Setting up live replication overview](replication-how-setup.md)
+ [Managing or pausing live replication](disable-replication.md)
+ [Replicating existing objects with Batch Replication](s3-batch-replication-batch.md)
+ [Troubleshooting replication](replication-troubleshoot.md)
+ [Monitoring replication with metrics, event notifications, and statuses](replication-metrics.md)

## Why use replication?
<a name="replication-scenario"></a>

Replication can help you do the following:
+ **Replicate objects while retaining metadata** – You can use replication to make copies of your objects that retain all metadata, such as the original object creation times and version IDs. This capability is important if you must ensure that your replica is identical to the source object.
+ **Replicate objects into different storage classes** – You can use replication to directly put objects into S3 Glacier Flexible Retrieval, S3 Glacier Deep Archive, or another storage class in the destination buckets. You can also replicate your data to the same storage class and use lifecycle configurations on the destination buckets to move your objects to a colder storage class as they age.
+ **Maintain object copies under different ownership** – Regardless of who owns the source object, you can tell Amazon S3 to change replica ownership to the AWS account that owns the destination bucket. This is referred to as the *owner override* option. You can use this option to restrict access to object replicas.
+ **Keep objects stored over multiple AWS Regions** – To ensure geographic differences in where your data is kept, you can set multiple destination buckets across different AWS Regions. This feature might help you meet certain compliance requirements. 
+ **Replicate objects within 15 minutes** – To replicate your data in the same AWS Region or across different Regions within a predictable time frame, you can use S3 Replication Time Control (S3 RTC). S3 RTC replicates 99.99 percent of new objects stored in Amazon S3 within 15 minutes (backed by a service-level agreement). For more information, see [Meeting compliance requirements with S3 Replication Time Control](replication-time-control.md).
**Note**  
S3 RTC does not apply to Batch Replication. Batch Replication is an on-demand replication job, and can be tracked with S3 Batch Operations. For more information, see [Tracking job status and completion reports](batch-ops-job-status.md).
+ **Sync buckets, replicate existing objects, and replicate previously failed or replicated objects** – To sync buckets and replicate existing objects, use Batch Replication as an on-demand replication action. For more information about when to use Batch Replication, see [When to use S3 Batch Replication](#batch-replication-scenario).
+ **Replicate objects and fail over to a bucket in another AWS Region** – To keep all metadata and objects in sync across buckets during data replication, use two-way replication (also known as bi-directional replication) rules before configuring Amazon S3 Multi-Region Access Point failover controls. Two-way replication rules help ensure that when data is written to the S3 bucket that traffic fails over to, that data is then replicated back to the source bucket.

## When to use Cross-Region Replication
<a name="crr-scenario"></a>

S3 Cross-Region Replication (CRR) is used to copy objects across Amazon S3 buckets in different AWS Regions. CRR can help you do the following:
+ **Meet compliance requirements** – Although Amazon S3 stores your data across multiple geographically distant Availability Zones by default, compliance requirements might dictate that you store data at even greater distances. To satisfy these requirements, use Cross-Region Replication to replicate data between distant AWS Regions.
+ **Minimize latency** – If your customers are in two geographic locations, you can minimize latency in accessing objects by maintaining object copies in AWS Regions that are geographically closer to your users.
+ **Increase operational efficiency** – If you have compute clusters in two different AWS Regions that analyze the same set of objects, you might choose to maintain object copies in those Regions.

## When to use Same-Region Replication
<a name="srr-scenario"></a>

Same-Region Replication (SRR) is used to copy objects across Amazon S3 buckets in the same AWS Region. SRR can help you do the following:
+ **Aggregate logs into a single bucket** – If you store logs in multiple buckets or across multiple accounts, you can easily replicate logs into a single, in-Region bucket. Doing so allows for simpler processing of logs in a single location.
+ **Configure live replication between production and test accounts** – If you or your customers have production and test accounts that use the same data, you can replicate objects between those multiple accounts, while maintaining object metadata.
+ **Abide by data sovereignty laws** – You might be required to store multiple copies of your data in separate AWS accounts within a certain Region. Same-Region Replication can help you automatically replicate critical data when compliance regulations don't allow the data to leave your country.

## When to use two-way replication (bi-directional replication)
<a name="two-way-replication-scenario"></a>
+ **Build shared datasets across multiple AWS Regions** – With replica modification sync, you can easily replicate metadata changes, such as object access control lists (ACLs), object tags, or object locks, on replicated objects. This two-way replication is important if you want to keep all objects and object metadata changes in sync. You can [enable replica modification sync](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-for-metadata-changes.html#enabling-replication-for-metadata-changes) on a new or existing replication rule when performing two-way replication between two or more buckets in the same or different AWS Regions.
+ **Keep data synchronized across Regions during failover** – You can synchronize data in buckets between AWS Regions by configuring two-way replication rules with S3 Cross-Region Replication (CRR) directly from a Multi-Region Access Point. To make an informed decision on when to initiate failover, you can also enable S3 replication metrics so that you can monitor the replication in Amazon CloudWatch, in S3 Replication Time Control (S3 RTC), or from the Multi-Region Access Point.
+ **Make your application highly available** – Even in the event of a Regional traffic disruption, you can use two-way replication rules to keep all metadata and objects in sync across buckets during data replication.

## When to use S3 Batch Replication
<a name="batch-replication-scenario"></a>

Batch Replication replicates existing objects to different buckets as an on-demand option. Unlike live replication, these jobs can be run as needed. Batch Replication can help you do the following:
+ **Replicate existing objects** – You can use Batch Replication to replicate objects that were added to the bucket before Same-Region Replication or Cross-Region Replication was configured.
+ **Replicate objects that previously failed to replicate** – You can filter a Batch Replication job to attempt to replicate objects with a replication status of **FAILED**.
+ **Replicate objects that were already replicated** – You might be required to store multiple copies of your data in separate AWS accounts or AWS Regions. Batch Replication can replicate existing objects to newly added destinations.
+ **Replicate replicas of objects that were created from a replication rule** – Replication configurations create replicas of objects in destination buckets. Replicas of objects can be replicated only with Batch Replication.

## Workload requirements and live replication
<a name="replication-workload-requirements"></a>

Depending on your workload requirements, some types of live replication will be better suited to your use case than others. Use the following table to determine which type of replication to use for your situation, and whether to use S3 Replication Time Control (S3 RTC) for your workload. S3 RTC replicates 99.99 percent of new objects stored in Amazon S3 within 15 minutes (backed by a service-level agreement, or SLA). For more information, see [Meeting compliance requirements with S3 Replication Time Control](replication-time-control.md).


| Workload requirement | S3 RTC (15-minute SLA) | Cross-Region Replication (CRR) | Same-Region Replication (SRR) | 
| --- | --- | --- | --- | 
| Replicate objects between different AWS accounts | Yes | Yes | Yes | 
| Replicate objects within the same AWS Region within 24-48 hours (not SLA backed) | No | No | Yes | 
| Replicate objects between different AWS Regions within 24-48 hours (not SLA backed) | No | Yes | No | 
| Predictable replication time: backed by an SLA to replicate 99.99 percent of objects within 15 minutes | Yes | No | No | 

# What does Amazon S3 replicate?
<a name="replication-what-is-isnot-replicated"></a>

Amazon S3 replicates only specific items in buckets that are configured for replication. 

**Topics**
+ [What is replicated with replication configurations?](#replication-what-is-replicated)
+ [What isn't replicated with replication configurations?](#replication-what-is-not-replicated)

## What is replicated with replication configurations?
<a name="replication-what-is-replicated"></a>

By default, Amazon S3 replicates the following:
+ Objects created after you add a replication configuration.
+ Unencrypted objects. 
+ Objects encrypted with customer-provided keys (SSE-C), with Amazon S3 managed keys (SSE-S3), or with KMS keys stored in AWS Key Management Service (SSE-KMS). For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md). 
+ Object metadata from the source objects to the replicas. For information about replicating metadata from the replicas to the source objects, see [Replicating metadata changes with replica modification sync](replication-for-metadata-changes.md).
+ Only objects in the source bucket for which the bucket owner has permissions to read objects and access control lists (ACLs). 

  For more information about resource ownership, see [Amazon S3 bucket and object ownership](access-policy-language-overview.md#about-resource-owner).
+ Object ACL updates, unless you direct Amazon S3 to change the replica ownership when source and destination buckets aren't owned by the same accounts. 

  For more information, see [Changing the replica owner](replication-change-owner.md). 

  It can take a while for Amazon S3 to bring the two ACLs into sync. This change in ownership applies only to objects created after you add a replication configuration to the bucket.
+  Object tags, if there are any.
+ S3 Object Lock retention information, if there is any. 

  When Amazon S3 replicates objects that have retention information applied, it applies those same retention controls to your replicas, overriding the default retention period configured on your destination buckets. If you don't have retention controls applied to the objects in your source bucket, and you replicate into destination buckets that have a default retention period set, the destination bucket's default retention period is applied to your object replicas. For more information, see [Locking objects with Object Lock](object-lock.md).

### How delete operations affect replication
<a name="replication-delete-op"></a>

If you delete an object from the source bucket, the following actions occur by default:
+ If you make a DELETE request without specifying an object version ID, Amazon S3 adds a delete marker. Amazon S3 deals with the delete marker as follows:
  + If you are using the latest version of the replication configuration (that is, you specify the `Filter` element in a replication configuration rule), Amazon S3 does not replicate the delete marker by default. However, you can add *delete marker replication* to non-tag-based rules. For more information, see [Replicating delete markers between buckets](delete-marker-replication.md).
  + If you don't specify the `Filter` element, Amazon S3 assumes that the replication configuration is version V1, and it replicates delete markers that resulted from user actions. However, if Amazon S3 deletes an object due to a lifecycle action, the delete marker is not replicated to the destination buckets.
+ If you specify an object version ID to delete in a `DELETE` request, Amazon S3 deletes that object version in the source bucket. But it doesn't replicate the deletion in the destination buckets. In other words, it doesn't delete the same object version from the destination buckets. This protects data from malicious deletions. 
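To see this behavior directly, you can compare the two forms of DELETE request with the AWS SDK for Python (Boto3). The following is a minimal sketch, assuming a hypothetical versioning-enabled source bucket named `amzn-s3-demo-source-bucket` that already has a replication configuration:

```
import boto3

s3 = boto3.client("s3")
bucket = "amzn-s3-demo-source-bucket"  # hypothetical source bucket

# DELETE without a version ID: Amazon S3 adds a delete marker. Whether
# that marker is replicated depends on the rule's <DeleteMarkerReplication>
# setting (V2 configurations don't replicate it by default).
response = s3.delete_object(Bucket=bucket, Key="Tax/doc1")
print("Delete marker created:", response.get("DeleteMarker"))
print("Delete marker version ID:", response.get("VersionId"))

# DELETE with a version ID: Amazon S3 permanently removes that version
# in the source bucket only. This deletion is never replicated to the
# destination buckets. (Here it removes the marker we just created.)
s3.delete_object(Bucket=bucket, Key="Tax/doc1", VersionId=response["VersionId"])
```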

## What isn't replicated with replication configurations?
<a name="replication-what-is-not-replicated"></a>

By default, Amazon S3 doesn't replicate the following:
+ Objects in the source bucket that are replicas that were created by another replication rule. For example, suppose you configure replication where bucket A is the source and bucket B is the destination. Now suppose that you add another replication configuration where bucket B is the source and bucket C is the destination. In this case, objects in bucket B that are replicas of objects in bucket A are not replicated to bucket C. 

  To replicate objects that are replicas, use Batch Replication. Learn more about configuring Batch Replication at [Replicating existing objects](s3-batch-replication-batch.md).
+ Objects in the source bucket that have already been replicated to a different destination. For example, if you change the destination bucket in an existing replication configuration, Amazon S3 won't replicate the objects again.

  To replicate previously replicated objects, use Batch Replication. Learn more about configuring Batch Replication at [Replicating existing objects](s3-batch-replication-batch.md).
+ Batch Replication does not support re-replicating objects that were deleted with the version ID of the object from the destination bucket. To re-replicate these objects, you can copy the source objects in place with a Batch Copy job. Copying those objects in place creates new versions of the objects in the source bucket and initiates replication automatically to the destination. For more information about how to use Batch Copy, see [Examples that use Batch Operations to copy objects](batch-ops-examples-copy.md).
+ By default, when replicating from a different AWS account, delete markers added to the source bucket are not replicated.

  For information about how to replicate delete markers, see [Replicating delete markers between buckets](delete-marker-replication.md).
+ Objects that are stored in the S3 Glacier Flexible Retrieval, S3 Glacier Deep Archive, S3 Intelligent-Tiering Archive Access, or S3 Intelligent-Tiering Deep Archive Access storage classes or tiers. You cannot replicate these objects until you restore them and copy them to a different storage class. 

  To learn more about S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive, see [Storage classes for rarely accessed objects](storage-class-intro.md#sc-glacier).

  To learn more about S3 Intelligent-Tiering, see [Managing storage costs with Amazon S3 Intelligent-Tiering](intelligent-tiering.md).
+ Objects in the source bucket that the bucket owner doesn't have sufficient permissions to replicate. 

  For information about how an object owner can grant permissions to a bucket owner, see [Grant cross-account permissions to upload objects while ensuring that the bucket owner has full control](example-bucket-policies.md#example-bucket-policies-acl-2).
+ Updates to bucket-level subresources. 

  For example, if you change the lifecycle configuration or add a notification configuration to your source bucket, these changes are not applied to the destination bucket. This feature makes it possible to have different configurations on source and destination buckets. 
+ Actions performed by lifecycle configuration. 

  For example, if lifecycle configuration is enabled only on your source bucket, Amazon S3 creates delete markers for expired objects but doesn't replicate those markers. If you want the same lifecycle configuration applied to both the source and destination buckets, enable the same lifecycle configuration on both. For more information about lifecycle configuration, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md).
+ When you're using tag-based replication rules with live replication, new objects must be tagged with the matching replication rule tag in the `PutObject` operation. Otherwise, the objects won't be replicated. If objects are tagged after the `PutObject` operation, those objects also won't be replicated. 

  To replicate objects that have been tagged after the `PutObject` operation, you must use S3 Batch Replication. For more information about Batch Replication, see [Replicating existing objects](s3-batch-replication-batch.md).
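For example, suppose that a replication rule filters on the tag `key1=value1` (a hypothetical tag for illustration). The following Boto3 sketch applies the tag in the `PutObject` request itself, so the object qualifies for live replication, and contrasts it with tagging after upload, which doesn't:

```
import boto3

s3 = boto3.client("s3")
bucket = "amzn-s3-demo-source-bucket"  # hypothetical bucket name

# Tag supplied in the PutObject request: eligible for live replication
# under a rule that filters on key1=value1.
s3.put_object(
    Bucket=bucket,
    Key="Tax/doc1",
    Body=b"example content",
    Tagging="key1=value1",  # URL-encoded key=value pairs
)

# Tag applied after the PutObject request: not picked up by live
# replication. Use S3 Batch Replication for objects tagged this way.
s3.put_object(Bucket=bucket, Key="Legal/doc3", Body=b"example content")
s3.put_object_tagging(
    Bucket=bucket,
    Key="Legal/doc3",
    Tagging={"TagSet": [{"Key": "key1", "Value": "value1"}]},
)
```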

# Requirements and considerations for replication
<a name="replication-requirements"></a>

Amazon S3 replication requires the following:
+ The source bucket owner must have the source and destination AWS Regions enabled for their account. The destination bucket owner must have the destination Region enabled for their account. 

  For more information about enabling or disabling an AWS Region, see [Specify which AWS Regions your account can use](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-regions.html) in the *AWS Account Management Reference Guide*.
+ Both source and destination buckets must have versioning enabled. For more information about versioning, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).
+ Amazon S3 must have permissions to replicate objects from the source bucket to the destination bucket or buckets on your behalf. For more information about these permissions, see [Setting up permissions for live replication](setting-repl-config-perm-overview.md).
+ If the owner of the source bucket doesn't own the object in the bucket, the object owner must grant the bucket owner `READ` and `READ_ACP` permissions with the object access control list (ACL). For more information, see [Access control list (ACL) overview](acl-overview.md). 
+ If the source bucket has S3 Object Lock enabled, the destination buckets must also have S3 Object Lock enabled. 

  To enable replication on a bucket that has Object Lock enabled, you must use the AWS Command Line Interface, REST API, or AWS SDKs. For more general information, see [Locking objects with Object Lock](object-lock.md).
**Note**  
You must grant two new permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two new permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission, it satisfies the requirement. For more information, see [Setting up permissions for live replication](setting-repl-config-perm-overview.md).

For more information, see [Setting up live replication overview](replication-how-setup.md). 
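As a quick preflight check for the versioning requirement in the preceding list, you can query both buckets before you add a replication configuration. This is a sketch only, with hypothetical bucket names:

```
import boto3

s3 = boto3.client("s3")

# Both buckets must report Status == "Enabled" before replication is
# configured. Buckets that never had versioning return no Status at all.
for bucket in ("amzn-s3-demo-source-bucket", "amzn-s3-demo-destination-bucket"):
    status = s3.get_bucket_versioning(Bucket=bucket).get("Status", "Disabled")
    print(f"{bucket}: versioning {status}")
    if status != "Enabled":
        raise SystemExit(f"Enable versioning on {bucket} before replicating.")
```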

If you are setting the replication configuration in a *cross-account scenario*, where the source and destination buckets are owned by different AWS accounts, the following additional requirement applies:
+ The owner of the destination buckets must grant the owner of the source bucket permissions to replicate objects with a bucket policy. For more information, see [(Optional) Step 3: Granting permissions when the source and destination buckets are owned by different AWS accounts](setting-repl-config-perm-overview.md#setting-repl-config-crossacct).
+ The destination buckets cannot be configured as Requester Pays buckets. For more information, see [Using Requester Pays general purpose buckets for storage transfers and usage](RequesterPaysBuckets.md).

## Considerations for replication
<a name="replication-and-other-bucket-configs"></a>

Before you create a replication configuration, be aware of the following considerations. 

**Topics**
+ [Lifecycle configuration and object replicas](#replica-and-lifecycle)
+ [Versioning configuration and replication configuration](#replication-and-versioning)
+ [Using S3 Replication with S3 Intelligent-Tiering](#replication-and-intelligent-tiering)
+ [Logging configuration and replication configuration](#replication-and-logging)
+ [CRR and the destination Region](#replication-and-dest-region)
+ [S3 Batch Replication](#considerations-batch-replication)
+ [S3 Replication Time Control](#considerations-RTC)

### Lifecycle configuration and object replicas
<a name="replica-and-lifecycle"></a>

The time it takes for Amazon S3 to replicate an object depends on the size of the object. For large objects, it can take several hours. Although it might take a while before a replica is available in the destination, it takes the same amount of time to create the replica as it took to create the corresponding object in the source bucket. If a lifecycle configuration is enabled on a destination bucket, the lifecycle rules honor the original creation time of the object, not when the replica became available in the destination bucket. 

Replication configuration requires the bucket to be versioning-enabled. When you enable versioning on a bucket, keep the following in mind:
+ If you have an object Expiration lifecycle configuration, after you enable versioning, add a `NoncurrentVersionExpiration` policy to maintain the same permanent delete behavior as before you enabled versioning (see the sketch after this list).
+ If you have a Transition lifecycle configuration, after you enable versioning, consider adding a `NoncurrentVersionTransition` policy.
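The following Boto3 sketch illustrates the first point. The bucket name and retention periods are hypothetical; the rule pairs an existing `Expiration` action with a `NoncurrentVersionExpiration` action so that noncurrent versions are still permanently deleted:

```
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="amzn-s3-demo-source-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-and-clean-up-noncurrent-versions",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # applies to the whole bucket
                # Current versions still expire as before (a delete marker
                # is added once versioning is enabled).
                "Expiration": {"Days": 365},
                # Permanently delete versions 30 days after they become
                # noncurrent, preserving the pre-versioning delete behavior.
                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            }
        ]
    },
)
```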

### Versioning configuration and replication configuration
<a name="replication-and-versioning"></a>

Both the source and destination buckets must be versioning-enabled when you configure replication on a bucket. After you enable versioning on both the source and destination buckets and configure replication on the source bucket, the following behavior applies:
+ If you attempt to disable versioning on the source bucket, Amazon S3 returns an error (see the sketch after this list). You must remove the replication configuration before you can disable versioning on the source bucket.
+ If you disable versioning on the destination bucket, replication fails. The source object has the replication status `FAILED`.
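You can confirm the first behavior with a small Boto3 sketch. The bucket name is hypothetical, and the exact error code returned may vary:

```
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    # Suspending versioning on a source bucket that has a replication
    # configuration is rejected by Amazon S3.
    s3.put_bucket_versioning(
        Bucket="amzn-s3-demo-source-bucket",  # hypothetical bucket name
        VersioningConfiguration={"Status": "Suspended"},
    )
except ClientError as err:
    # Remove the replication configuration first if you really need to
    # suspend versioning on this bucket.
    print("Suspend rejected:", err.response["Error"]["Code"])
```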

### Using S3 Replication with S3 Intelligent-Tiering
<a name="replication-and-intelligent-tiering"></a>

S3 Intelligent-Tiering is a storage class that is designed to optimize storage costs by automatically moving data to the most cost-effective access tier. For a small monthly object monitoring and automation charge, S3 Intelligent-Tiering monitors access patterns and automatically moves objects that have not been accessed to lower-cost access tiers.

Replicating objects stored in S3 Intelligent-Tiering with S3 Batch Replication, or copying them by invoking [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) or [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), constitutes access. In these cases, the source objects of the copy or replication operations are tiered up to the Frequent Access tier.

For more information about S3 Intelligent-Tiering, see [Managing storage costs with Amazon S3 Intelligent-Tiering](intelligent-tiering.md).

### Logging configuration and replication configuration
<a name="replication-and-logging"></a>

If Amazon S3 delivers logs to a bucket that has replication enabled, it replicates the log objects.

If [server access logs](ServerLogs.md) or [AWS CloudTrail logs](cloudtrail-logging.md) are enabled on your source or destination bucket, Amazon S3 includes replication-related requests in the logs. For example, Amazon S3 logs each object that it replicates. 

### CRR and the destination Region
<a name="replication-and-dest-region"></a>

Amazon S3 Cross-Region Replication (CRR) is used to copy objects across S3 buckets in different AWS Regions. You might choose the Region for your destination bucket based on either your business needs or cost considerations. For example, inter-Region data transfer charges vary depending on the Regions that you choose. 

Suppose that you chose US East (N. Virginia) (`us-east-1`) as the Region for your source bucket. If you choose US West (Oregon) (`us-west-2`) as the Region for your destination buckets, you pay more than if you choose the US East (Ohio) (`us-east-2`) Region. For pricing information, see "Data Transfer Pricing" in [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). 

There are no data transfer charges associated with Same-Region Replication (SRR).

### S3 Batch Replication
<a name="considerations-batch-replication"></a>

For information about considerations for Batch Replication, see [S3 Batch Replication considerations](s3-batch-replication-batch.md#batch-replication-considerations).

### S3 Replication Time Control
<a name="considerations-RTC"></a>

For information about best practices and considerations for S3 Replication Time Control (S3 RTC), see [Best practices and guidelines for S3 RTC](replication-time-control.md#rtc-best-practices).

# Setting up live replication overview
<a name="replication-how-setup"></a>

**Note**  
Objects that existed before you set up replication aren't replicated automatically. In other words, Amazon S3 doesn't replicate objects retroactively. To replicate objects that were created before your replication configuration, use S3 Batch Replication. For more information about configuring Batch Replication, see [Replicating existing objects](s3-batch-replication-batch.md).

To enable live replication—Same-Region Replication (SRR) or Cross-Region Replication (CRR)—add a replication configuration to your source bucket. This configuration tells Amazon S3 to replicate objects as specified. In the replication configuration, you must provide the following:
+ **The destination buckets** – The bucket or buckets where you want Amazon S3 to replicate the objects.
+ **The objects that you want to replicate** – You can replicate all objects in the source bucket or a subset of objects. You identify a subset by providing a [key name prefix](https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#keyprefix), one or more object tags, or both in the configuration.

  For example, if you configure a replication rule to replicate only objects with the key name prefix `Tax/`, Amazon S3 replicates objects with keys such as `Tax/doc1` or `Tax/doc2`. But it doesn't replicate objects with the key `Legal/doc3`. If you specify both a prefix and one or more tags, Amazon S3 replicates only objects that have the specific key prefix and tags.
+ **An AWS Identity and Access Management (IAM) role** – Amazon S3 assumes this IAM role to replicate objects on your behalf. For more information about creating this IAM role and managing permissions, see [Setting up permissions for live replication](setting-repl-config-perm-overview.md).

In addition to these minimum requirements, you can choose the following options: 
+ **Replica storage class** – By default, Amazon S3 stores object replicas using the same storage class as the source object. You can specify a different storage class for the replicas.
+ **Replica ownership** – Amazon S3 assumes that an object replica continues to be owned by the owner of the source object. So when it replicates objects, it also replicates the corresponding object access control list (ACL) or S3 Object Ownership setting. If the source and destination buckets are owned by different AWS accounts, you can configure replication to change the owner of a replica to the AWS account that owns the destination bucket. For more information, see [Changing the replica owner](replication-change-owner.md).

You can configure replication by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), AWS SDKs, or the Amazon S3 REST API. For detailed walkthroughs of how to set up replication, see [Examples for configuring live replication](replication-example-walkthroughs.md).

 Amazon S3 provides REST API operations to support setting up replication rules. For more information, see the following topics in the *Amazon Simple Storage Service API Reference*:
+ [PutBucketReplication](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketReplication.html)
+ [GetBucketReplication](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketReplication.html)
+ [DeleteBucketReplication](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketReplication.html)
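If you work through the AWS SDKs instead of raw REST calls, the same operations are available as SDK methods. The following Boto3 sketch mirrors the minimum requirements described above; the role ARN, account ID, and bucket names are placeholders:

```
import boto3

s3 = boto3.client("s3")

# A minimal V2 rule: a Filter plus the required Priority and
# DeleteMarkerReplication elements, and a single destination bucket.
s3.put_bucket_replication(
    Bucket="amzn-s3-demo-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/replication-role",
        "Rules": [
            {
                "ID": "Rule-1",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {"Prefix": "Tax/"},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket"
                },
            }
        ],
    },
)

# Read the configuration back to verify that it was stored.
print(s3.get_bucket_replication(Bucket="amzn-s3-demo-source-bucket"))
```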

**Topics**
+ [Replication configuration file elements](replication-add-config.md)
+ [Setting up permissions for live replication](setting-repl-config-perm-overview.md)
+ [Examples for configuring live replication](replication-example-walkthroughs.md)

# Replication configuration file elements
<a name="replication-add-config"></a>

Amazon S3 stores a replication configuration as XML. If you're configuring replication programmatically through the Amazon S3 REST API, you specify the various elements of your replication configuration in this XML file. If you're configuring replication through the AWS Command Line Interface (AWS CLI), you specify your replication configuration using JSON format. For JSON examples, see the walkthroughs in [Examples for configuring live replication](replication-example-walkthroughs.md).

**Note**  
The latest version of the replication configuration XML format is V2. XML V2 replication configurations are those that contain the `<Filter>` element for rules, and rules that specify S3 Replication Time Control (S3 RTC).  
To see your replication configuration version, you can use the `GetBucketReplication` API operation. For more information, see [GetBucketReplication](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketReplication.html) in the *Amazon Simple Storage Service API Reference*.   
For backward compatibility, Amazon S3 continues to support the XML V1 replication configuration format. If you've used the XML V1 replication configuration format, see [Backward compatibility considerations](#replication-backward-compat-considerations) for backward compatibility considerations.

In the replication configuration XML file, you must specify an AWS Identity and Access Management (IAM) role and one or more rules, as shown in the following example:

```
<ReplicationConfiguration>
    <Role>IAM-role-ARN</Role>
    <Rule>
        ...
    </Rule>
    <Rule>
         ... 
    </Rule>
     ...
</ReplicationConfiguration>
```

Amazon S3 can't replicate objects without your permission. You grant permissions to Amazon S3 with the IAM role that you specify in the replication configuration. Amazon S3 assumes this IAM role to replicate objects on your behalf. You must grant the required permissions to the IAM role first. For more information about managing permissions, see [Setting up permissions for live replication](setting-repl-config-perm-overview.md).

You add only one rule in a replication configuration in the following scenarios:
+ You want to replicate all objects.
+ You want to replicate only one subset of objects. You identify the object subset by adding a filter in the rule. In the filter, you specify an object key prefix, tags, or a combination of both to identify the subset of objects that the rule applies to. The filters target objects that match the exact values that you specify.

If you want to replicate different subsets of objects, you add multiple rules in a replication configuration. In each rule, you specify a filter that selects a different subset. For example, you might choose to replicate objects that have either `tax/` or `document/` key prefixes. To do this, you add two rules, one that specifies the `tax/` key prefix filter and another that specifies the `document/` key prefix. For more information about object key prefixes, see [Organizing objects using prefixes](using-prefixes.md).
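As a sketch of that two-rule shape (the `tax/` and `document/` prefixes come from the example above; everything else is a placeholder), the rules array contains one entry per subset. The AWS CLI accepts this same structure as JSON:

```
# Two rules, each selecting a different key prefix. Objects under tax/
# and document/ are replicated; all other objects are ignored.
rules = [
    {
        "ID": "replicate-tax",
        "Priority": 1,
        "Status": "Enabled",
        "Filter": {"Prefix": "tax/"},
        "DeleteMarkerReplication": {"Status": "Disabled"},
        "Destination": {"Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket"},
    },
    {
        "ID": "replicate-documents",
        "Priority": 2,
        "Status": "Enabled",
        "Filter": {"Prefix": "document/"},
        "DeleteMarkerReplication": {"Status": "Disabled"},
        "Destination": {"Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket"},
    },
]
```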

The following sections provide additional information.

**Topics**
+ [Basic rule configuration](#replication-config-min-rule-config)
+ [Optional: Specifying a filter](#replication-config-optional-filter)
+ [Additional destination configurations](#replication-config-optional-dest-config)
+ [Example replication configurations](#replication-config-example-configs)
+ [Backward compatibility considerations](#replication-backward-compat-considerations)

## Basic rule configuration
<a name="replication-config-min-rule-config"></a>

Each rule must include the rule's status and priority. The rule must also indicate whether to replicate delete markers. 
+ The `<Status>` element indicates whether the rule is enabled or disabled by using the values `Enabled` or `Disabled`. If a rule is disabled, Amazon S3 doesn't perform the actions specified in the rule. 
+ The `<Priority>` element indicates which rule has precedence whenever two or more replication rules conflict. Amazon S3 attempts to replicate objects according to all replication rules. However, if there are two or more rules with the same destination bucket, then objects are replicated according to the rule with the highest priority. The higher the number, the higher the priority.
+ The `<DeleteMarkerReplication>` element indicates whether to replicate delete markers by using the values `Enabled` or `Disabled`.

In the `<Destination>` element configuration, you must provide the name of the destination bucket or buckets where you want Amazon S3 to replicate objects. 

The following example shows the minimum requirements for a V2 rule. For backward compatibility, Amazon S3 continues to support the XML V1 format. For more information, see [Backward compatibility considerations](#replication-backward-compat-considerations).

```
...
    <Rule>
        <ID>Rule-1</ID>
        <Status>Enabled-or-Disabled</Status>
        <Filter>
            <Prefix></Prefix>   
        </Filter>
        <Priority>integer</Priority>
        <DeleteMarkerReplication>
           <Status>Enabled-or-Disabled</Status>
        </DeleteMarkerReplication>
        <Destination>        
           <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket> 
        </Destination>    
    </Rule>
    <Rule>
         ...
    </Rule>
     ...
...
```

You can also specify other configuration options. For example, you might choose to use a storage class for object replicas that differs from the class for the source object. 

## Optional: Specifying a filter
<a name="replication-config-optional-filter"></a>

To choose a subset of objects that the rule applies to, add an optional filter. You can filter by object key prefix, object tags, or a combination of both. If you filter on both a key prefix and object tags, Amazon S3 combines the filters by using a logical `AND` operator. In other words, the rule applies to a subset of objects with both a specific key prefix and specific tags. 

**Filter based on object key prefix**  
To specify a rule with a filter based on an object key prefix, use the following XML. You can specify only one prefix per rule.

```
<Rule>
    ...
    <Filter>
        <Prefix>key-prefix</Prefix>   
    </Filter>
    ...
</Rule>
...
```

**Filter based on object tags**  
To specify a rule with a filter based on object tags, use the following XML. You can specify one or more object tags.

```
<Rule>
    ...
    <Filter>
        <And>
            <Tag>
                <Key>key1</Key>
                <Value>value1</Value>
            </Tag>
            <Tag>
                <Key>key2</Key>
                <Value>value2</Value>
            </Tag>
             ...
        </And>
    </Filter>
    ...
</Rule>
...
```

**Filter with a key prefix and object tags**  
To specify a rule filter with a combination of a key prefix and object tags, use the following XML. You wrap these filters in an `<And>` parent element. Amazon S3 performs a logical `AND` operation to combine these filters. In other words, the rule applies to a subset of objects with both a specific key prefix and specific tags. 

```
<Rule>
    ...
    <Filter>
        <And>
            <Prefix>key-prefix</Prefix>
            <Tag>
                <Key>key1</Key>
                <Value>value1</Value>
            </Tag>
            <Tag>
                <Key>key2</Key>
                <Value>value2</Value>
            </Tag>
             ...
        </And>
    </Filter>
    ...
</Rule>
...
```

**Note**  
If you specify a rule with an empty `<Filter>` element, your rule applies to all objects in your bucket.
When you're using tag-based replication rules with live replication, new objects must be tagged with the matching replication rule tag in the `PutObject` operation. Otherwise, the objects won't be replicated. If objects are tagged after the `PutObject` operation, those objects also won't be replicated.   
To replicate objects that have been tagged after the `PutObject` operation, you must use S3 Batch Replication. For more information about Batch Replication, see [Replicating existing objects](s3-batch-replication-batch.md).

## Additional destination configurations
<a name="replication-config-optional-dest-config"></a>

In the destination configuration, you specify the bucket or buckets where you want Amazon S3 to replicate objects. You can set configurations to replicate objects from one source bucket to one or more destination buckets. 

```
...
<Destination>        
    <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
</Destination>
...
```

You can add the following options in the `<Destination>` element.

**Topics**
+ [Specify storage class](#storage-class-configuration)
+ [Add multiple destination buckets](#multiple-destination-buckets-configuration)
+ [Specify different parameters for each replication rule with multiple destination buckets](#replication-rule-configuration)
+ [Change replica ownership](#replica-ownership-configuration)
+ [Enable S3 Replication Time Control](#rtc-configuration)
+ [Replicate objects created with server-side encryption by using AWS KMS](#sse-kms-configuration)

### Specify storage class
<a name="storage-class-configuration"></a>

You can specify the storage class for the object replicas, as shown in the following example. By default, if you don't specify a `<StorageClass>` element, Amazon S3 uses the storage class of the source object to create object replicas.

```
...
<Destination>
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
       <StorageClass>storage-class</StorageClass>
</Destination>
...
```

### Add multiple destination buckets
<a name="multiple-destination-buckets-configuration"></a>

You can add multiple destination buckets in a single replication configuration, as follows.

```
...
<Rule>
    <ID>Rule-1</ID>
    <Status>Enabled-or-Disabled</Status>
    <Priority>integer</Priority>
    <DeleteMarkerReplication>
       <Status>Enabled-or-Disabled</Status>
    </DeleteMarkerReplication>
    <Destination>        
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket1</Bucket> 
    </Destination>    
</Rule>
<Rule>
    <ID>Rule-2</ID>
    <Status>Enabled-or-Disabled</Status>
    <Priority>integer</Priority>
    <DeleteMarkerReplication>
       <Status>Enabled-or-Disabled</Status>
    </DeleteMarkerReplication>
    <Destination>        
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket2</Bucket> 
    </Destination>    
</Rule>
...
```

### Specify different parameters for each replication rule with multiple destination buckets
<a name="replication-rule-configuration"></a>

When adding multiple destination buckets in a single replication configuration, you can specify different parameters for each replication rule, as follows.

```
...
<Rule>
    <ID>Rule-1</ID>
    <Status>Enabled-or-Disabled</Status>
    <Priority>integer</Priority>
    <DeleteMarkerReplication>
       <Status>Disabled</Status>
    </DeleteMarkerReplication>
    <Metrics>
       <Status>Enabled</Status>
       <EventThreshold>
          <Minutes>15</Minutes>
       </EventThreshold>
    </Metrics>
    <Destination>        
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket1</Bucket> 
    </Destination>    
</Rule>
<Rule>
    <ID>Rule-2</ID>
    <Status>Enabled-or-Disabled</Status>
    <Priority>integer</Priority>
    <DeleteMarkerReplication>
       <Status>Enabled</Status>
    </DeleteMarkerReplication>
    <Metrics>
       <Status>Enabled</Status>
       <EventThreshold>
          <Minutes>15</Minutes>
       </EventThreshold>
    </Metrics>
    <ReplicationTime>
       <Status>Enabled</Status>
       <Time>
          <Minutes>15</Minutes>
       </Time>
    </ReplicationTime>
    <Destination>        
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket2</Bucket> 
    </Destination>    
</Rule>
...
```

### Change replica ownership
<a name="replica-ownership-configuration"></a>

When the source and destination buckets aren't owned by the same accounts, you can change the ownership of the replica to the AWS account that owns the destination bucket. To do so, add the `<AccessControlTranslation>` element, whose `<Owner>` child element takes the value `Destination`.

```
...
<Destination>
   <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
   <Account>destination-bucket-owner-account-id</Account>
   <AccessControlTranslation>
       <Owner>Destination</Owner>
   </AccessControlTranslation>
</Destination>
...
```

If you don't add the `<AccessControlTranslation>` element to the replication configuration, the replicas are owned by the same AWS account that owns the source object. For more information, see [Changing the replica owner](replication-change-owner.md).

### Enable S3 Replication Time Control
<a name="rtc-configuration"></a>

You can enable S3 Replication Time Control (S3 RTC) in your replication configuration. S3 RTC replicates most objects in seconds and 99.99 percent of objects within 15 minutes (backed by a service-level agreement). 

**Note**  
Only a value of `<Minutes>15</Minutes>` is accepted for the `<EventThreshold>` and `<Time>` elements.

```
...
<Destination>
  <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
  <Metrics>
    <Status>Enabled</Status>
    <EventThreshold>
      <Minutes>15</Minutes> 
    </EventThreshold>
  </Metrics>
  <ReplicationTime>
    <Status>Enabled</Status>
    <Time>
      <Minutes>15</Minutes>
    </Time>
  </ReplicationTime>
</Destination>
...
```

For more information, see [Meeting compliance requirements with S3 Replication Time Control](replication-time-control.md). For API examples, see [PutBucketReplication](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketReplication.html) in the *Amazon Simple Storage Service API Reference*.

### Replicate objects created with server-side encryption by using AWS KMS
<a name="sse-kms-configuration"></a>

Your source bucket might contain objects that were created with server-side encryption by using AWS Key Management Service (AWS KMS) keys (SSE-KMS). By default, Amazon S3 doesn't replicate these objects. You can optionally direct Amazon S3 to replicate these objects. To do so, first explicitly opt into this feature by adding the `<SourceSelectionCriteria>` element. Then provide the AWS KMS key (for the AWS Region of the destination bucket) to use for encrypting object replicas. The following example shows how to specify these elements.

```
...
<SourceSelectionCriteria>
  <SseKmsEncryptedObjects>
    <Status>Enabled</Status>
  </SseKmsEncryptedObjects>
</SourceSelectionCriteria>
<Destination>
  <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
  <EncryptionConfiguration>
    <ReplicaKmsKeyID>AWS KMS key ID to use for encrypting object replicas</ReplicaKmsKeyID>
  </EncryptionConfiguration>
</Destination>
...
```

For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).

## Example replication configurations
<a name="replication-config-example-configs"></a>

To get started, you can add the following example replication configurations to your bucket, as appropriate.

**Important**  
To add a replication configuration to a bucket, you must have the `iam:PassRole` permission. This permission allows you to pass the IAM role that grants Amazon S3 replication permissions. You specify the IAM role by providing the Amazon Resource Name (ARN) that is used in the `<Role>` element in the replication configuration XML. For more information, see [Granting a User Permissions to Pass a Role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*.

**Example 1: Replication configuration with one rule**  
The following basic replication configuration specifies one rule. The rule specifies an IAM role that Amazon S3 can assume and a single destination bucket for object replicas. The `<Status>` element value of `Enabled` indicates that the rule is in effect.  

```
<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Status>Enabled</Status>

    <Destination>
      <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
    </Destination>
  </Rule>
</ReplicationConfiguration>
```
To choose a subset of objects to replicate, you can add a filter. In the following configuration, the filter specifies an object key prefix. This rule applies to objects that have the prefix `Tax/` in their key names.   

```
<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Status>Enabled</Status>
    <Priority>1</Priority>
    <DeleteMarkerReplication>
       <Status>Enabled-or-Disabled</Status>
    </DeleteMarkerReplication>

    <Filter>
       <Prefix>Tax/</Prefix>
    </Filter>

    <Destination>
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
    </Destination>

  </Rule>
</ReplicationConfiguration>
```
If you specify the `<Filter>` element, you must also include the `<Priority>` and `<DeleteMarkerReplication>` elements. In this example, the value that you set for the `<Priority>` element is irrelevant because there is only one rule.  
In the following configuration, the filter specifies one prefix and two tags. The rule applies to the subset of objects that have the specified key prefix and tags. Specifically, it applies to objects that have the `Tax/` prefix in their key names and the two specified object tags. In this example, the value that you set for the `<Priority>` element is irrelevant because there is only one rule.  

```
<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Status>Enabled</Status>
    <Priority>1</Priority>
    <DeleteMarkerReplication>
       <Status>Enabled-or-Disabled</Status>
    </DeleteMarkerReplication>

    <Filter>
        <And>
          <Prefix>Tax/</Prefix>
          <Tag>
             <Key>tagA</Key>
             <Value>valueA</Value>
          </Tag>
          <Tag>
             <Key>tagB</Key>
             <Value>valueB</Value>
          </Tag>
       </And>

    </Filter>

    <Destination>
        <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
    </Destination>

  </Rule>
</ReplicationConfiguration>
```
You can specify a storage class for the object replicas as follows:  

```
<?xml version="1.0" encoding="UTF-8"?>

<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Status>Enabled</Status>
    <Destination>
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
       <StorageClass>storage-class</StorageClass>
    </Destination>
  </Rule>
</ReplicationConfiguration>
```
You can specify any storage class that Amazon S3 supports.

**Example 2: Replication configuration with two rules**  

**Example**  
In the following replication configuration, the rules specify the following:  
+ Each rule filters on a different key prefix so that each rule applies to a distinct subset of objects. In this example, Amazon S3 replicates objects with the key names *`Tax/doc1.pdf`* and *`Project/project1.txt`*, but it doesn't replicate objects with the key name *`PersonalDoc/documentA`*. 
+ Although both rules specify a value for the `<Priority>` element, the rule priority is irrelevant because the rules apply to two distinct sets of objects. The next example shows what happens when rule priority is applied. 
+ The second rule specifies the S3 Standard-IA storage class for object replicas. Amazon S3 uses the specified storage class for those object replicas.
   

```
<?xml version="1.0" encoding="UTF-8"?>

<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Status>Enabled</Status>
    <Priority>1</Priority>
    <DeleteMarkerReplication>
       <Status>Disabled</Status>
    </DeleteMarkerReplication>
    <Filter>
        <Prefix>Tax</Prefix>
    </Filter>
    <Destination>
      <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
    </Destination>
     ...
  </Rule>
 <Rule>
    <Status>Enabled</Status>
    <Priority>2</Priority>
    <DeleteMarkerReplication>
       <Status>Disabled</Status>
    </DeleteMarkerReplication>
    <Filter>
        <Prefix>Project</Prefix>
    </Filter>
    <Destination>
      <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
     <StorageClass>STANDARD_IA</StorageClass>
    </Destination>
     ...
  </Rule>


</ReplicationConfiguration>
```

**Example 3: Replication configuration with two rules with overlapping prefixes**  <a name="overlap-rule-example"></a>
In this configuration, the two rules specify filters with overlapping key prefixes, *`star`* and *`starship`*. Both rules apply to objects with the key name *`starship-x`*. In this case, Amazon S3 uses the rule priority to determine which rule to apply. The higher the number, the higher the priority.  

```
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">

  <Role>arn:aws:iam::account-id:role/role-name</Role>

  <Rule>
    <Status>Enabled</Status>
    <Priority>1</Priority>
    <DeleteMarkerReplication>
       <Status>Disabled</Status>
    </DeleteMarkerReplication>
    <Filter>
        <Prefix>star</Prefix>
    </Filter>
    <Destination>
      <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
    </Destination>
  </Rule>
  <Rule>
    <Status>Enabled</Status>
    <Priority>2</Priority>
    <DeleteMarkerReplication>
       <Status>Disabled</Status>
    </DeleteMarkerReplication>
    <Filter>
        <Prefix>starship</Prefix>
    </Filter>    
    <Destination>
      <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
    </Destination>
  </Rule>
</ReplicationConfiguration>
```

**Example 4: Example walkthroughs**  
For example walkthroughs, see [Examples for configuring live replication](replication-example-walkthroughs.md).

For more information about the XML structure of replication configuration, see [PutBucketReplication](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTreplication.html) in the *Amazon Simple Storage Service API Reference*. 

## Backward compatibility considerations
<a name="replication-backward-compat-considerations"></a>

The latest version of the replication configuration XML format is V2. V2 replication configurations are those that use the `<Filter>` element in rules or that include rules specifying S3 Replication Time Control (S3 RTC).

To see your replication configuration version, you can use the `GetBucketReplication` API operation. For more information, see [GetBucketReplication](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketReplication.html) in the *Amazon Simple Storage Service API Reference*. 
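
For example, the following AWS CLI command (a thin wrapper around `GetBucketReplication`) returns the current configuration. If the rules in the output contain `Filter` elements, the configuration is V2:

```
aws s3api get-bucket-replication \
--bucket amzn-s3-demo-source-bucket
```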

For backward compatibility, Amazon S3 continues to support the XML V1 replication configuration format. If you've used the XML V1 replication configuration format, consider the following issues that affect backward compatibility:
+ The replication configuration XML V2 format includes the `<Filter>` element for rules. With the `<Filter>` element, you can specify object filters based on the object key prefix, tags, or both to scope the objects that the rule applies to. The replication configuration XML V1 format supports filtering based only on the key prefix. In that case, you add the `<Prefix>` element directly as a child element of the `<Rule>` element, as in the following example:

  ```
  <?xml version="1.0" encoding="UTF-8"?>
  <ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Role>arn:aws:iam::account-id:role/role-name</Role>
    <Rule>
      <Status>Enabled</Status>
      <Prefix>key-prefix</Prefix>
      <Destination>
         <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
      </Destination>
  
    </Rule>
  </ReplicationConfiguration>
  ```
+ When you delete an object from your source bucket without specifying an object version ID, Amazon S3 adds a delete marker. If you use the replication configuration XML V1 format, Amazon S3 replicates only delete markers that result from user actions. In other words, Amazon S3 replicates the delete marker only if a user deletes an object. If an expired object is removed by Amazon S3 (as part of a lifecycle action), Amazon S3 doesn't replicate the delete marker. 

  In the replication configuration XML V2 format, you can enable delete marker replication for non-tag-based rules. For more information, see [Replicating delete markers between buckets](delete-marker-replication.md). 
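
For comparison, the following is a hedged sketch that expresses the earlier V1-style prefix rule in the V2 format, shown as the JSON that the AWS CLI expects. The prefix moves inside the `Filter` element, and the rule must state `Priority` and `DeleteMarkerReplication` explicitly (set the status to `Enabled` if you want to keep replicating delete markers that result from user actions):

```
aws s3api put-bucket-replication \
--bucket amzn-s3-demo-source-bucket \
--replication-configuration '{
  "Role": "arn:aws:iam::account-id:role/role-name",
  "Rules": [
    {
      "Status": "Enabled",
      "Priority": 1,
      "Filter": { "Prefix": "key-prefix" },
      "DeleteMarkerReplication": { "Status": "Enabled" },
      "Destination": { "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket" }
    }
  ]
}'
```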

 

# Setting up permissions for live replication
<a name="setting-repl-config-perm-overview"></a>

When setting up live replication in Amazon S3, you must set up the necessary permissions as follows:
+ You must grant the AWS Identity and Access Management (IAM) principal (user or role) who will be creating replication rules a certain set of permissions.
+ Amazon S3 needs permissions to replicate objects on your behalf. You grant these permissions by creating an IAM role and then specifying that role in your replication configuration.
+ When the source and destination buckets aren't owned by the same account, the owner of the destination bucket must also grant the source bucket owner permissions to store the replicas.

**Note**  
If you're using S3 Batch Operations to replicate objects on demand instead of setting up live replication, a different IAM role and policies are required for S3 Batch Replication. For a Batch Replication IAM role and policy examples, see [Configuring an IAM role for S3 Batch Replication](s3-batch-replication-policies.md).

**Topics**
+ [Step 1: Granting permissions to the IAM principal who's creating replication rules](#setting-repl-config-role)
+ [Step 2: Creating an IAM role for Amazon S3 to assume](#setting-repl-config-same-acctowner)
+ [(Optional) Step 3: Granting permissions when the source and destination buckets are owned by different AWS accounts](#setting-repl-config-crossacct)
+ [(Optional) Step 4: Granting permissions to change replica ownership](#change-replica-ownership)

## Step 1: Granting permissions to the IAM principal who's creating replication rules
<a name="setting-repl-config-role"></a>

The IAM user or role that you use to create replication rules needs permissions to create replication rules for one-way or two-way replication. If the user or role doesn't have these permissions, you can't create replication rules. For more information, see [IAM Identities](https://docs.aws.amazon.com/IAM/latest/UserGuide/id.html) in the *IAM User Guide*.

The user or role needs permissions for the following actions:
+ `iam:AttachRolePolicy`
+ `iam:CreatePolicy`
+ `iam:CreateServiceLinkedRole`
+ `iam:PassRole`
+ `iam:PutRolePolicy`
+ `s3:GetBucketVersioning`
+ `s3:GetObjectVersionAcl`
+ `s3:GetObjectVersionForReplication`
+ `s3:GetReplicationConfiguration`
+ `s3:PutReplicationConfiguration`

Following is a sample IAM policy that includes these actions.

------
#### [ JSON ]


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetAccessPoint",
                "s3:GetAccountPublicAccessBlock",
                "s3:GetBucketAcl",
                "s3:GetBucketLocation",
                "s3:GetBucketPolicyStatus",
                "s3:GetBucketPublicAccessBlock",
                "s3:ListAccessPoints",
                "s3:ListAllMyBuckets",
                "s3:PutReplicationConfiguration",
                "s3:GetReplicationConfiguration",
                "s3:GetBucketVersioning",
                "s3:GetObjectVersionForReplication",
                "s3:GetObjectVersionAcl",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:GetObjectVersion",
                "s3:GetBucketOwnershipControls",
                "s3:PutBucketOwnershipControls",
                "s3:GetObjectLegalHold",
                "s3:GetObjectRetention",
                "s3:GetBucketObjectLockConfiguration"
            ],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket1-*",
                "arn:aws:s3:::amzn-s3-demo-bucket2-*/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:List*AccessPoint*",
                "s3:GetMultiRegion*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:Get*",
                "iam:CreateServiceLinkedRole",
                "iam:CreateRole",
                "iam:PassRole"
            ],
            "Resource": "arn:aws:iam::*:role/service-role/s3*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:List*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:AttachRolePolicy",
                "iam:PutRolePolicy",
                "iam:CreatePolicy"
              ],
            "Resource": [
                "arn:aws:iam::*:policy/service-role/s3*",
                "arn:aws:iam::*:role/service-role/s3*"
            ]
        }
    ]
}
```

------
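
As a minimal sketch, you might save the preceding policy to a file and attach it to the IAM role (or user) that creates replication rules. The policy name, file name, and role name in these commands are hypothetical:

```
aws iam create-policy \
--policy-name s3-replication-setup \
--policy-document file://s3-replication-setup.json

aws iam attach-role-policy \
--role-name replication-admin \
--policy-arn arn:aws:iam::account-id:policy/s3-replication-setup
```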

## Step 2: Creating an IAM role for Amazon S3 to assume
<a name="setting-repl-config-same-acctowner"></a>



By default, all Amazon S3 resources—buckets, objects, and related subresources—are private, and only the resource owner can access the resource. Amazon S3 needs permissions to read and replicate objects from the source bucket. You grant these permissions by creating an IAM role and specifying that role in your replication configuration. 

This section explains the trust policy and the minimum required permissions policy that are attached to this IAM role. The example walkthroughs provide step-by-step instructions to create an IAM role. For more information, see [Examples for configuring live replication](replication-example-walkthroughs.md).

**Note**  
If you're using the console to create your replication configuration, we recommend that you skip this section and instead have the console create this IAM role and the necessary trust and permission policies for you.

The *trust policy* identifies which principal identities can assume the IAM role. The *permissions policy* specifies which actions the IAM role can perform, on which resources, and under what conditions. 
+ The following example shows a *trust policy* where you identify Amazon S3 as the AWS service principal that can assume the role:

------
#### [ JSON ]


  ```
  {
     "Version":"2012-10-17",		 	 	 
     "Statement":[
        {
           "Effect":"Allow",
           "Principal":{
              "Service":"s3.amazonaws.com"
           },
           "Action":"sts:AssumeRole"
        }
     ]
  }
  ```

------
+ The following example shows a *trust policy* where you identify Amazon S3 and S3 Batch Operations as service principals that can assume the role. Use this approach if you're creating a Batch Replication job. For more information, see [Create a Batch Replication job for new replication rules or destinations](s3-batch-replication-new-config.md).

------
#### [ JSON ]


  ```
  {
     "Version":"2012-10-17",		 	 	 
     "Statement":[ 
        {
           "Effect":"Allow",
           "Principal":{
              "Service": [
                "s3.amazonaws.com",
                "batchoperations.s3.amazonaws.com"
             ]
           },
           "Action":"sts:AssumeRole"
        }
     ]
  }
  ```

------

  For more information about IAM roles, see [IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) in the *IAM User Guide*.
+ The following example shows the *permissions policy*, where you grant the IAM role permissions to perform replication tasks on your behalf. When Amazon S3 assumes the role, it has the permissions that you specify in this policy. In this policy, `amzn-s3-demo-source-bucket` is the source bucket, and `amzn-s3-demo-destination-bucket` is the destination bucket.

------
#### [ JSON ]


  ```
  {
     "Version":"2012-10-17",		 	 	 
     "Statement": [
        {
           "Effect": "Allow",
           "Action": [
              "s3:GetReplicationConfiguration",
              "s3:ListBucket"
           ],
           "Resource": [
              "arn:aws:s3:::amzn-s3-demo-source-bucket"
           ]
        },
        {
           "Effect": "Allow",
           "Action": [
              "s3:GetObjectVersionForReplication",
              "s3:GetObjectVersionAcl",
              "s3:GetObjectVersionTagging"
           ],
           "Resource": [
              "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
           ]
        },
        {
           "Effect": "Allow",
           "Action": [
              "s3:ReplicateObject",
              "s3:ReplicateDelete",
              "s3:ReplicateTags"
           ],
           "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
        }
     ]
  }
  ```

------

  The permissions policy grants permissions for the following actions:
  +  `s3:GetReplicationConfiguration` and `s3:ListBucket` – Permissions for these actions on the `amzn-s3-demo-source-bucket` bucket allow Amazon S3 to retrieve the replication configuration and list the bucket content. (The current permissions model requires the `s3:ListBucket` permission for accessing delete markers.)
  + `s3:GetObjectVersionForReplication` and `s3:GetObjectVersionAcl` – Permissions for these actions are granted on all objects to allow Amazon S3 to get a specific object version and access control list (ACL) associated with the objects. 

    
  + `s3:ReplicateObject` and `s3:ReplicateDelete` – Permissions for these actions on all objects in the `amzn-s3-demo-destination-bucket` bucket allow Amazon S3 to replicate objects or delete markers to the destination bucket. For information about delete markers, see [How delete operations affect replication](replication-what-is-isnot-replicated.md#replication-delete-op). 
**Note**  
Permissions for the `s3:ReplicateObject` action on the `amzn-s3-demo-destination-bucket` bucket also allow replication of metadata such as object tags and ACLs. Therefore, you don't need to explicitly grant permission for the `s3:ReplicateTags` action.
  + `s3:GetObjectVersionTagging` – Permissions for this action on objects in the `amzn-s3-demo-source-bucket` bucket allow Amazon S3 to read object tags for replication. For more information about object tags, see [Categorizing your objects using tags](object-tagging.md). If Amazon S3 doesn't have the `s3:GetObjectVersionTagging` permission, it replicates the objects, but not the object tags.

  For a list of Amazon S3 actions, see [ Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html#list_amazons3-actions-as-permissions) in the *Service Authorization Reference*.

  For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).
**Important**  
The AWS account that owns the IAM role must have permissions for the actions that it grants to the IAM role.   
For example, suppose that the source bucket contains objects owned by another AWS account. The owner of the objects must explicitly grant the AWS account that owns the IAM role the required permissions through the objects' access control lists (ACLs). Otherwise, Amazon S3 can't access the objects, and replication of the objects fails. For information about ACL permissions, see [Access control list (ACL) overview](acl-overview.md).  
  
The permissions described here are related to the minimum replication configuration. If you choose to add optional replication configurations, you must grant additional permissions to Amazon S3:   
To replicate encrypted objects, you also need to grant the necessary AWS Key Management Service (AWS KMS) key permissions. For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
To use Object Lock with replication, you must grant two additional permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two additional permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission statement, that statement satisfies the requirement. For more information, see [Using Object Lock with S3 Replication](object-lock-managing.md#object-lock-managing-replication).
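
The following is a minimal sketch of granting those two Object Lock permissions as an additional inline policy, assuming the `replicationRole` role name that's used in the AWS CLI walkthrough later in this guide. The file name and policy name are hypothetical:

```
cat > s3-object-lock-permissions.json <<'EOF'
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
            "s3:GetObjectRetention",
            "s3:GetObjectLegalHold"
         ],
         "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
      }
   ]
}
EOF

aws iam put-role-policy \
--role-name replicationRole \
--policy-name objectLockReplicationPolicy \
--policy-document file://s3-object-lock-permissions.json
```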

## (Optional) Step 3: Granting permissions when the source and destination buckets are owned by different AWS accounts
<a name="setting-repl-config-crossacct"></a>

When the source and destination buckets aren't owned by the same account, the owner of the destination bucket must also add a bucket policy to grant the owner of the source bucket permissions to perform replication actions, as shown in the following example. In this example policy, `amzn-s3-demo-destination-bucket` is the destination bucket.

You can also use the Amazon S3 console to automatically generate this bucket policy for you. For more information, see [Enable receiving replicated objects from a source bucket](#receiving-replicated-objects).

**Note**  
The ARN format of the role might appear different. If the role was created by using the console, the ARN format is `arn:aws:iam::account-ID:role/service-role/role-name`. If the role was created by using the AWS CLI, the ARN format is `arn:aws:iam::account-ID:role/role-name`. For more information, see [IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html) in the *IAM User Guide*. 

------
#### [ JSON ]


```
{
    "Version":"2012-10-17",		 	 	 
    "Id": "PolicyForDestinationBucket",
    "Statement": [
        {
            "Sid": "Permissions on objects",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/service-role/source-account-IAM-role"
            },
            "Action": [
                "s3:ReplicateDelete",
                "s3:ReplicateObject"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
        },
        {
            "Sid": "Permissions on bucket",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/service-role/source-account-IAM-role"
            },
            "Action": [
                "s3:List*",
                "s3:GetBucketVersioning",
                "s3:PutBucketVersioning"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket"
        }
    ]
}
```

------
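
To apply this policy, the destination bucket owner can save it to a file and attach it with the AWS CLI. In this sketch, the file name and the `acctB` profile (representing the destination bucket account) are assumptions:

```
aws s3api put-bucket-policy \
--bucket amzn-s3-demo-destination-bucket \
--policy file://destination-bucket-policy.json \
--profile acctB
```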

For an example, see [Configuring replication for buckets in different accounts](replication-walkthrough-2.md).

If objects in the source bucket are tagged, note the following:
+ If the source bucket owner grants Amazon S3 permission for the `s3:GetObjectVersionTagging` and `s3:ReplicateTags` actions to replicate object tags (through the IAM role), Amazon S3 replicates the tags along with the objects. For information about the IAM role, see [Step 2: Creating an IAM role for Amazon S3 to assume](#setting-repl-config-same-acctowner).
+ If the owner of the destination bucket doesn't want to replicate the tags, they can add the following statement to the destination bucket policy to explicitly deny permission for the `s3:ReplicateTags` action. In this policy, `amzn-s3-demo-destination-bucket` is the destination bucket.

  ```
  ...
     "Statement":[
        {
           "Effect":"Deny",
           "Principal":{
              "AWS":"arn:aws:iam::source-bucket-account-id:role/service-role/source-account-IAM-role"
           },
           "Action":"s3:ReplicateTags",
           "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
        }
     ]
  ...
  ```

**Note**  
If you want to replicate encrypted objects, you also must grant the necessary AWS Key Management Service (AWS KMS) key permissions. For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
To use Object Lock with replication, you must grant two additional permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two additional permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission statement, that statement satisfies the requirement. For more information, see [Using Object Lock with S3 Replication](object-lock-managing.md#object-lock-managing-replication). 

**Enable receiving replicated objects from a source bucket**  
Instead of manually adding the preceding policy to your destination bucket, you can quickly generate the policies needed to enable receiving replicated objects from a source bucket through the Amazon S3 console. 

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Buckets** list, choose the bucket that you want to use as a destination bucket.

1. Choose the **Management** tab, and scroll down to **Replication rules**.

1. For **Actions**, choose **Receive replicated objects**. 

   Follow the prompts and enter the AWS account ID of the source bucket account, and then choose **Generate policies**. The console generates an Amazon S3 bucket policy and a KMS key policy.

1. To add this policy to your existing bucket policy, either choose **Apply settings** or choose **Copy** to manually copy the changes. 

1. (Optional) Copy the AWS KMS policy to your desired KMS key policy in the AWS Key Management Service console. 

## (Optional) Step 4: Granting permissions to change replica ownership
<a name="change-replica-ownership"></a>

When different AWS accounts own the source and destination buckets, you can tell Amazon S3 to change the ownership of the replica to the AWS account that owns the destination bucket. To override the ownership of replicas, you must either grant some additional permissions or adjust the S3 Object Ownership settings for the destination bucket. For more information about owner override, see [Changing the replica owner](replication-change-owner.md).
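
As a hedged sketch of what the owner override looks like, the destination of a replication rule carries an `Account` element and an `AccessControlTranslation` element, shown here in the JSON form that the AWS CLI expects. The account IDs, role name, and bucket names are placeholders, and as described on the linked page, the role also needs the `s3:ObjectOwnerOverrideToBucketOwner` permission:

```
aws s3api put-bucket-replication \
--bucket amzn-s3-demo-source-bucket \
--replication-configuration '{
  "Role": "arn:aws:iam::source-account-id:role/role-name",
  "Rules": [
    {
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": {
        "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
        "Account": "destination-account-id",
        "AccessControlTranslation": { "Owner": "Destination" }
      }
    }
  ]
}'
```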

# Examples for configuring live replication
<a name="replication-example-walkthroughs"></a>

The following examples provide step-by-step walkthroughs that show how to configure live replication for common use cases. 

**Note**  
Live replication refers to Same-Region Replication (SRR) and Cross-Region Replication (CRR). Live replication doesn't replicate any objects that existed in the bucket before you set up replication. To replicate objects that existed before you set up replication, use on-demand replication. To sync buckets and replicate existing objects on demand, see [Replicating existing objects](s3-batch-replication-batch.md).

These examples demonstrate how to create a replication configuration by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), and AWS SDKs (AWS SDK for Java and AWS SDK for .NET examples are shown). 

For information about installing and configuring the AWS CLI, see the following topics in the *AWS Command Line Interface User Guide*:
+  [Get started with the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) 
+  [Configure the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) – You must set up at least one profile. If you are exploring cross-account scenarios, set up two profiles.

For information about the AWS SDKs, see [AWS SDK for Java](https://aws.amazon.com/sdk-for-java/) and [AWS SDK for .NET](https://aws.amazon.com/sdk-for-net/).

**Tip**  
For a step-by-step tutorial that demonstrates how to use live replication to replicate data, see [Tutorial: Replicating data within and between AWS Regions using S3 Replication](https://aws.amazon.com/getting-started/hands-on/replicate-data-using-amazon-s3-replication/?ref=docs_gateway/amazons3/replication-example-walkthroughs.html).

**Topics**
+ [Configuring for buckets in the same account](replication-walkthrough1.md)
+ [Configuring for buckets in different accounts](replication-walkthrough-2.md)
+ [Using S3 Replication Time Control](replication-time-control.md)
+ [Replicating encrypted objects](replication-config-for-kms-objects.md)
+ [Replicating metadata changes](replication-for-metadata-changes.md)
+ [Replicating delete markers](delete-marker-replication.md)

# Configuring replication for buckets in the same account
<a name="replication-walkthrough1"></a>

Live replication is the automatic, asynchronous copying of objects across general purpose buckets in the same or different AWS Regions. Live replication copies newly created objects and object updates from a source bucket to a destination bucket or buckets. For more information, see [Replicating objects within and across Regions](replication.md).

When you configure replication, you add replication rules to the source bucket. Replication rules define which source bucket objects to replicate and the destination bucket or buckets where the replicated objects are stored. You can create a rule to replicate all the objects in a bucket or a subset of objects with a specific key name prefix, one or more object tags, or both. A destination bucket can be in the same AWS account as the source bucket, or it can be in a different account.

If you specify an object version ID to delete, Amazon S3 deletes that object version in the source bucket. But it doesn't replicate the deletion in the destination bucket. In other words, it doesn't delete the same object version from the destination bucket. This protects data from malicious deletions.

When you add a replication rule to a bucket, the rule is enabled by default, so it starts working as soon as you save it. 

In this example, you set up live replication for source and destination buckets that are owned by the same AWS account. Examples are provided for using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), and the AWS SDK for Java and AWS SDK for .NET.

## Prerequisites
<a name="replication-prerequisites"></a>

Before you use the following procedures, make sure that you've set up the necessary permissions for replication, depending on whether the source and destination buckets are owned by the same or different accounts. For more information, see [Setting up permissions for live replication](setting-repl-config-perm-overview.md).

**Note**  
If you want to replicate encrypted objects, you also must grant the necessary AWS Key Management Service (AWS KMS) key permissions. For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
To use Object Lock with replication, you must grant two additional permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two additional permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission statement, that statement satisfies the requirement. For more information, see [Using Object Lock with S3 Replication](object-lock-managing.md#object-lock-managing-replication). 

## Using the S3 console
<a name="enable-replication"></a>

To configure a replication rule when the destination bucket is in the same AWS account as the source bucket, follow these steps.

If the destination bucket is in a different account from the source bucket, you must add a bucket policy to the destination bucket to grant the owner of the source bucket account permission to replicate objects in the destination bucket. For more information, see [(Optional) Step 3: Granting permissions when the source and destination buckets are owned by different AWS accounts](setting-repl-config-perm-overview.md#setting-repl-config-crossacct).

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want.

1. Choose the **Management** tab, scroll down to **Replication rules**, and then choose **Create replication rule**.

    

1. In the **Replication rule configuration** section, under **Replication rule name**, enter a name for your rule to help identify the rule later. The name is required and must be unique within the bucket.

1. Under **Status**, **Enabled** is selected by default. An enabled rule starts to work as soon as you save it. If you want to enable the rule later, choose **Disabled**.

1. If the bucket has existing replication rules, you are instructed to set a priority for the rule. You must set a priority for the rule to avoid conflicts caused by objects that are included in the scope of more than one rule. In the case of overlapping rules, Amazon S3 uses the rule priority to determine which rule to apply. The higher the number, the higher the priority. For more information about rule priority, see [Replication configuration file elements](replication-add-config.md).

1. Under **Source bucket**, you have the following options for setting the replication source:
   + To replicate the whole bucket, choose **Apply to all objects in the bucket**. 
   + To replicate all objects that have the same prefix, choose **Limit the scope of this rule using one or more filters**. This limits replication to all objects that have names that begin with the prefix that you specify (for example `pictures`). Enter a prefix in the **Prefix** box. 
**Note**  
If you enter a prefix that is the name of a folder, you must use **/** (forward slash) as the last character (for example, `pictures/`).
   + To replicate all objects with one or more object tags, choose **Add tag** and enter the key-value pair in the boxes. Repeat the procedure to add another tag. You can combine a prefix and tags. For more information about object tags, see [Categorizing your objects using tags](object-tagging.md).

   The new replication configuration XML schema supports prefix and tag filtering and the prioritization of rules. For more information about the new schema, see [Backward compatibility considerations](replication-add-config.md#replication-backward-compat-considerations). For more information about the XML used with the Amazon S3 API that works behind the user interface, see [Replication configuration file elements](replication-add-config.md). The new schema is described as *replication configuration XML V2*.

1. Under **Destination**, choose the bucket where you want Amazon S3 to replicate objects.
**Note**  
The number of destination buckets is limited to the number of AWS Regions in a given partition. A partition is a grouping of Regions. AWS currently has three partitions: `aws` (Standard Regions), `aws-cn` (China Regions), and `aws-us-gov` (AWS GovCloud (US) Regions). To request an increase in your destination bucket quota, you can use [service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html).
   + To replicate to a bucket or buckets in your account, choose **Choose a bucket in this account**, and enter or browse for the destination bucket names. 
   + To replicate to a bucket or buckets in a different AWS account, choose **Specify a bucket in another account**, and enter the destination bucket account ID and bucket name. 

     If the destination is in a different account from the source bucket, you must add a bucket policy to the destination buckets to grant the owner of the source bucket account permission to replicate objects. For more information, see [(Optional) Step 3: Granting permissions when the source and destination buckets are owned by different AWS accounts](setting-repl-config-perm-overview.md#setting-repl-config-crossacct).

     Optionally, if you want to help standardize ownership of new objects in the destination bucket, choose **Change object ownership to the destination bucket owner**. For more information about this option, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).
**Note**  
If versioning is not enabled on the destination bucket, you get a warning that contains an **Enable versioning** button. Choose this button to enable versioning on the bucket.

1. Set up an AWS Identity and Access Management (IAM) role that Amazon S3 can assume to replicate objects on your behalf.

   To set up an IAM role, in the **IAM role** section, select one of the following from the **IAM role** dropdown list:
   + We highly recommend that you choose **Create new role** to have Amazon S3 create a new IAM role for you. When you save the rule, a new policy is generated for the IAM role that matches the source and destination buckets that you choose.
   + You can choose to use an existing IAM role. If you do, you must choose a role that grants Amazon S3 the necessary permissions for replication. Replication fails if this role does not grant Amazon S3 sufficient permissions to follow your replication rule.
**Important**  
When you add a replication rule to a bucket, you must have the `iam:PassRole` permission to be able to pass the IAM role that grants Amazon S3 replication permissions. For more information, see [Granting a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*.

1. To replicate objects in the source bucket that are encrypted with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), under **Encryption**, select **Replicate objects encrypted with AWS KMS**. Under **AWS KMS keys for encrypting destination objects**, you choose the source keys that you allow replication to use. All source KMS keys are included by default. To narrow the KMS key selection, you can choose an alias or key ID. 

   Objects encrypted with AWS KMS keys that you don't select aren't replicated. A KMS key or a group of KMS keys is chosen for you by default, but you can choose different KMS keys if you want. For information about using AWS KMS with replication, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
**Important**  
When you replicate objects that are encrypted with AWS KMS, the AWS KMS request rate doubles in the source Region and increases in the destination Region by the same amount. These increased call rates to AWS KMS are due to the way that data is re-encrypted by using the KMS key that you define for the replication destination Region. AWS KMS has a request rate quota that is per calling account per Region. For information about the quota defaults, see [AWS KMS Quotas - Requests per Second: Varies](https://docs.aws.amazon.com/kms/latest/developerguide/limits.html#requests-per-second) in the *AWS Key Management Service Developer Guide*.   
If your current Amazon S3 `PUT` object request rate during replication is more than half the default AWS KMS rate limit for your account, we recommend that you request an increase to your AWS KMS request rate quota. To request an increase, create a case in the Support Center at [Contact Us](https://aws.amazon.com/contact-us/). For example, suppose that your current `PUT` object request rate is 1,000 requests per second and you use AWS KMS to encrypt your objects. In this case, we recommend that you ask Support to increase your AWS KMS rate limit to 2,500 requests per second, in both your source and destination Regions (if different), to ensure that there is no throttling by AWS KMS.   
To see your `PUT` object request rate in the source bucket, view `PutRequests` in the Amazon CloudWatch request metrics for Amazon S3. For information about viewing CloudWatch metrics, see [Using the S3 console](configure-request-metrics-bucket.md#configure-metrics).
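
As a sketch, you can also retrieve the `PutRequests` metric with the AWS CLI, assuming that request metrics are enabled for the bucket. The `EntireBucket` filter ID and the time window here are placeholder values:

```
aws cloudwatch get-metric-statistics \
--namespace AWS/S3 \
--metric-name PutRequests \
--dimensions Name=BucketName,Value=amzn-s3-demo-source-bucket Name=FilterId,Value=EntireBucket \
--start-time 2024-01-01T00:00:00Z \
--end-time 2024-01-01T01:00:00Z \
--period 300 \
--statistics Sum
```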

   If you chose to replicate objects encrypted with AWS KMS, do the following: 

   1. Under **AWS KMS key for encrypting destination objects**, specify your KMS key in one of the following ways:
     + To choose from a list of available KMS keys, choose **Choose from your AWS KMS keys**, and choose your **KMS key** from the list of available keys.

       Both the AWS managed key (`aws/s3`) and your customer managed keys appear in this list. For more information about customer managed keys, see [Customer keys and AWS keys](https://docs.aws.amazon.com//kms/latest/developerguide/concepts.html#key-mgmt) in the *AWS Key Management Service Developer Guide*.
     + To enter the KMS key Amazon Resource Name (ARN), choose **Enter AWS KMS key ARN**, and enter your KMS key ARN in the field that appears. This encrypts the replicas in the destination bucket. You can find the ARN for your KMS key in the [IAM Console](https://console.aws.amazon.com/iam/), under **Encryption keys**. 
     + To create a new customer managed key in the AWS KMS console, choose **Create a KMS key**.

       For more information about creating an AWS KMS key, see [Creating keys](https://docs.aws.amazon.com//kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service Developer Guide*.
**Important**  
You can only use KMS keys that are enabled in the same AWS Region as the bucket. When you choose **Choose from your KMS keys**, the S3 console lists only 100 KMS keys per Region. If you have more than 100 KMS keys in the same Region, you can see only the first 100 KMS keys in the S3 console. To use a KMS key that is not listed in the console, choose **Enter AWS KMS key ARN**, and enter your KMS key ARN.  
When you use an AWS KMS key for server-side encryption in Amazon S3, you must choose a symmetric encryption KMS key. Amazon S3 supports only symmetric encryption KMS keys and not asymmetric KMS keys. For more information, see [Identifying symmetric and asymmetric KMS keys](https://docs.aws.amazon.com//kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide*.

     For more information about using AWS KMS with Amazon S3, see [Using server-side encryption with AWS KMS keys (SSE-KMS)](UsingKMSEncryption.md).

1. Under **Destination storage class**, if you want to replicate your data into a specific storage class in the destination, choose **Change the storage class for the replicated objects**. Then choose the storage class that you want to use for the replicated objects in the destination. If you don't choose this option, the storage class for replicated objects is the same class as the original objects.

1. Under **Additional replication options**, select any of the following options as needed:
   + If you want to enable S3 Replication Time Control (S3 RTC) in your replication configuration, select **Replication Time Control (RTC)**. For more information about this option, see [Meeting compliance requirements with S3 Replication Time Control](replication-time-control.md).
   + If you want to enable S3 Replication metrics in your replication configuration, select **Replication metrics and events**. For more information, see [Monitoring replication with metrics, event notifications, and statuses](replication-metrics.md).
   + If you want to enable delete marker replication in your replication configuration, select **Delete marker replication**. For more information, see [Replicating delete markers between buckets](delete-marker-replication.md).
   + If you want to enable Amazon S3 replica modification sync in your replication configuration, select **Replica modification sync**. For more information, see [Replicating metadata changes with replica modification sync](replication-for-metadata-changes.md).
**Note**  
When you use S3 RTC or S3 Replication metrics, additional fees apply.

1. To finish, choose **Save**.

1. After you save your rule, you can edit, enable, disable, or delete your rule by selecting your rule and choosing **Edit rule**. 

## Using the AWS CLI
<a name="replication-ex1-cli"></a>

To use the AWS CLI to set up replication when the source and destination buckets are owned by the same AWS account, you do the following:
+ Create source and destination buckets.
+ Enable versioning on the buckets.
+ Create an AWS Identity and Access Management (IAM) role that gives Amazon S3 permission to replicate objects.
+ Add the replication configuration to the source bucket.

Finally, you test the setup by uploading objects to the source bucket and verifying that replicas appear in the destination bucket.

**To set up replication when the source and destination buckets are owned by the same AWS account**

1. Set a credentials profile for the AWS CLI. This example uses the profile name `acctA`. For information about setting credential profiles and using named profiles, see [Configuration and credential file settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) in the *AWS Command Line Interface User Guide*. 
**Important**  
The profile that you use for this example must have the necessary permissions. For example, in the replication configuration, you specify the IAM role that Amazon S3 can assume. You can do this only if the profile that you use has the `iam:PassRole` permission. For more information, see [Grant a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*. If you use administrator credentials to create a named profile, you can perform all the tasks. 

1. Create a source bucket and enable versioning on it by using the following AWS CLI commands. To use these commands, replace the *`user input placeholders`* with your own information. 

   The following `create-bucket` command creates a source bucket named `amzn-s3-demo-source-bucket` in the US East (N. Virginia) (`us-east-1`) Region:

   

   ```
   aws s3api create-bucket \
   --bucket amzn-s3-demo-source-bucket \
   --region us-east-1 \
   --profile acctA
   ```

   The following `put-bucket-versioning` command enables S3 Versioning on the `amzn-s3-demo-source-bucket` bucket: 

   ```
   aws s3api put-bucket-versioning \
   --bucket amzn-s3-demo-source-bucket \
   --versioning-configuration Status=Enabled \
   --profile acctA
   ```

1. Create a destination bucket and enable versioning on it by using the following AWS CLI commands. To use these commands, replace the *`user input placeholders`* with your own information. 
**Note**  
To set up a replication configuration when both source and destination buckets are in the same AWS account, you use the same profile for the source and destination buckets. This example uses `acctA`.   
To test a replication configuration when the buckets are owned by different AWS accounts, specify different profiles for each account. For example, use an `acctB` profile for the destination bucket.

   

   The following `create-bucket` command creates a destination bucket named `amzn-s3-demo-destination-bucket` in the US West (Oregon) (`us-west-2`) Region:

   ```
   aws s3api create-bucket \
   --bucket amzn-s3-demo-destination-bucket \
   --region us-west-2 \
   --create-bucket-configuration LocationConstraint=us-west-2 \
   --profile acctA
   ```

   The following `put-bucket-versioning` command enables S3 Versioning on the `amzn-s3-demo-destination-bucket` bucket: 

   ```
   aws s3api put-bucket-versioning \
   --bucket amzn-s3-demo-destination-bucket \
   --versioning-configuration Status=Enabled \
   --profile acctA
   ```
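
   To confirm that versioning is turned on before you continue, you can run the `get-bucket-versioning` command, shown here for the destination bucket. The output should report `"Status": "Enabled"`:

   ```
   aws s3api get-bucket-versioning \
   --bucket amzn-s3-demo-destination-bucket \
   --profile acctA
   ```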

1. Create an IAM role. You specify this role in the replication configuration that you add to the source bucket later. Amazon S3 assumes this role to replicate objects on your behalf. You create an IAM role in two steps:
   + Create a role.
   + Attach a permissions policy to the role.

   1. Create the IAM role.

      1. Copy the following trust policy and save it to a file named `s3-role-trust-policy.json` in the current directory on your local computer. This policy grants the Amazon S3 service principal permissions to assume the role.

------
#### [ JSON ]


         ```
         {
            "Version":"2012-10-17",		 	 	 
            "Statement":[
               {
                  "Effect":"Allow",
                  "Principal":{
                     "Service":"s3.amazonaws.com"
                  },
                  "Action":"sts:AssumeRole"
               }
            ]
         }
         ```

------

      1. Run the following command to create a role.

         ```
         $ aws iam create-role \
         --role-name replicationRole \
         --assume-role-policy-document file://s3-role-trust-policy.json  \
         --profile acctA
         ```

   1. Attach a permissions policy to the role.

      1. Copy the following permissions policy and save it to a file named `s3-role-permissions-policy.json` in the current directory on your local computer. This policy grants permissions for various Amazon S3 bucket and object actions. 

------
#### [ JSON ]


         ```
         {
            "Version":"2012-10-17",		 	 	 
            "Statement":[
               {
                  "Effect":"Allow",
                  "Action":[
                     "s3:GetObjectVersionForReplication",
                     "s3:GetObjectVersionAcl",
                     "s3:GetObjectVersionTagging"
                  ],
                  "Resource":[
                     "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
                  ]
               },
               {
                  "Effect":"Allow",
                  "Action":[
                     "s3:ListBucket",
                     "s3:GetReplicationConfiguration"
                  ],
                  "Resource":[
                     "arn:aws:s3:::amzn-s3-demo-source-bucket"
                  ]
               },
               {
                  "Effect":"Allow",
                  "Action":[
                     "s3:ReplicateObject",
                     "s3:ReplicateDelete",
                     "s3:ReplicateTags"
                  ],
                  "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
               }
            ]
         }
         ```

------
**Note**  
If you want to replicate encrypted objects, you also must grant the necessary AWS Key Management Service (AWS KMS) key permissions. For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
To use Object Lock with replication, you must grant two additional permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two additional permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission statement, that statement satisfies the requirement. For more information, see [Using Object Lock with S3 Replication](object-lock-managing.md#object-lock-managing-replication). 

      1. Run the following command to create a policy and attach it to the role. Replace the *`user input placeholders`* with your own information.

         ```
         $ aws iam put-role-policy \
         --role-name replicationRole \
         --policy-document file://s3-role-permissions-policy.json \
         --policy-name replicationRolePolicy \
         --profile acctA
         ```
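
         (Optional) To confirm that the inline policy is attached, retrieve it with the `get-role-policy` command:

         ```
         $ aws iam get-role-policy \
         --role-name replicationRole \
         --policy-name replicationRolePolicy \
         --profile acctA
         ```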

1. Add a replication configuration to the source bucket. 

   1. Although the Amazon S3 API requires that you specify the replication configuration as XML, the AWS CLI requires that you specify the replication configuration as JSON. Save the following JSON in a file named `replication.json` in the current directory on your local computer.

      ```
      {
        "Role": "IAM-role-ARN",
        "Rules": [
          {
            "Status": "Enabled",
            "Priority": 1,
            "DeleteMarkerReplication": { "Status": "Disabled" },
            "Filter" : { "Prefix": "Tax"},
            "Destination": {
              "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket"
            }
          }
        ]
      }
      ```

   1. Update the JSON by replacing `amzn-s3-demo-destination-bucket` and `IAM-role-ARN` with your own values. Save the changes.

   1. Run the following `put-bucket-replication` command to add the replication configuration to your source bucket. Be sure to provide the source bucket name:

      ```
      $ aws s3api put-bucket-replication \
      --replication-configuration file://replication.json \
      --bucket amzn-s3-demo-source-bucket \
      --profile acctA
      ```

   To retrieve the replication configuration, use the `get-bucket-replication` command:

   ```
   $ aws s3api get-bucket-replication \
   --bucket amzn-s3-demo-source-bucket \
   --profile acctA
   ```

1. Test the setup in the Amazon S3 console by doing the following steps:

   1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   1. In the left navigation pane, choose **Buckets**. In the **General purpose buckets** list, choose the source bucket.

   1. In the source bucket, create a folder named `Tax`. 

   1. Add sample objects to the `Tax` folder in the source bucket. 
**Note**  
The amount of time that it takes for Amazon S3 to replicate an object depends on the size of the object. For information about how to see the status of replication, see [Getting replication status information](replication-status.md).

      In the destination bucket, verify the following:
      + That Amazon S3 replicated the objects.
      + That the objects are replicas. On the **Properties** tab for your objects, scroll down to the **Object management overview** section. Under **Management configurations**, see the value under **Replication status**. Make sure that this value is set to `REPLICA`.
      + That the replicas are owned by the source bucket account. You can verify the object ownership on the **Permissions** tab for your objects. 

        If the source and destination buckets are owned by different accounts, you can add an optional configuration to tell Amazon S3 to change the replica ownership to the destination account. For an example, see [How to change the replica owner](replication-change-owner.md#replication-walkthrough-3). 
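
You can also check replication from the AWS CLI. Upload a test object under the `Tax/` prefix, and then call `head-object` on it. While replication is in progress, the `ReplicationStatus` of the source object is `PENDING`; it changes to `COMPLETED` after the replica is written. The file name here is a placeholder:

```
aws s3 cp test-doc.pdf s3://amzn-s3-demo-source-bucket/Tax/test-doc.pdf \
--profile acctA

aws s3api head-object \
--bucket amzn-s3-demo-source-bucket \
--key Tax/test-doc.pdf \
--profile acctA
```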

## Using the AWS SDKs
<a name="replication-ex1-sdk"></a>

Use the following guidance and code example to add a replication configuration to a bucket with the AWS SDK for Java or the AWS SDK for .NET.

**Note**  
If you want to replicate encrypted objects, you also must grant the necessary AWS Key Management Service (AWS KMS) key permissions. For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
To use Object Lock with replication, you must grant two additional permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two additional permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission statement, that statement satisfies the requirement. For more information, see [Using Object Lock with S3 Replication](object-lock-managing.md#object-lock-managing-replication). 

------
#### [ Java ]

To add a replication configuration to a bucket and then retrieve and verify the configuration using the AWS SDK for Java, you can use the S3Client to manage replication settings programmatically.

For examples of how to configure replication with the AWS SDK for Java, see [Set replication configuration on a bucket](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_PutBucketReplication_section.html) in the *Amazon S3 API Reference*.

------
#### [ .NET ]

The following AWS SDK for .NET code example adds a replication configuration to a bucket and then retrieves it. To use this code, provide the names for your buckets and the Amazon Resource Name (ARN) for your IAM role. For information about setting up and running the code examples, see [Getting Started with the AWS SDK for .NET](https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config.html) in the *AWS SDK for .NET Developer Guide*. 

```
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class CrossRegionReplicationTest
    {
        private const string sourceBucket = "*** source bucket ***";
        // Bucket ARN example - arn:aws:s3:::destinationbucket
        private const string destinationBucketArn = "*** destination bucket ARN ***";
        private const string roleArn = "*** IAM Role ARN ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint sourceBucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;
        public static void Main()
        {
            s3Client = new AmazonS3Client(sourceBucketRegion);
            EnableReplicationAsync().Wait();
        }
        static async Task EnableReplicationAsync()
        {
            try
            {
                ReplicationConfiguration replConfig = new ReplicationConfiguration
                {
                    Role = roleArn,
                    Rules =
                        {
                            new ReplicationRule
                            {
                                Prefix = "Tax",
                                Status = ReplicationRuleStatus.Enabled,
                                Destination = new ReplicationDestination
                                {
                                    BucketArn = destinationBucketArn
                                }
                            }
                        }
                };

                PutBucketReplicationRequest putRequest = new PutBucketReplicationRequest
                {
                    BucketName = sourceBucket,
                    Configuration = replConfig
                };

                PutBucketReplicationResponse putResponse = await s3Client.PutBucketReplicationAsync(putRequest);

                // Verify configuration by retrieving it.
                await RetrieveReplicationConfigurationAsync(s3Client);
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
            }
        }
        private static async Task RetrieveReplicationConfigurationAsync(IAmazonS3 client)
        {
            // Retrieve the configuration.
            GetBucketReplicationRequest getRequest = new GetBucketReplicationRequest
            {
                BucketName = sourceBucket
            };
            GetBucketReplicationResponse getResponse = await client.GetBucketReplicationAsync(getRequest);
            // Print.
            Console.WriteLine("Printing replication configuration information...");
            Console.WriteLine("Role ARN: {0}", getResponse.Configuration.Role);
            foreach (var rule in getResponse.Configuration.Rules)
            {
                Console.WriteLine("ID: {0}", rule.Id);
                Console.WriteLine("Prefix: {0}", rule.Prefix);
                Console.WriteLine("Status: {0}", rule.Status);
            }
        }
    }
}
```

------

# Configuring replication for buckets in different accounts
<a name="replication-walkthrough-2"></a>

Live replication is the automatic, asynchronous copying of objects across buckets in the same or different AWS Regions. Live replication copies newly created objects and object updates from a source bucket to a destination bucket or buckets. For more information, see [Replicating objects within and across Regions](replication.md).

When you configure replication, you add replication rules to the source bucket. Replication rules define which source bucket objects to replicate and the destination bucket or buckets where the replicated objects are stored. You can create a rule to replicate all the objects in a bucket or a subset of objects with a specific key name prefix, one or more object tags, or both. A destination bucket can be in the same AWS account as the source bucket, or it can be in a different account.
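For example, a rule filter that combines a key name prefix with an object tag might look like the following in a JSON replication configuration (a sketch; the prefix and the tag key and value are placeholders):

```
"Filter": {
    "And": {
        "Prefix": "Tax/",
        "Tags": [
            {
                "Key": "project",
                "Value": "alpha"
            }
        ]
    }
}
```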

If you specify an object version ID to delete, Amazon S3 deletes that object version in the source bucket. But it doesn't replicate the deletion in the destination bucket. In other words, it doesn't delete the same object version from the destination bucket. This protects data from malicious deletions.

When you add a replication rule to a bucket, the rule is enabled by default, so it starts working as soon as you save it. 

Setting up live replication when the source and destination buckets are owned by different AWS accounts is similar to setting up replication when both buckets are owned by the same account. However, there are several differences when you're configuring replication in a cross-account scenario: 
+ The destination bucket owner must grant the source bucket owner permission to replicate objects in the destination bucket policy. 
+ If you're replicating objects that are encrypted with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) in a cross-account scenario, the owner of the KMS key must grant the source bucket owner permission to use the KMS key. For more information, see [Granting additional permissions for cross-account scenarios](replication-config-for-kms-objects.md#replication-kms-cross-acct-scenario). 
+ By default, replicated objects are owned by the source bucket owner. In a cross-account scenario, you might want to configure replication to change the ownership of the replicated objects to the owner of the destination bucket. For more information, see [Changing the replica owner](replication-change-owner.md).

**To configure replication when the source and destination buckets are owned by different AWS accounts**

1. In this example, you create source and destination buckets in two different AWS accounts. You must have two credential profiles set for the AWS CLI. This example uses `acctA` and `acctB` for those profile names. For information about setting credential profiles and using named profiles, see [Configuration and credential file settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) in the *AWS Command Line Interface User Guide*. 

1. Follow the step-by-step instructions in [Configuring replication for buckets in the same account](replication-walkthrough1.md) with the following changes:
   + For all AWS CLI commands related to source bucket activities (such as creating the source bucket, enabling versioning, and creating the IAM role), use the `acctA` profile. Use the `acctB` profile to create the destination bucket. 
   + Make sure that the permissions policy for the IAM role specifies the source and destination buckets that you created for this example.

1. In the console, add the following bucket policy on the destination bucket to allow the owner of the source bucket to replicate objects. For instructions, see [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md). Be sure to edit the policy by providing the AWS account ID of the source bucket owner, the IAM role name, and the destination bucket name. 
**Note**  
To use the following example, replace the `user input placeholders` with your own information. Replace `amzn-s3-demo-destination-bucket` with your destination bucket name. Replace `source-bucket-account-ID:role/service-role/source-account-IAM-role` in the IAM Amazon Resource Name (ARN) with the IAM role that you're using for this replication configuration.  
If you created the IAM service role manually, set the role path in the IAM ARN as `role/service-role/`, as shown in the following policy example. For more information, see [IAM ARNs](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-arns) in the *IAM User Guide*. 

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Id": "",
       "Statement": [
           {
               "Sid": "Set-permissions-for-objects",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:role/service-role/source-account-IAM-role"
               },
               "Action": [
                   "s3:ReplicateObject",
                   "s3:ReplicateDelete"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
           },
           {
               "Sid": "Set-permissions-on-bucket",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:role/service-role/source-account-IAM-role"
               },
               "Action": [
                   "s3:GetBucketVersioning",
                   "s3:PutBucketVersioning"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket"
           }
       ]
   }
   ```

------

1. (Optional) If you're replicating objects that are encrypted with SSE-KMS, the owner of the KMS key must grant the source bucket owner permission to use the KMS key. For more information, see [Granting additional permissions for cross-account scenarios](replication-config-for-kms-objects.md#replication-kms-cross-acct-scenario).

1. (Optional) In replication, the owner of the source object owns the replica by default. When the source and destination buckets are owned by different AWS accounts, you can add optional configuration settings to change replica ownership to the AWS account that owns the destination buckets. This includes granting the `ObjectOwnerOverrideToBucketOwner` permission. For more information, see [Changing the replica owner](replication-change-owner.md).

# Changing the replica owner
<a name="replication-change-owner"></a>

In replication, the owner of the source object also owns the replica by default. However, when the source and destination buckets are owned by different AWS accounts, you might want to change the replica ownership. For example, you might want to change the ownership to restrict access to object replicas. In your replication configuration, you can add optional configuration settings to change replica ownership to the AWS account that owns the destination buckets. 

To change the replica owner, you do the following:
+ Add the *owner override* option to the replication configuration to tell Amazon S3 to change replica ownership. 
+ Grant Amazon S3 the `s3:ObjectOwnerOverrideToBucketOwner` permission to change replica ownership. 
+ Add the `s3:ObjectOwnerOverrideToBucketOwner` permission in the destination bucket policy to allow changing replica ownership. The `s3:ObjectOwnerOverrideToBucketOwner` permission allows the owner of the destination buckets to accept the ownership of object replicas.

For more information, see [Considerations for the ownership override option](#repl-ownership-considerations) and [Adding the owner override option to the replication configuration](#repl-ownership-owneroverride-option). For a working example with step-by-step instructions, see [How to change the replica owner](#replication-walkthrough-3).

**Important**  
Instead of using the owner override option, you can use the bucket owner enforced setting for Object Ownership. When you use replication and the source and destination buckets are owned by different AWS accounts, the bucket owner of the destination bucket can use the bucket owner enforced setting for Object Ownership to change replica ownership to the AWS account that owns the destination bucket. This setting disables object access control lists (ACLs).   
The bucket owner enforced setting mimics the existing owner override behavior without the need of the `s3:ObjectOwnerOverrideToBucketOwner` permission. All objects that are replicated to the destination bucket with the bucket owner enforced setting are owned by the destination bucket owner. For more information about Object Ownership, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

## Considerations for the ownership override option
<a name="repl-ownership-considerations"></a>

When you configure the ownership override option, the following considerations apply:
+ By default, the owner of the source object also owns the replica. Amazon S3 replicates the object version and the ACL associated with it.

  If you add the owner override option to your replication configuration, Amazon S3 replicates only the object version, not the ACL. In addition, Amazon S3 doesn't replicate subsequent changes to the source object ACL. Amazon S3 sets the ACL on the replica that grants full control to the destination bucket owner. 
+  When you update a replication configuration to enable or disable the owner override, the following behavior occurs:
  + If you add the owner override option to the replication configuration:

    When Amazon S3 replicates an object version, it discards the ACL that's associated with the source object. Instead, Amazon S3 sets the ACL on the replica, giving full control to the owner of the destination bucket. Amazon S3 doesn't replicate subsequent changes to the source object ACL. However, this ACL change doesn't apply to object versions that were replicated before you set the owner override option. ACL updates on source objects that were replicated before the owner override was set continue to be replicated (because the object and its replicas continue to have the same owner).
  + If you remove the owner override option from the replication configuration:

    Amazon S3 replicates new objects that appear in the source bucket and their associated ACLs to the destination buckets. For objects that were replicated before you removed the owner override, Amazon S3 doesn't replicate the ACLs, because the object ownership change that Amazon S3 made remains in effect. That is, ACLs that are put on object versions that were replicated while the owner override was set are still not replicated.

## Adding the owner override option to the replication configuration
<a name="repl-ownership-owneroverride-option"></a>

**Warning**  
Add the owner override option only when the source and destination buckets are owned by different AWS accounts. Amazon S3 doesn't check whether the buckets are owned by the same or different accounts. If you add the owner override when both buckets are owned by the same AWS account, Amazon S3 still applies the owner override. This option grants full permissions to the owner of the destination bucket and doesn't replicate subsequent updates to the source objects' access control lists (ACLs). The replica owner can change the ACL that's associated with a replica directly with a `PutObjectAcl` request, but not through replication.

To specify the owner override option, add the following to each `Destination` element: 
+ The `AccessControlTranslation` element, which tells Amazon S3 to change replica ownership
+ The `Account` element, which specifies the AWS account of the destination bucket owner 

```
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
   <Rule>
      ...
      <Destination>
         ...
         <AccessControlTranslation>
            <Owner>Destination</Owner>
         </AccessControlTranslation>
         <Account>destination-bucket-owner-account-id</Account>
      </Destination>
   </Rule>
</ReplicationConfiguration>
```

The following example replication configuration tells Amazon S3 to replicate objects that have the *`Tax`* key prefix to the `amzn-s3-demo-destination-bucket` destination bucket and change ownership of the replicas. To use this example, replace the `user input placeholders` with your own information.

```
<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
   <Role>arn:aws:iam::account-id:role/role-name</Role>
   <Rule>
      <ID>Rule-1</ID>
      <Priority>1</Priority>
      <Status>Enabled</Status>
      <DeleteMarkerReplication>
         <Status>Disabled</Status>
      </DeleteMarkerReplication>
      <Filter>
         <Prefix>Tax</Prefix>
      </Filter>
      <Destination>
         <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
         <Account>destination-bucket-owner-account-id</Account>
         <AccessControlTranslation>
            <Owner>Destination</Owner>
         </AccessControlTranslation>
      </Destination>
   </Rule>
</ReplicationConfiguration>
```

## Granting Amazon S3 permission to change replica ownership
<a name="repl-ownership-add-role-permission"></a>

Grant Amazon S3 permissions to change replica ownership by adding permission for the `s3:ObjectOwnerOverrideToBucketOwner` action in the permissions policy that's associated with the AWS Identity and Access Management (IAM) role. This role is the IAM role that you specified in the replication configuration that allows Amazon S3 to assume and replicate objects on your behalf. To use the following example, replace `amzn-s3-demo-destination-bucket` with the name of the destination bucket.

```
...
{
    "Effect":"Allow",
    "Action":[
        "s3:ObjectOwnerOverrideToBucketOwner"
    ],
    "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
}
...
```

## Adding permission in the destination bucket policy to allow changing replica ownership
<a name="repl-ownership-accept-ownership-b-policy"></a>

The owner of the destination bucket must grant the owner of the source bucket permission to change replica ownership. The owner of the destination bucket grants the owner of the source bucket permission for the `s3:ObjectOwnerOverrideToBucketOwner` action. This permission allows the destination bucket owner to accept ownership of the object replicas. The following example bucket policy statement shows how to do this. To use this example, replace the `user input placeholders` with your own information.

```
...
{
    "Sid":"1",
    "Effect":"Allow",
    "Principal":{"AWS":"source-bucket-account-id"},
    "Action":["s3:ObjectOwnerOverrideToBucketOwner"],
    "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
}
...
```

## How to change the replica owner
<a name="replication-walkthrough-3"></a>

When the source and destination buckets in a replication configuration are owned by different AWS accounts, you can tell Amazon S3 to change replica ownership to the AWS account that owns the destination bucket. The following examples show how to use the Amazon S3 console, the AWS Command Line Interface (AWS CLI), and the AWS SDKs to change replica ownership. 

### Using the S3 console
<a name="replication-ex3-console"></a>

For step-by-step instructions, see [Configuring replication for buckets in the same account](replication-walkthrough1.md). This topic provides instructions for setting up a replication configuration when the source and destination buckets are owned by the same AWS account or by different accounts.

### Using the AWS CLI
<a name="replication-ex3-cli"></a>

The following procedure shows how to change replica ownership by using the AWS CLI. In this procedure, you do the following: 
+ Create the source and destination buckets.
+ Enable versioning on the buckets.
+ Create an AWS Identity and Access Management (IAM) role that gives Amazon S3 permission to replicate objects.
+ Add the replication configuration to the source bucket.
+ In the replication configuration, direct Amazon S3 to change the replica ownership.
+ Test the replication configuration.

**To change replica ownership when the source and destination buckets are owned by different AWS accounts (AWS CLI)**

To use the example AWS CLI commands in this procedure, replace the `user input placeholders` with your own information. 

1. In this example, you create the source and destination buckets in two different AWS accounts. To work with these two accounts, configure the AWS CLI with two named profiles. This example uses profiles named *`acctA`* and *`acctB`*, respectively. For information about setting credential profiles and using named profiles, see [Configuration and credential file settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) in the *AWS Command Line Interface User Guide*. 
**Important**  
The profiles that you use for this procedure must have the necessary permissions. For example, in the replication configuration, you specify the IAM role that Amazon S3 can assume. You can do this only if the profile that you use has the `iam:PassRole` permission. If you use administrator user credentials to create a named profile, then you can perform all of the tasks in this procedure. For more information, see [Granting a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*. 
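   For example, you can create both profiles with the `aws configure` command, entering each account's access keys when prompted:

   ```
   aws configure --profile acctA
   aws configure --profile acctB
   ```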

1. Create the source bucket and enable versioning. This example creates a source bucket named `amzn-s3-demo-source-bucket` in the US East (N. Virginia) (`us-east-1`) Region. 

   ```
   aws s3api create-bucket \
   --bucket amzn-s3-demo-source-bucket \
   --region us-east-1 \
   --profile acctA
   ```

   ```
   aws s3api put-bucket-versioning \
   --bucket amzn-s3-demo-source-bucket \
   --versioning-configuration Status=Enabled \
   --profile acctA
   ```

1. Create a destination bucket and enable versioning. This example creates a destination bucket named `amzn-s3-demo-destination-bucket` in the US West (Oregon) (`us-west-2`) Region. Use an AWS account profile that's different from the one that you used for the source bucket.

   ```
   aws s3api create-bucket \
   --bucket amzn-s3-demo-destination-bucket \
   --region us-west-2 \
   --create-bucket-configuration LocationConstraint=us-west-2 \
   --profile acctB
   ```

   ```
   aws s3api put-bucket-versioning \
   --bucket amzn-s3-demo-destination-bucket \
   --versioning-configuration Status=Enabled \
   --profile acctB
   ```

1. You must add permissions to your destination bucket policy to allow changing the replica ownership.

   1.  Save the following policy to a file named `destination-bucket-policy.json`. Make sure to replace the *`user input placeholders`* with your own information.

------
#### [ JSON ]

****  

      ```
      {
          "Version":"2012-10-17",		 	 	 
          "Statement": [
              {
                  "Sid": "destination_bucket_policy_sid",
                  "Principal": {
                      "AWS": "source-bucket-owner-123456789012"
                  },
                  "Action": [
                      "s3:ReplicateObject",
                      "s3:ReplicateDelete",
                      "s3:ObjectOwnerOverrideToBucketOwner",
                      "s3:ReplicateTags",
                      "s3:GetObjectVersionTagging"
                  ],
                  "Effect": "Allow",
                  "Resource": [
                      "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
                  ]
              }
          ]
      }
      ```

------

   1. Add the preceding policy to the destination bucket by using the following `put-bucket-policy` command:

      ```
      aws s3api put-bucket-policy \
      --region us-west-2 \
      --bucket amzn-s3-demo-destination-bucket \
      --policy file://destination-bucket-policy.json \
      --profile acctB
      ```

1. Create an IAM role. You specify this role in the replication configuration that you add to the source bucket later. Amazon S3 assumes this role to replicate objects on your behalf. You create an IAM role in two steps:
   + Create the role.
   + Attach a permissions policy to the role.

   1. Create the IAM role.

      1. Copy the following trust policy and save it to a file named `s3-role-trust-policy.json` in the current directory on your local computer. This policy grants Amazon S3 permissions to assume the role.

------
#### [ JSON ]

****  

         ```
         {
            "Version":"2012-10-17",		 	 	 
            "Statement":[
               {
                  "Effect":"Allow",
                  "Principal":{
                     "Service":"s3.amazonaws.com"
                  },
                  "Action":"sts:AssumeRole"
               }
            ]
         }
         ```

------

      1. Run the following AWS CLI `create-role` command to create the IAM role:

         ```
         aws iam create-role \
         --role-name replicationRole \
         --assume-role-policy-document file://s3-role-trust-policy.json  \
         --profile acctA
         ```

         Make note of the Amazon Resource Name (ARN) of the IAM role that you created. You will need this ARN in a later step.

   1. Attach a permissions policy to the role.

      1. Copy the following permissions policy and save it to a file named `s3-role-perm-pol-changeowner.json` in the current directory on your local computer. This policy grants permissions for various Amazon S3 bucket and object actions. In the following steps, you attach this policy to the IAM role that you created earlier. 

------
#### [ JSON ]

****  

         ```
         {
            "Version":"2012-10-17",		 	 	 
            "Statement":[
               {
                  "Effect":"Allow",
                  "Action":[
                     "s3:GetObjectVersionForReplication",
                     "s3:GetObjectVersionAcl"
                  ],
                  "Resource":[
                     "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
                  ]
               },
               {
                  "Effect":"Allow",
                  "Action":[
                     "s3:ListBucket",
                     "s3:GetReplicationConfiguration"
                  ],
                  "Resource":[
                     "arn:aws:s3:::amzn-s3-demo-source-bucket"
                  ]
               },
               {
                  "Effect":"Allow",
                  "Action":[
                     "s3:ReplicateObject",
                     "s3:ReplicateDelete",
                     "s3:ObjectOwnerOverrideToBucketOwner",
                     "s3:ReplicateTags",
                     "s3:GetObjectVersionTagging"
                  ],
                  "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
               }
            ]
         }
         ```

------

      1. To attach the preceding permissions policy to the role, run the following `put-role-policy` command:

         ```
         aws iam put-role-policy \
         --role-name replicationRole \
         --policy-document file://s3-role-perm-pol-changeowner.json \
         --policy-name replicationRolechangeownerPolicy \
         --profile acctA
         ```

1. Add a replication configuration to your source bucket.

   1. The AWS CLI requires specifying the replication configuration as JSON. Save the following JSON in a file named `replication.json` in the current directory on your local computer. In the configuration, the `AccessControlTranslation` specifies the change in replica ownership from the source bucket owner to the destination bucket owner. 

      ```
      {
         "Role":"IAM-role-ARN",
         "Rules":[
            {
               "Status":"Enabled",
               "Priority":1,
               "DeleteMarkerReplication":{
                  "Status":"Disabled"
               },
               "Filter":{
               },
               "Status":"Enabled",
               "Destination":{
                  "Bucket":"arn:aws:s3:::amzn-s3-demo-destination-bucket",
                  "Account":"destination-bucket-owner-account-id",
                  "AccessControlTranslation":{
                     "Owner":"Destination"
                  }
               }
            }
         ]
      }
      ```

   1. Edit the JSON by providing values for the destination bucket name, the destination bucket owner account ID, and the `IAM-role-ARN`. Replace *`IAM-role-ARN`* with the ARN of the IAM role that you created earlier. Save the changes.

   1. To add the replication configuration to the source bucket, run the following command:

      ```
      aws s3api put-bucket-replication \
      --replication-configuration file://replication.json \
      --bucket amzn-s3-demo-source-bucket \
      --profile acctA
      ```

1. Test your replication configuration by checking replica ownership in the Amazon S3 console.

   1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   1. Add objects to the source bucket. Verify that the destination bucket contains the object replicas and that the ownership of the replicas has changed to the AWS account that owns the destination bucket.

### Using the AWS SDKs
<a name="replication-ex3-sdk"></a>

For a code example that adds a replication configuration, see [Using the AWS SDKs](replication-walkthrough1.md#replication-ex1-sdk). To change the replica owner, modify that configuration so that the `Destination` element includes the `Account` and `AccessControlTranslation` elements, as shown in the sketch that follows. For conceptual information, see [Changing the replica owner](#replication-change-owner). 
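The following is a minimal sketch of such a configuration using the AWS SDK for Java 2.x. This isn't an official example; the bucket names, account ID, Region, and role ARN are placeholders that you must replace.

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.DeleteMarkerReplicationStatus;
import software.amazon.awssdk.services.s3.model.Destination;
import software.amazon.awssdk.services.s3.model.OwnerOverride;
import software.amazon.awssdk.services.s3.model.ReplicationRuleStatus;

public class ChangeReplicaOwnerSketch {

    public static void main(String[] args) {
        // Placeholder values -- replace with your own.
        String sourceBucket = "amzn-s3-demo-source-bucket";
        String roleArn = "arn:aws:iam::111122223333:role/replicationRole";

        try (S3Client s3 = S3Client.builder().region(Region.US_EAST_1).build()) {
            s3.putBucketReplication(r -> r
                .bucket(sourceBucket)
                .replicationConfiguration(c -> c
                    .role(roleArn)
                    .rules(rule -> rule
                        .priority(1)
                        .status(ReplicationRuleStatus.ENABLED)
                        .filter(f -> f.prefix("Tax"))
                        .deleteMarkerReplication(d -> d.status(DeleteMarkerReplicationStatus.DISABLED))
                        .destination(Destination.builder()
                            .bucket("arn:aws:s3:::amzn-s3-demo-destination-bucket")
                            // Change replica ownership to the destination bucket owner.
                            .account("444455556666")
                            .accessControlTranslation(a -> a.owner(OwnerOverride.DESTINATION))
                            .build()))));
        }
    }
}
```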

# Meeting compliance requirements with S3 Replication Time Control
<a name="replication-time-control"></a>

S3 Replication Time Control (S3 RTC) helps you meet compliance or business requirements for data replication and provides visibility into Amazon S3 replication times. S3 RTC replicates most objects that you upload to Amazon S3 in seconds, and 99.9 percent of those objects within 15 minutes. 

By default, S3 RTC includes two ways to track the progress of replication: 
+ **S3 Replication metrics** – You can use S3 Replication metrics to monitor the total number of S3 API operations that are pending replication, the total size of objects pending replication, the maximum replication time to the destination Region, and the total number of operations that failed replication. You can then monitor each dataset that you replicate separately. You can also enable S3 Replication metrics independently of S3 RTC. For more information, see [Using S3 Replication metrics](repl-metrics.md).

  Replication rules with S3 Replication Time Control (S3 RTC) enabled publish S3 Replication metrics. Replication metrics are available within 15 minutes of enabling S3 RTC. Replication metrics are available through the Amazon S3 console, the Amazon S3 API, the AWS SDKs, the AWS Command Line Interface (AWS CLI), and Amazon CloudWatch. For more information about CloudWatch metrics, see [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md). For more information about viewing replication metrics through the Amazon S3 console, see [Viewing replication metrics](repl-metrics.md#viewing-replication-metrics).

  S3 Replication metrics are billed at the same rate as Amazon CloudWatch custom metrics. For information, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/). 
+ **Amazon S3 Event Notifications** – S3 RTC provides `OperationMissedThreshold` and `OperationReplicatedAfterThreshold` events that notify the bucket owner if object replication exceeds or occurs after the 15-minute threshold. With S3 RTC, Amazon S3 Event Notifications can notify you in the rare instance when objects don't replicate within 15 minutes and when those objects replicate after the 15-minute threshold. 

  Replication events are available within 15 minutes of enabling S3 RTC. Amazon S3 Event Notifications are available through Amazon SQS, Amazon SNS, or AWS Lambda. For more information, see [Receiving replication failure events with Amazon S3 Event Notifications](replication-metrics-events.md). An example notification configuration follows this list.
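For example, the following notification configuration (a sketch; the topic ARN and the `Id` value are placeholders) routes both S3 RTC threshold events to an Amazon SNS topic. You could apply it to the destination bucket with the `aws s3api put-bucket-notification-configuration` command.

```
{
    "TopicConfigurations": [
        {
            "Id": "ReplicationThresholdEvents",
            "TopicArn": "arn:aws:sns:us-west-2:123456789012:replication-alerts",
            "Events": [
                "s3:Replication:OperationMissedThreshold",
                "s3:Replication:OperationReplicatedAfterThreshold"
            ]
        }
    ]
}
```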


## Best practices and guidelines for S3 RTC
<a name="rtc-best-practices"></a>

When replicating data in Amazon S3 with S3 Replication Time Control (S3 RTC) enabled, follow these best practice guidelines to optimize replication performance for your workloads. 

**Topics**
+ [Amazon S3 Replication and request rate performance guidelines](#rtc-request-rate-performance)
+ [Estimating your replication request rates](#estimating-replication-request-rates)
+ [Exceeding S3 RTC data transfer rate quotas](#exceed-rtc-data-transfer-limits)
+ [AWS KMS encrypted object replication request rates](#kms-object-replication-request-rates)

### Amazon S3 Replication and request rate performance guidelines
<a name="rtc-request-rate-performance"></a>

When you upload and retrieve objects from Amazon S3, your applications can achieve thousands of transactions per second in request performance. For example, an application can achieve at least 3,500 `PUT`/`COPY`/`POST`/`DELETE` or 5,500 `GET`/`HEAD` requests per second per prefix in an S3 bucket, including the requests that S3 Replication makes on your behalf. There is no limit to the number of prefixes in a bucket. You can increase your read or write performance by parallelizing requests across prefixes. For example, if you create 10 prefixes in an S3 bucket to parallelize reads, you can scale your read performance to 55,000 read requests per second. 

Amazon S3 automatically scales in response to sustained request rates above these guidelines, or to sustained request rates concurrent with `LIST` requests. While Amazon S3 is internally optimizing for the new request rate, you might temporarily receive HTTP `503` responses until the optimization is complete. This behavior might occur when your requests-per-second rate increases, or when you first enable S3 RTC. During these periods, your replication latency might increase. The S3 RTC service level agreement (SLA) doesn't apply to time periods when the Amazon S3 performance guidelines on requests per second are exceeded. 

The S3 RTC SLA also doesn't apply during time periods where your replication data transfer rate exceeds the default 1 gigabit per second (Gbps) quota. If you expect your replication transfer rate to exceed 1 Gbps, you can contact [AWS Support Center](https://console.aws.amazon.com/support/home#/) or use [Service Quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) to request an increase in your replication transfer rate quota. 

### Estimating your replication request rates
<a name="estimating-replication-request-rates"></a>

Your total request rate, including the requests that Amazon S3 replication makes on your behalf, must be within the Amazon S3 request rate guidelines for both the replication source and destination buckets. For each object replicated, Amazon S3 replication makes up to five `GET`/`HEAD` requests and one `PUT` request to the source bucket, and one `PUT` request to each destination bucket.

For example, if you expect to replicate 100 objects per second, Amazon S3 replication might perform an additional 100 `PUT` requests on your behalf, for a total of 200 `PUT` requests per second to the source S3 bucket. Amazon S3 replication might also perform up to 500 `GET`/`HEAD` requests (5 `GET`/`HEAD` requests for each object that's replicated). 

**Note**  
You incur costs for only one `PUT` request per object replicated. For more information, see the pricing information in the [Amazon S3 FAQs about replication](https://aws.amazon.com/s3/faqs/#Replication). 

### Exceeding S3 RTC data transfer rate quotas
<a name="exceed-rtc-data-transfer-limits"></a>

If you expect your S3 RTC data transfer rate to exceed the default 1 Gbps quota, contact [AWS Support Center](https://console.aws.amazon.com/support/home#/) or use [Service Quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) to request an increase in your replication transfer rate quota. 

### AWS KMS encrypted object replication request rates
<a name="kms-object-replication-request-rates"></a>

When you replicate objects that are encrypted with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), AWS KMS requests per second quotas apply. AWS KMS might reject an otherwise valid request because your request rate exceeds the quota for the number of requests per second. When a request is throttled, AWS KMS returns a `ThrottlingException` error. The AWS KMS request rate quota applies to requests that you make directly and to requests made by Amazon S3 replication on your behalf. 

For example, if you expect to replicate 1,000 objects per second, subtract 2,000 requests from your AWS KMS request rate quota, because Amazon S3 makes one AWS KMS request to decrypt the source object and one to encrypt the replica for each object that it replicates. The resulting request rate per second is available for your AWS KMS workloads excluding replication. You can use [AWS KMS request metrics in Amazon CloudWatch](https://docs.aws.amazon.com/kms/latest/developerguide/monitoring-cloudwatch.html) to monitor the total AWS KMS request rate on your AWS account.

To request an increase to your AWS KMS requests per second quota, contact [AWS Support Center](https://console.aws.amazon.com/support/home#/) or use [Service Quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html). 

## Enabling S3 Replication Time Control
<a name="replication-walkthrough-5"></a>

You can start using S3 Replication Time Control (S3 RTC) with a new or existing replication rule. You can choose to apply your replication rule to an entire bucket, or to objects with a specific prefix or tag. When you enable S3 RTC, S3 Replication metrics are also enabled on your replication rule. 

You can configure S3 RTC by using the Amazon S3 console, the Amazon S3 API, the AWS SDKs, and the AWS Command Line Interface (AWS CLI).

**Topics**
+ [Using the S3 console](#replication-ex5-console)
+ [Using the AWS CLI](#replication-ex5-cli)
+ [Using the AWS SDK for Java](#replication-ex5-sdk)

### Using the S3 console
<a name="replication-ex5-console"></a>

For step-by-step instructions, see [Configuring replication for buckets in the same account](replication-walkthrough1.md). This topic provides instructions for enabling S3 RTC in your replication configuration when the source and destination buckets are owned by the same AWS account or by different accounts.

### Using the AWS CLI
<a name="replication-ex5-cli"></a>

To use the AWS CLI to replicate objects with S3 RTC enabled, you create buckets, enable versioning on the buckets, create an IAM role that gives Amazon S3 permission to replicate objects, and add the replication configuration to the source bucket. The replication configuration must have S3 RTC enabled, as shown in the following example. 

For step-by-step instructions for setting up your replication configuration by using the AWS CLI, see [Configuring replication for buckets in the same account](replication-walkthrough1.md).

The following example replication configuration enables and sets the `ReplicationTime` and `EventThreshold` values for a replication rule. Enabling and setting these values enables S3 RTC on the rule.

```
{
    "Rules": [
        {
            "Status": "Enabled",
            "Filter": {
                "Prefix": "Tax"
            },
            "DeleteMarkerReplication": {
                "Status": "Disabled"
            },
            "Destination": {
                "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
                "Metrics": {
                    "Status": "Enabled",
                    "EventThreshold": {
                        "Minutes": 15
                    }
                },
                "ReplicationTime": {
                    "Status": "Enabled",
                    "Time": {
                        "Minutes": 15
                    }
                }
            },
            "Priority": 1
        }
    ],
    "Role": "IAM-Role-ARN"
}
```

**Important**  
 `Metrics:EventThreshold:Minutes` and `ReplicationTime:Time:Minutes` can only have `15` as a valid value. 

### Using the AWS SDK for Java
<a name="replication-ex5-sdk"></a>

 The following AWS SDK for Java 2.x example adds a replication configuration with S3 Replication Time Control (S3 RTC) enabled.

```
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.DeleteMarkerReplication;
import software.amazon.awssdk.services.s3.model.Destination;
import software.amazon.awssdk.services.s3.model.Metrics;
import software.amazon.awssdk.services.s3.model.MetricsStatus;
import software.amazon.awssdk.services.s3.model.PutBucketReplicationRequest;
import software.amazon.awssdk.services.s3.model.ReplicationConfiguration;
import software.amazon.awssdk.services.s3.model.ReplicationRule;
import software.amazon.awssdk.services.s3.model.ReplicationRuleFilter;
import software.amazon.awssdk.services.s3.model.ReplicationTime;
import software.amazon.awssdk.services.s3.model.ReplicationTimeStatus;
import software.amazon.awssdk.services.s3.model.ReplicationTimeValue;

public class Main {

  public static void main(String[] args) {
    S3Client s3 = S3Client.builder()
      .region(Region.US_EAST_1)
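      // The credentials below are placeholders. In most cases, omit credentialsProvider
      // and rely on the default credential provider chain instead.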
      .credentialsProvider(() -> AwsBasicCredentials.create(
          "AWS_ACCESS_KEY_ID",
          "AWS_SECRET_ACCESS_KEY")
      )
      .build();

    ReplicationConfiguration replicationConfig = ReplicationConfiguration
      .builder()
      .rules(
          ReplicationRule
            .builder()
            .status("Enabled")
            .priority(1)
            .deleteMarkerReplication(
                DeleteMarkerReplication
                    .builder()
                    .status("Disabled")
                    .build()
            )
            .destination(
                Destination
                    .builder()
                    .bucket("destination_bucket_arn")
                    .replicationTime(
                        ReplicationTime.builder().time(
                            ReplicationTimeValue.builder().minutes(15).build()
                        ).status(
                            ReplicationTimeStatus.ENABLED
                        ).build()
                    )
                    .metrics(
                        Metrics.builder().eventThreshold(
                            ReplicationTimeValue.builder().minutes(15).build()
                        ).status(
                            MetricsStatus.ENABLED
                        ).build()
                    )
                    .build()
            )
            .filter(
                ReplicationRuleFilter
                    .builder()
                    .prefix("testtest")
                    .build()
            )
        .build())
        .role("role_arn")
        .build();

    // Put replication configuration
    PutBucketReplicationRequest putBucketReplicationRequest = PutBucketReplicationRequest
      .builder()
      .bucket("source_bucket")
      .replicationConfiguration(replicationConfig)
      .build();

    s3.putBucketReplication(putBucketReplicationRequest);
  }
}
```

# Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)
<a name="replication-config-for-kms-objects"></a>

**Important**  
Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3. Starting January 5, 2023, all new object uploads to Amazon S3 are automatically encrypted at no additional cost and with no impact on performance. The automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in CloudTrail logs, S3 Inventory, S3 Storage Lens, the Amazon S3 console, and as an additional Amazon S3 API response header in the AWS CLI and AWS SDKs. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html).

There are some special considerations when you're replicating objects that have been encrypted by using server-side encryption. Amazon S3 supports the following types of server-side encryption:
+ Server-side encryption with Amazon S3 managed keys (SSE-S3)
+ Server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS)
+ Dual-layer server-side encryption with AWS KMS keys (DSSE-KMS)
+ Server-side encryption with customer-provided keys (SSE-C)

For more information about server-side encryption, see [Protecting data with server-side encryption](serv-side-encryption.md).

This topic explains the permissions that you need to direct Amazon S3 to replicate objects that have been encrypted by using server-side encryption. This topic also provides additional configuration elements that you can add and example AWS Identity and Access Management (IAM) policies that grant the necessary permissions for replicating encrypted objects. 

For an example with step-by-step instructions, see [Enabling replication for encrypted objects](#replication-walkthrough-4). For information about creating a replication configuration, see [Replicating objects within and across Regions](replication.md). 

**Note**  
You can use multi-Region AWS KMS keys in Amazon S3. However, Amazon S3 currently treats multi-Region keys as though they were single-Region keys, and does not use the multi-Region features of the key. For more information, see [ Using multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the *AWS Key Management Service Developer Guide*.

**Topics**
+ [How default bucket encryption affects replication](#replication-default-encryption)
+ [Replicating objects encrypted with SSE-C](#replicationSSEC)
+ [Replicating objects encrypted with SSE-S3, SSE-KMS, or DSSE-KMS](#replications)
+ [Enabling replication for encrypted objects](#replication-walkthrough-4)

## How default bucket encryption affects replication
<a name="replication-default-encryption"></a>

When you enable default encryption for a replication destination bucket, the following encryption behavior applies:
+ If objects in the source bucket are not encrypted, the replica objects in the destination bucket are encrypted by using the default encryption settings of the destination bucket. As a result, the entity tags (ETags) of the source objects differ from the ETags of the replica objects. If you have applications that use ETags, you must update those applications to account for this difference.
+ If objects in the source bucket are encrypted by using server-side encryption with Amazon S3 managed keys (SSE-S3), server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), or dual-layer server-side encryption with AWS KMS keys (DSSE-KMS), the replica objects in the destination bucket use the same type of encryption as the source objects. The default encryption settings of the destination bucket are not used.

## Replicating objects encrypted with SSE-C
<a name="replicationSSEC"></a>

By using server-side encryption with customer-provided keys (SSE-C), you can manage your own proprietary encryption keys. With SSE-C, you manage the keys while Amazon S3 manages the encryption and decryption process. You must provide an encryption key as part of your request, but you don't need to write any code to perform object encryption or decryption. When you upload an object, Amazon S3 encrypts the object by using the key that you provided. Amazon S3 then purges that key from memory. When you retrieve an object, you must provide the same encryption key as part of your request. For more information, see [Using server-side encryption with customer-provided keys (SSE-C)](ServerSideEncryptionCustomerKeys.md).

S3 Replication supports objects that are encrypted with SSE-C. You can configure SSE-C object replication in the Amazon S3 console or with the AWS SDKs in the same way that you configure replication for unencrypted objects. No additional SSE-C permissions are required beyond the permissions that are currently required for replication. 

S3 Replication automatically replicates newly uploaded SSE-C encrypted objects if they are eligible, as specified in your S3 Replication configuration. To replicate existing objects in your buckets, use S3 Batch Replication. For more information about replicating objects, see [Setting up live replication overview](replication-how-setup.md) and [Replicating existing objects with Batch Replication](s3-batch-replication-batch.md).

There are no additional charges for replicating SSE-C objects. For details about replication pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). 

## Replicating objects encrypted with SSE-S3, SSE-KMS, or DSSE-KMS
<a name="replications"></a>

By default, Amazon S3 doesn't replicate objects that are encrypted with SSE-KMS or DSSE-KMS. This section explains the additional configuration elements that you can add to direct Amazon S3 to replicate these objects. 

For an example with step-by-step instructions, see [Enabling replication for encrypted objects](#replication-walkthrough-4). For information about creating a replication configuration, see [Replicating objects within and across Regions](replication.md). 

### Specifying additional information in the replication configuration
<a name="replication-kms-extra-config"></a>

In the replication configuration, you do the following:
+ In the `Destination` element in your replication configuration, add the ID of the symmetric AWS KMS customer managed key that you want Amazon S3 to use to encrypt object replicas, as shown in the following example replication configuration. 
+ Explicitly opt in by enabling replication of objects encrypted by using KMS keys (SSE-KMS or DSSE-KMS). To opt in, add the `SourceSelectionCriteria` element, as shown in the following example replication configuration.


```
<ReplicationConfiguration>
   <Rule>
      ...
      <SourceSelectionCriteria>
         <SseKmsEncryptedObjects>
           <Status>Enabled</Status>
         </SseKmsEncryptedObjects>
      </SourceSelectionCriteria>

      <Destination>
          ...
          <EncryptionConfiguration>
             <ReplicaKmsKeyID>AWS KMS key ARN or Key Alias ARN that's in the same AWS Region as the destination bucket.</ReplicaKmsKeyID>
          </EncryptionConfiguration>
       </Destination>
      ...
   </Rule>
</ReplicationConfiguration>
```

**Important**  
The KMS key must have been created in the same AWS Region as the destination bucket. 
The KMS key *must* be valid. The `PutBucketReplication` API operation doesn't check the validity of KMS keys. If you use a KMS key that isn't valid, you will receive the HTTP `200 OK` status code in response, but replication fails.

The following example shows a replication configuration that includes optional configuration elements. This replication configuration has one rule. The rule applies to objects with the `Tax` key prefix. Amazon S3 uses the specified AWS KMS key ID to encrypt these object replicas.

```
<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration>
   <Role>arn:aws:iam::account-id:role/role-name</Role>
   <Rule>
      <ID>Rule-1</ID>
      <Priority>1</Priority>
      <Status>Enabled</Status>
      <DeleteMarkerReplication>
         <Status>Disabled</Status>
      </DeleteMarkerReplication>
      <Filter>
         <Prefix>Tax</Prefix>
      </Filter>
      <Destination>
         <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
         <EncryptionConfiguration>
            <ReplicaKmsKeyID>AWS KMS key ARN or Key Alias ARN that's in the same AWS Region as the destination bucket.</ReplicaKmsKeyID>
         </EncryptionConfiguration>
      </Destination>
      <SourceSelectionCriteria>
         <SseKmsEncryptedObjects>
            <Status>Enabled</Status>
         </SseKmsEncryptedObjects>
      </SourceSelectionCriteria>
   </Rule>
</ReplicationConfiguration>
```

### Granting additional permissions for the IAM role
<a name="replication-kms-permissions"></a>

To replicate objects that are encrypted at rest by using SSE-S3, SSE-KMS, or DSSE-KMS, grant the following additional permissions to the AWS Identity and Access Management (IAM) role that you specify in the replication configuration. You grant these permissions by updating the permissions policy that's associated with the IAM role. 
+ **`s3:GetObjectVersionForReplication` action for source objects** – This action allows Amazon S3 to replicate both unencrypted objects and objects created with server-side encryption by using SSE-S3, SSE-KMS, or DSSE-KMS.
**Note**  
We recommend that you use the `s3:GetObjectVersionForReplication` action instead of the `s3:GetObjectVersion` action because `s3:GetObjectVersionForReplication` provides Amazon S3 with only the minimum permissions necessary for replication. In addition, the `s3:GetObjectVersion` action allows replication of unencrypted and SSE-S3-encrypted objects, but not of objects that are encrypted by using KMS keys (SSE-KMS or DSSE-KMS). 
+ **`kms:Decrypt` and `kms:Encrypt` AWS KMS actions for the KMS keys**
  + You must grant `kms:Decrypt` permissions for the AWS KMS key that's used to decrypt the source object.
  + You must grant `kms:Encrypt` permissions for the AWS KMS key that's used to encrypt the object replica.
+ **`kms:GenerateDataKey` action for replicating plaintext objects** – If you're replicating plaintext objects to a bucket with SSE-KMS or DSSE-KMS encryption enabled by default, you must include the `kms:GenerateDataKey` permission for the destination encryption context and the KMS key in the IAM policy, as shown in the example statement after this list.
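For example, a statement like the following (a sketch; the Region, bucket name, and key ID are placeholders) grants `kms:GenerateDataKey` for the destination encryption context and KMS key:

```
{
   "Effect":"Allow",
   "Action":[
      "kms:GenerateDataKey"
   ],
   "Condition":{
      "StringLike":{
         "kms:ViaService":"s3.us-west-2.amazonaws.com",
         "kms:EncryptionContext:aws:s3:arn":[
            "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
         ]
      }
   },
   "Resource":[
      "arn:aws:kms:us-west-2:111122223333:key/key-id"
   ]
}
```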

**Important**  
If you use S3 Batch Replication to replicate datasets across Regions and your objects previously had their server-side encryption type updated from SSE-S3 to SSE-KMS, you might need additional permissions. You must have `kms:Decrypt` permissions for the bucket in the source Region, and both `kms:Decrypt` and `kms:Encrypt` permissions for the bucket in the destination Region. 

We recommend that you restrict these permissions only to the destination buckets and objects by using AWS KMS condition keys. The AWS account that owns the IAM role must have permissions for the `kms:Encrypt` and `kms:Decrypt` actions for the KMS keys that are listed in the policy. If the KMS keys are owned by another AWS account, the owner of the KMS keys must grant these permissions to the AWS account that owns the IAM role. For more information about managing access to these KMS keys, see [Using IAM policies with AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/iam-policies.html) in the *AWS Key Management Service Developer Guide*.

### S3 Bucket Keys and replication
<a name="bk-replication"></a>

To use replication with an S3 Bucket Key, the AWS KMS key policy for the KMS key that's used to encrypt the object replica must include the `kms:Decrypt` permission for the calling principal. The call to `kms:Decrypt` verifies the integrity of the S3 Bucket Key before using it. For more information, see [Using an S3 Bucket Key with replication](bucket-key.md#bucket-key-replication).

When an S3 Bucket Key is enabled for the source or destination bucket, the encryption context will be the bucket's Amazon Resource Name (ARN), not the object's ARN (for example, `arn:aws:s3:::bucket_ARN`). You must update your IAM policies to use the bucket ARN for the encryption context:

```
"kms:EncryptionContext:aws:s3:arn": [
"arn:aws:s3:::bucket_ARN"
]
```

For more information, see [Encryption context (`x-amz-server-side-encryption-context`)](specifying-kms-encryption.md#s3-kms-encryption-context) (in the "Using the REST API" section) and [Changes to note before enabling an S3 Bucket Key](bucket-key.md#bucket-key-changes).

### Example policies: Using SSE-S3 and SSE-KMS with replication
<a name="kms-replication-examples"></a>

The following example IAM policies show statements for using SSE-S3 and SSE-KMS with replication.

**Example – Using SSE-KMS with separate destination buckets**  
The following statements are a sketch of a policy for using SSE-KMS with two separate destination buckets; the bucket names, Regions, and key IDs are placeholders. Each statement grants `kms:Encrypt` for the KMS key that encrypts replicas in one of the destination buckets. 
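****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Action":["kms:Encrypt"],
         "Effect":"Allow",
         "Condition":{
            "StringLike":{
               "kms:ViaService":"s3.us-east-1.amazonaws.com",
               "kms:EncryptionContext:aws:s3:arn":[
                  "arn:aws:s3:::amzn-s3-demo-destination-bucket1/key-prefix1*"
               ]
            }
         },
         "Resource":[
            "arn:aws:kms:us-east-1:111122223333:key/key-id1"
         ]
      },
      {
         "Action":["kms:Encrypt"],
         "Effect":"Allow",
         "Condition":{
            "StringLike":{
               "kms:ViaService":"s3.us-west-2.amazonaws.com",
               "kms:EncryptionContext:aws:s3:arn":[
                  "arn:aws:s3:::amzn-s3-demo-destination-bucket2/key-prefix1*"
               ]
            }
         },
         "Resource":[
            "arn:aws:kms:us-west-2:111122223333:key/key-id2"
         ]
      }
   ]
}
```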

**Example – Replicating objects created with SSE-S3 and SSE-KMS**  
The following is a complete IAM policy that grants the necessary permissions to replicate unencrypted objects, objects created with SSE-S3, and objects created with SSE-KMS.    
****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetReplicationConfiguration",
            "s3:ListBucket"
         ],
         "Resource":[
            "arn:aws:s3:::amzn-s3-demo-source-bucket"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetObjectVersionForReplication",
            "s3:GetObjectVersionAcl"
         ],
         "Resource":[
            "arn:aws:s3:::amzn-s3-demo-source-bucket/key-prefix1*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:ReplicateObject",
            "s3:ReplicateDelete"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/key-prefix1*"
      },
      {
         "Action":[
            "kms:Decrypt"
         ],
         "Effect":"Allow",
         "Condition":{
            "StringLike":{
               "kms:ViaService":"s3.us-east-1.amazonaws.com",
               "kms:EncryptionContext:aws:s3:arn":[
                  "arn:aws:s3:::amzn-s3-demo-source-bucket/key-prefix1*"
               ]
            }
         },
         "Resource":[
           "arn:aws:kms:us-east-1:111122223333:key/key-id"
         ]
      },
      {
         "Action":[
            "kms:Encrypt"
         ],
         "Effect":"Allow",
         "Condition":{
            "StringLike":{
               "kms:ViaService":"s3.us-east-1.amazonaws.com",
               "kms:EncryptionContext:aws:s3:arn":[
                  "arn:aws:s3:::amzn-s3-demo-destination-bucket/prefix1*"
               ]
            }
         },
         "Resource":[
            "arn:aws:kms:us-east-1:111122223333:key/key-id"
         ]
      }
   ]
}
```

**Example – Replicating objects with S3 Bucket Keys**  
The following is a complete IAM policy that grants the necessary permissions to replicate objects with S3 Bucket Keys.    
****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetReplicationConfiguration",
            "s3:ListBucket"
         ],
         "Resource":[
            "arn:aws:s3:::amzn-s3-demo-source-bucket"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetObjectVersionForReplication",
            "s3:GetObjectVersionAcl"
         ],
         "Resource":[
            "arn:aws:s3:::amzn-s3-demo-source-bucket/key-prefix1*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:ReplicateObject",
            "s3:ReplicateDelete"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/key-prefix1*"
      },
      {
         "Action":[
            "kms:Decrypt"
         ],
         "Effect":"Allow",
         "Condition":{
            "StringLike":{
               "kms:ViaService":"s3.us-east-1.amazonaws.com",
               "kms:EncryptionContext:aws:s3:arn":[
                  "arn:aws:s3:::amzn-s3-demo-source-bucket"
               ]
            }
         },
         "Resource":[
           "arn:aws:kms:us-east-1:111122223333:key/key-id"
         ]
      },
      {
         "Action":[
            "kms:Encrypt"
         ],
         "Effect":"Allow",
         "Condition":{
            "StringLike":{
               "kms:ViaService":"s3.us-east-1.amazonaws.com",
               "kms:EncryptionContext:aws:s3:arn":[
                  "arn:aws:s3:::amzn-s3-demo-destination-bucket"
               ]
            }
         },
         "Resource":[
            "arn:aws:kms:us-east-1:111122223333:key/key-id"
         ]
      }
   ]
}
```

### Granting additional permissions for cross-account scenarios
<a name="replication-kms-cross-acct-scenario"></a>

In a cross-account scenario, where the source and destination buckets are owned by different AWS accounts, you can use a KMS key to encrypt object replicas. However, the KMS key owner must grant the source bucket owner permission to use the KMS key. 

**Note**  
If you need to replicate SSE-KMS data cross-account, then your replication rule must specify a [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) from AWS KMS for the destination account. [AWS managed keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) don't allow cross-account use, and therefore can't be used to perform cross-account replication.<a name="cross-acct-kms-key-permission"></a>

**To grant the source bucket owner permission to use the KMS key (AWS KMS console)**

1. Sign in to the AWS Management Console and open the AWS KMS console at [https://console.aws.amazon.com/kms](https://console.aws.amazon.com/kms).

1. To change the AWS Region, use the Region selector in the upper-right corner of the page.

1. To view the keys in your account that you create and manage, in the navigation pane choose **Customer managed keys**.

1. Choose the KMS key.

1. Under the **General configuration** section, choose the **Key policy** tab.

1. Scroll down to **Other AWS accounts**.

1. Choose **Add other AWS accounts**. 

   The **Other AWS accounts** dialog box appears. 

1. In the dialog box, choose **Add another AWS account**. For **arn:aws:iam::**, enter the source bucket account ID.

1. Choose **Save changes**.

**To grant the source bucket owner permission to use the KMS key (AWS CLI)**
+ For information about the `put-key-policy` AWS Command Line Interface (AWS CLI) command, see [https://docs.aws.amazon.com/cli/latest/reference/kms/put-key-policy.html](https://docs.aws.amazon.com/cli/latest/reference/kms/put-key-policy.html) in the *AWS CLI Command Reference*. For information about the underlying `PutKeyPolicy` API operation, see [https://docs.aws.amazon.com/kms/latest/APIReference/API_PutKeyPolicy.html](https://docs.aws.amazon.com/kms/latest/APIReference/API_PutKeyPolicy.html) in the [AWS Key Management Service API Reference](https://docs.aws.amazon.com/kms/latest/APIReference/).
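
As a sketch, the key policy statement that the KMS key owner adds might look like the following, where `111122223333` is an illustrative source bucket account ID and `kms:Encrypt` is the minimum action needed to encrypt object replicas. (The console procedure above grants a broader set of KMS permissions.)

```
{
   "Sid":"Allow the source bucket account to use this key for replication",
   "Effect":"Allow",
   "Principal":{
      "AWS":"arn:aws:iam::111122223333:root"
   },
   "Action":[
      "kms:Encrypt"
   ],
   "Resource":"*"
}
```

After you merge a statement like this into the complete key policy and save the policy to a file (`key-policy.json` is an assumed name), you can apply it with the following command:

```
aws kms put-key-policy \
--key-id key-id \
--policy-name default \
--policy file://key-policy.json
```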

### AWS KMS transaction quota considerations
<a name="crr-kms-considerations"></a>

When you add many new objects with AWS KMS encryption after enabling Cross-Region Replication (CRR), you might experience throttling (HTTP `503 Service Unavailable` errors). Throttling occurs when the number of AWS KMS transactions per second exceeds the current quota. For more information, see [Quotas](https://docs.aws.amazon.com/kms/latest/developerguide/limits.html) in the *AWS Key Management Service Developer Guide*.

To request a quota increase, use Service Quotas. For more information, see [Requesting a quota increase](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html). If Service Quotas isn't supported in your Region, [open an AWS Support case](https://console.aws.amazon.com/support/home#/). 
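
For example, you can use the AWS CLI to look up the AWS KMS request-rate quotas and their quota codes, and then submit an increase request. In the following sketch, the quota code and desired value are placeholders:

```
aws service-quotas list-service-quotas --service-code kms

aws service-quotas request-service-quota-increase \
--service-code kms \
--quota-code L-xxxxxxxx \
--desired-value 2000
```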

## Enabling replication for encrypted objects
<a name="replication-walkthrough-4"></a>

By default, Amazon S3 doesn't replicate objects that are encrypted by using server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) or dual-layer server-side encryption with AWS KMS keys (DSSE-KMS). To replicate objects encrypted with SSE-KMS or DSSE-KMS, you must modify the bucket replication configuration to tell Amazon S3 to replicate these objects. This example explains how to use the Amazon S3 console and the AWS Command Line Interface (AWS CLI) to change the bucket replication configuration to enable replicating encrypted objects.

**Note**  
When an S3 Bucket Key is enabled for the source or destination bucket, the encryption context will be the bucket's Amazon Resource Name (ARN), not the object's ARN. You must update your IAM policies to use the bucket ARN for the encryption context. For more information, see [S3 Bucket Keys and replication](#bk-replication).

**Note**  
You can use multi-Region AWS KMS keys in Amazon S3. However, Amazon S3 currently treats multi-Region keys as though they were single-Region keys, and does not use the multi-Region features of the key. For more information, see [Using multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the *AWS Key Management Service Developer Guide*.

### Using the S3 console
<a name="replication-ex4-console"></a>

For step-by-step instructions, see [Configuring replication for buckets in the same account](replication-walkthrough1.md). This topic provides instructions for setting up a replication configuration when the source and destination buckets are owned by the same or different AWS accounts.

### Using the AWS CLI
<a name="replication-ex4-cli"></a>

To replicate encrypted objects with the AWS CLI, you do the following: 
+ Create source and destination buckets and enable versioning on these buckets. 
+ Create an AWS Identity and Access Management (IAM) service role that gives Amazon S3 permission to replicate objects. The IAM role's permissions include the necessary permissions to replicate the encrypted objects.
+ Add a replication configuration to the source bucket. The replication configuration provides information related to replicating objects that are encrypted by using KMS keys.
+ Add encrypted objects to the source bucket. 
+ Test the setup to confirm that your encrypted objects are being replicated to the destination bucket.

The following procedures walk you through this process. 

**To replicate server-side encrypted objects (AWS CLI)**

To use the examples in this procedure, replace the `user input placeholders` with your own information.

1. In this example, you create both the source (*`amzn-s3-demo-source-bucket`*) and destination (*`amzn-s3-demo-destination-bucket`*) buckets in the same AWS account. You also set a credentials profile for the AWS CLI. This example uses the profile name `acctA`. 

   For information about setting credential profiles and using named profiles, see [Configuration and credential file settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) in the *AWS Command Line Interface User Guide*. 
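
   If you haven't already set up the `acctA` profile, you can create it with the following command, which prompts you for your access keys, default Region, and output format:

   ```
   aws configure --profile acctA
   ```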

1. Use the following commands to create the `amzn-s3-demo-source-bucket` bucket and enable versioning on it. The following example commands create the `amzn-s3-demo-source-bucket` bucket in the US East (N. Virginia) (`us-east-1`) Region.

   ```
   aws s3api create-bucket \
   --bucket amzn-s3-demo-source-bucket \
   --region us-east-1 \
   --profile acctA
   ```

   ```
   aws s3api put-bucket-versioning \
   --bucket amzn-s3-demo-source-bucket \
   --versioning-configuration Status=Enabled \
   --profile acctA
   ```

1. Use the following commands to create the `amzn-s3-demo-destination-bucket` bucket and enable versioning on it. The following example commands create the `amzn-s3-demo-destination-bucket` bucket in the US West (Oregon) (`us-west-2`) Region. 
**Note**  
To set up a replication configuration when both `amzn-s3-demo-source-bucket` and `amzn-s3-demo-destination-bucket` buckets are in the same AWS account, you use the same profile. This example uses `acctA`. To configure replication when the buckets are owned by different AWS accounts, you specify different profiles for each. 

   

   ```
   aws s3api create-bucket \
   --bucket amzn-s3-demo-destination-bucket \
   --region us-west-2 \
   --create-bucket-configuration LocationConstraint=us-west-2 \
   --profile acctA
   ```

   ```
   aws s3api put-bucket-versioning \
   --bucket amzn-s3-demo-destination-bucket \
   --versioning-configuration Status=Enabled \
   --profile acctA
   ```

1. Next, you create an IAM service role. You will specify this role in the replication configuration that you add to the `amzn-s3-demo-source-bucket` bucket later. Amazon S3 assumes this role to replicate objects on your behalf. You create an IAM role in two steps:
   + Create a service role.
   + Attach a permissions policy to the role.

   1. To create an IAM service role, do the following:

      1. Copy the following trust policy and save it to a file called `s3-role-trust-policy-kmsobj.json` in the current directory on your local computer. This policy grants the Amazon S3 service principal permissions to assume the role so that Amazon S3 can perform tasks on your behalf.


         ```
         {
            "Version":"2012-10-17",		 	 	 
            "Statement":[
               {
                  "Effect":"Allow",
                  "Principal":{
                     "Service":"s3.amazonaws.com"
                  },
                  "Action":"sts:AssumeRole"
               }
            ]
         }
         ```


      1. Use the following command to create the role:

         ```
         $ aws iam create-role \
         --role-name replicationRolekmsobj \
         --assume-role-policy-document file://s3-role-trust-policy-kmsobj.json  \
         --profile acctA
         ```

   1. Next, you attach a permissions policy to the role. This policy grants permissions for various Amazon S3 bucket and object actions. 

      1. Copy the following permissions policy and save it to a file named `s3-role-permissions-policykmsobj.json` in the current directory on your local computer. In the next step, you create this policy and attach it to the role that you created earlier.
**Important**  
In the permissions policy, you specify the AWS KMS key IDs that will be used for encryption of the `amzn-s3-demo-source-bucket` and `amzn-s3-demo-destination-bucket` buckets. You must create two separate KMS keys for the `amzn-s3-demo-source-bucket` and `amzn-s3-demo-destination-bucket` buckets. AWS KMS keys aren't shared outside the AWS Region in which they were created. 


         ```
         {
            "Version":"2012-10-17",		 	 	 
            "Statement":[
               {
                  "Action":[
                     "s3:ListBucket",
                     "s3:GetReplicationConfiguration",
                     "s3:GetObjectVersionForReplication",
                     "s3:GetObjectVersionAcl",
                     "s3:GetObjectVersionTagging"
                  ],
                  "Effect":"Allow",
                  "Resource":[
                     "arn:aws:s3:::amzn-s3-demo-source-bucket",
                     "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
                  ]
               },
               {
                  "Action":[
                     "s3:ReplicateObject",
                     "s3:ReplicateDelete",
                     "s3:ReplicateTags"
                  ],
                  "Effect":"Allow",
                  "Condition":{
                     "StringLikeIfExists":{
                        "s3:x-amz-server-side-encryption":[
                           "aws:kms",
                           "AES256",
                           "aws:kms:dsse"
                        ],
                        "s3:x-amz-server-side-encryption-aws-kms-key-id":[
                           "AWS KMS key IDs(in ARN format) to use for encrypting object replicas"  
                        ]
                     }
                  },
                  "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
               },
               {
                  "Action":[
                     "kms:Decrypt"
                  ],
                  "Effect":"Allow",
                  "Condition":{
                     "StringLike":{
                        "kms:ViaService":"s3.us-east-1.amazonaws.com",
                        "kms:EncryptionContext:aws:s3:arn":[
                           "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
                        ]
                     }
                  },
                  "Resource":[
                     "arn:aws:kms:us-east-1:111122223333:key/key-id" 
                  ]
               },
               {
                  "Action":[
                     "kms:Encrypt"
                  ],
                  "Effect":"Allow",
                  "Condition":{
                     "StringLike":{
                        "kms:ViaService":"s3.us-west-2.amazonaws.com",
                        "kms:EncryptionContext:aws:s3:arn":[
                           "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
                        ]
                     }
                  },
                  "Resource":[
                     "arn:aws:kms:us-west-2:111122223333:key/key-id" 
                  ]
               }
            ]
         }
         ```


      1. Create a policy and attach it to the role.

         ```
         $ aws iam put-role-policy \
         --role-name replicationRolekmsobj \
         --policy-document file://s3-role-permissions-policykmsobj.json \
         --policy-name replicationRolechangeownerPolicy \
         --profile acctA
         ```

1. Next, add the following replication configuration to the `amzn-s3-demo-source-bucket` bucket. It tells Amazon S3 to replicate objects with the `Tax/` prefix to the `amzn-s3-demo-destination-bucket` bucket. 
**Important**  
In the replication configuration, you specify the IAM role that Amazon S3 can assume. You can do this only if you have the `iam:PassRole` permission. The profile that you specify in the CLI command must have this permission. For more information, see [Granting a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*.

   ```
    <ReplicationConfiguration>
     <Role>IAM-Role-ARN</Role>
     <Rule>
       <Priority>1</Priority>
       <DeleteMarkerReplication>
          <Status>Disabled</Status>
       </DeleteMarkerReplication>
       <Filter>
          <Prefix>Tax</Prefix>
       </Filter>
       <Status>Enabled</Status>
       <SourceSelectionCriteria>
         <SseKmsEncryptedObjects>
           <Status>Enabled</Status>
         </SseKmsEncryptedObjects>
       </SourceSelectionCriteria>
       <Destination>
         <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
         <EncryptionConfiguration>
           <ReplicaKmsKeyID>AWS KMS key IDs to use for encrypting object replicas</ReplicaKmsKeyID>
         </EncryptionConfiguration>
       </Destination>
     </Rule>
   </ReplicationConfiguration>
   ```

   To add a replication configuration to the `amzn-s3-demo-source-bucket` bucket, do the following:

   1. The AWS CLI requires you to specify the replication configuration as JSON. Save the following JSON in a file (`replication.json`) in the current directory on your local computer. 

      ```
      {
         "Role":"IAM-Role-ARN",
         "Rules":[
            {
               "Status":"Enabled",
               "Priority":1,
               "DeleteMarkerReplication":{
                  "Status":"Disabled"
               },
               "Filter":{
                  "Prefix":"Tax"
               },
               "Destination":{
                  "Bucket":"arn:aws:s3:::amzn-s3-demo-destination-bucket",
                  "EncryptionConfiguration":{
                     "ReplicaKmsKeyID":"AWS KMS key IDs (in ARN format) to use for encrypting object replicas"
                  }
               },
               "SourceSelectionCriteria":{
                  "SseKmsEncryptedObjects":{
                     "Status":"Enabled"
                  }
               }
            }
         ]
      }
      ```

   1. Edit the JSON to provide values for the `amzn-s3-demo-destination-bucket` bucket, `AWS KMS key IDs (in ARN format)`, and `IAM-Role-ARN`. Save the changes.

   1. Use the following command to add the replication configuration to your `amzn-s3-demo-source-bucket` bucket. Be sure to provide the `amzn-s3-demo-source-bucket` bucket name.

      ```
      $ aws s3api put-bucket-replication \
      --replication-configuration file://replication.json \
      --bucket amzn-s3-demo-source-bucket \
      --profile acctA
      ```

1. Test the configuration to verify that encrypted objects are replicated. In the Amazon S3 console, do the following:

   1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   1. In the `amzn-s3-demo-source-bucket` bucket, create a folder named `Tax`. 

   1. Add sample objects to the folder. Be sure to choose the encryption option and specify your KMS key to encrypt the objects. 

   1. Verify that the `amzn-s3-demo-destination-bucket` bucket contains the object replicas and that they are encrypted by using the KMS key that you specified in the configuration. For more information, see [Getting replication status information](replication-status.md).
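
You can also verify the encryption of a replica from the AWS CLI. The following `head-object` sketch uses an illustrative object key; in the response, `ServerSideEncryption` should be `aws:kms`, and `SSEKMSKeyId` should match the KMS key that you specified for the destination.

```
aws s3api head-object \
--bucket amzn-s3-demo-destination-bucket \
--key Tax/sample-object.pdf \
--profile acctA
```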

### Using the AWS SDKs
<a name="replication-ex4-sdk"></a>

For a code example that shows how to add a replication configuration, see [Using the AWS SDKs](replication-walkthrough1.md#replication-ex1-sdk). You must modify the replication configuration appropriately. 

 

# Replicating metadata changes with replica modification sync
<a name="replication-for-metadata-changes"></a>

Amazon S3 replica modification sync can help you keep object metadata such as tags, access control lists (ACLs), and Object Lock settings replicated between replicas and source objects. By default, Amazon S3 replicates metadata from the source objects to the replicas only. When replica modification sync is enabled, Amazon S3 replicates metadata changes made to the replica copies back to the source object, making the replication bidirectional (two-way replication).

## Enabling replica modification sync
<a name="enabling-replication-for-metadata-changes"></a>

You can use Amazon S3 replica modification sync with new or existing replication rules. You can apply it to an entire bucket or to objects that have a specific prefix.

To enable replica modification sync by using the Amazon S3 console, see [Examples for configuring live replication](replication-example-walkthroughs.md). This topic provides instructions for enabling replica modification sync in your replication configuration when the source and destination buckets are owned by the same or different AWS accounts.

To enable replica modification sync by using the AWS Command Line Interface (AWS CLI), you must add a replication configuration to the bucket containing the replicas with `ReplicaModifications` enabled. To set up two-way replication, create a replication rule from the source bucket (`amzn-s3-demo-source-bucket`) to the bucket containing the replicas (`amzn-s3-demo-destination-bucket`). Then, create a second replication rule from the bucket containing the replicas (`amzn-s3-demo-destination-bucket`) to the source bucket (`amzn-s3-demo-source-bucket`). The source and destination buckets can be in the same or different AWS Regions.

**Note**  
You must enable replica modification sync on both the source and destination buckets to replicate replica metadata changes like object access control lists (ACLs), object tags, or Object Lock settings on the replicated objects. Like all replication rules, you can apply these rules to the entire bucket or to a subset of objects filtered by prefix or object tags.

In the following example configuration, Amazon S3 replicates metadata changes under the prefix `Tax` to the bucket `amzn-s3-demo-source-bucket`, which contains the source objects.

```
{
    "Rules": [
        {
            "Status": "Enabled",
            "Filter": {
                "Prefix": "Tax"
            },
            "SourceSelectionCriteria": {
                "ReplicaModifications":{
                    "Status": "Enabled"
                }
            },
            "Destination": {
                "Bucket": "arn:aws:s3:::amzn-s3-demo-source-bucket"
            },
            "Priority": 1
        }
    ],
    "Role": "IAM-Role-ARN"
}
```
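
To apply this configuration by using the AWS CLI, save it to a file and add it to the bucket that contains the replicas with `put-bucket-replication`. The following is a sketch; the file name `replica-sync.json` is an assumption.

```
aws s3api put-bucket-replication \
--bucket amzn-s3-demo-destination-bucket \
--replication-configuration file://replica-sync.json
```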

For full instructions on creating replication rules by using the AWS CLI, see [Configuring replication for buckets in the same account](replication-walkthrough1.md).

# Replicating delete markers between buckets
<a name="delete-marker-replication"></a>

By default, when S3 Replication is enabled and an object is deleted in the source bucket, Amazon S3 adds a delete marker in the source bucket only. This action helps protect data in the destination buckets from accidental or malicious deletions. If you have *delete marker replication* enabled, these markers are copied to the destination buckets, and Amazon S3 behaves as if the object was deleted in both the source and destination buckets. For more information about how delete markers work, see [Working with delete markers](DeleteMarker.md).

**Note**  
Delete marker replication isn't supported for tag-based replication rules. Delete marker replication also doesn't adhere to the 15-minute service-level agreement (SLA) that's granted when you're using S3 Replication Time Control (S3 RTC).
If you're not using the latest replication configuration XML version, delete operations affect replication differently. For more information, see [How delete operations affect replication](replication-what-is-isnot-replicated.md#replication-delete-op).
If you enable delete marker replication and your source bucket has an S3 Lifecycle expiration rule, the delete markers added by the S3 Lifecycle expiration rule won't be replicated to the destination bucket.

## Enabling delete marker replication
<a name="enabling-delete-marker-replication"></a>

You can start using delete marker replication with a new or existing replication rule. You can apply delete marker replication to an entire bucket or to objects that have a specific prefix.

To enable delete marker replication by using the Amazon S3 console, see [Using the S3 console](replication-walkthrough1.md#enable-replication). This topic provides instructions for enabling delete marker replication in your replication configuration when the source and destination buckets are owned by the same or different AWS accounts.

To enable delete marker replication by using the AWS Command Line Interface (AWS CLI), you must add a replication configuration to the source bucket with `DeleteMarkerReplication` enabled, as shown in the following example configuration. 

In the following example replication configuration, delete markers are replicated to the destination bucket `amzn-s3-demo-destination-bucket` for objects under the prefix `Tax`.

```
{
    "Rules": [
        {
            "Status": "Enabled",
            "Filter": {
                "Prefix": "Tax"
            },
            "DeleteMarkerReplication": {
                "Status": "Enabled"
            },
            "Destination": {
                "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket"
            },
            "Priority": 1
        }
    ],
    "Role": "IAM-Role-ARN"
}
```
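
After you add a configuration like this to the source bucket, you can confirm that `DeleteMarkerReplication` is enabled by retrieving the configuration, as in the following sketch:

```
aws s3api get-bucket-replication \
--bucket amzn-s3-demo-source-bucket
```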

For full instructions on creating replication rules through the AWS CLI, see [Configuring replication for buckets in the same account](replication-walkthrough1.md).

# Managing or pausing live replication
<a name="disable-replication"></a>

Live replication is the automatic, asynchronous copying of objects across buckets in the same or different AWS Regions. After you set up your replication configuration, Amazon S3 replicates newly created objects and object updates from a source bucket to one or more specified destination buckets. 

You use the Amazon S3 console to add replication rules to the source bucket. Replication rules define the source bucket objects to replicate and the destination bucket or buckets where the replicated objects are stored. For more information about replication, see [Replicating objects within and across Regions](replication.md).

You can manage replication rules on the **Replication** page in the Amazon S3 console. You can add, view, edit, enable, disable, or delete replication rules. You can also change the priority of your replication rules. For information about adding replication rules to a bucket, see [Using the S3 console](replication-walkthrough1.md#enable-replication).

**To manage the replication rules for a bucket by using the Amazon S3 console**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**. 

1. On the **General purpose buckets** tab, choose the name of the bucket that you want.

1. Choose the **Management** tab, and then scroll down to **Replication rules**.

1. You can change your replication rules in the following ways:
   + To enable or disable a replication rule, choose the option button to the left of the rule. On the **Actions** menu, choose **Enable rule** or **Disable rule**. You can also disable, enable, or delete all the rules in the bucket from the **Actions** menu.
**Note**  
If you disable a replication rule and then later re-enable the rule, any new or changed objects that weren't replicated while the rule was disabled are *not* automatically replicated when the rule is re-enabled. To replicate those objects, you must use S3 Batch Replication. For more information, see [Replicating existing objects with Batch Replication](s3-batch-replication-batch.md).
   + To change the priority of a rule, choose the option button to the left of the rule, and then choose **Edit rule**.

     You set rule priorities to avoid conflicts caused by objects that are included in the scope of more than one rule. In the case of overlapping rules, Amazon S3 uses the rule priority to determine which rule to apply. The higher the number, the higher the priority. For more information about rule priority, see [Replication configuration file elements](replication-add-config.md).
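
The console is the most direct way to enable or disable individual rules. If you manage replication with the AWS CLI instead, there's no single command to disable one rule; one approach is to retrieve the configuration, set the rule's `Status` to `Disabled`, and write the configuration back. The following sketch assumes a single-rule configuration and the `jq` tool:

```
aws s3api get-bucket-replication \
--bucket amzn-s3-demo-source-bucket \
--query ReplicationConfiguration > replication.json

jq '.Rules[0].Status = "Disabled"' replication.json > replication-disabled.json

aws s3api put-bucket-replication \
--bucket amzn-s3-demo-source-bucket \
--replication-configuration file://replication-disabled.json
```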

## Pausing or stopping replication
<a name="replication-pause"></a>

To temporarily pause replication and have it automatically resume later, you can use the `aws:s3:bucket-pause-replication` action in AWS Fault Injection Service. For more information, see [https://docs.aws.amazon.com/fis/latest/userguide/fis-actions-reference.html#bucket-pause-replication](https://docs.aws.amazon.com/fis/latest/userguide/fis-actions-reference.html#bucket-pause-replication) and [Pause S3 Replication](https://docs.aws.amazon.com/fis/latest/userguide/cross-region-scenario.html#cross-region-scenario-actions-pause-s3-replication) in the *AWS Fault Injection Service User Guide*.

To stop replication in Amazon S3, we recommend disabling your replication rules. If you disable a replication rule and then later re-enable the rule, any new or changed objects that weren't replicated while the rule was disabled are *not* automatically replicated when the rule is re-enabled. To replicate those objects, you must use S3 Batch Replication. For more information, see [Replicating existing objects with Batch Replication](s3-batch-replication-batch.md).

Replication will also stop if you remove the AWS Identity and Access Management (IAM) role, the AWS Key Management Service (AWS KMS) permissions, or the bucket policy permissions that grant Amazon S3 the required permissions. However, we don't recommend these approaches because they cause replication to fail. Amazon S3 reports the replication status for affected objects as `FAILED`. If permissions are later restored, objects marked as `FAILED` are *not* automatically replicated. To replicate those objects, you must use S3 Batch Replication.

# Replicating existing objects with Batch Replication
<a name="s3-batch-replication-batch"></a>

S3 Batch Replication differs from live replication, which continuously and automatically replicates new objects across Amazon S3 buckets. Instead, S3 Batch Replication occurs on demand on existing objects. You can use S3 Batch Replication to replicate the following types of objects: 
+ Objects that existed before a replication configuration was in place
+ Objects that have previously been replicated
+ Objects that have failed replication

You can replicate these objects on demand by using a Batch Operations job.

To get started with Batch Replication, you can:
+ **Initiate Batch Replication for a new replication rule or destination** – You can create a one-time Batch Replication job when you're creating the first rule in a new replication configuration or when you're adding a new destination bucket to an existing configuration through the Amazon S3 console. 
+ **Initiate Batch Replication for an existing replication configuration** – You can create a new Batch Replication job by using S3 Batch Operations through the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the Amazon S3 REST API.

When the Batch Replication job finishes, you receive a completion report. For more information about how to use this report to examine the job, see [Tracking job status and completion reports](batch-ops-job-status.md).

## S3 Batch Replication considerations
<a name="batch-replication-considerations"></a>

Before using S3 Batch Replication, review the following list of considerations: 
+ Your source bucket must have an existing replication configuration. To enable replication, see [Setting up live replication overview](replication-how-setup.md) and [Examples for configuring live replication](replication-example-walkthroughs.md).
+ If you have S3 Lifecycle configured for your bucket, we recommend disabling your lifecycle rules while the Batch Replication job is active. Doing so helps ensure parity between the source and destination buckets. Otherwise, these buckets could diverge, and the destination bucket won't be an exact replica of the source bucket. For example, consider the following scenario:
  + Your source bucket has multiple versions of an object and a delete marker on that object.
  + Your source and destination buckets have a lifecycle configuration to remove expired delete markers.

  In this scenario, Batch Replication might replicate the delete marker to the destination bucket before replicating the object versions. This behavior could result in your lifecycle configuration marking the delete marker as expired and the delete marker being removed from the destination bucket before the object versions are replicated.
+ The AWS Identity and Access Management (IAM) role that you specify to run the Batch Operations job must have the necessary permissions to perform the underlying Batch Replication operation. For more information about creating IAM roles, see [Configuring an IAM role for S3 Batch Replication](s3-batch-replication-policies.md).
+ Batch Replication requires a manifest, which can be generated by Amazon S3. The generated manifest must be stored in the same AWS Region as the source bucket. If you choose not to generate the manifest, you can supply an Amazon S3 Inventory report or CSV file that contains the objects that you want to replicate. For more information, see [Specifying a manifest for a Batch Replication job](#batch-replication-manifest). 
+ Batch Replication doesn't support re-replicating objects that were permanently deleted from the destination bucket (that is, deleted with the object's version ID specified in the delete request). To re-replicate these objects, you can copy the source objects in place with a Batch Copy job. Copying those objects in place creates new versions of the objects in the source bucket and automatically initiates replication to the destination bucket. Deleting and recreating the destination bucket doesn't initiate replication.

  For more information about Batch Copy, see [Examples that use Batch Operations to copy objects](batch-ops-examples-copy.md).
+ If you're using a replication rule on the source bucket, make sure to [update your replication configuration](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-walkthrough-2.html) by granting the IAM role that's attached to the replication rule the proper permissions to replicate objects. This IAM role must have the necessary permissions to perform replication on both the source and destination buckets.
+ If you submit multiple Batch Replication jobs for the same bucket within a short time frame, Amazon S3 runs those jobs concurrently.
+ If you submit multiple Batch Replication jobs for two different buckets, be aware that Amazon S3 might not run all jobs concurrently. If you exceed the number of Batch Replication jobs that can run at one time on your account, Amazon S3 pauses the lower priority jobs to work on the higher priority ones. After the higher priority jobs are completed, any paused jobs become active again.
+ Batch Replication isn't supported for objects that are stored in the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes.
+ To batch replicate S3 Intelligent-Tiering objects that are stored in the Archive Access or Deep Archive Access storage tiers, you must first initiate a [restore](https://docs.aws.amazon.com/AmazonS3/latest/userguide/intelligent-tiering-managing.html#restore-data-from-int-tier-archive) request and wait until the objects are moved to the Frequent Access tier. 
+ A single Batch Replication job can support a manifest with up to 20 billion objects.
+ If you use S3 Batch Replication to replicate data across Regions and the server-side encryption type of your objects was previously updated from SSE-S3 to SSE-KMS, you might need additional permissions. The IAM role must have `kms:Decrypt` permissions for the KMS key that's used by the bucket in the source Region, and both `kms:Decrypt` and `kms:Encrypt` permissions for the KMS key that's used by the bucket in the destination Region. For more information, see [Replicating encrypted objects](replication-config-for-kms-objects.md).

## Specifying a manifest for a Batch Replication job
<a name="batch-replication-manifest"></a>

A manifest is an Amazon S3 object that contains the object keys that you want Amazon S3 to act upon. If you want to create a Batch Replication job, you must either supply a user-generated manifest or have Amazon S3 generate a manifest based on your replication configuration.

If you supply a user-generated manifest, it must be in the form of an Amazon S3 Inventory report or a CSV file. If the objects in your manifest are in a versioned bucket, you must specify the version IDs for the objects. Only the object with the version ID that's specified in the manifest will be replicated. To learn more about specifying a manifest, see [Specifying a manifest](batch-ops-create-job.md#specify-batchjob-manifest).
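
For example, a CSV manifest for objects in a versioned bucket has one line per object in the form `bucket,key,versionId`. The keys and version IDs in the following sketch are illustrative:

```
amzn-s3-demo-source-bucket,Tax/2023/return.pdf,PZWGXwBpnUOZ9FoTvShMzSaSdiGn1234
amzn-s3-demo-source-bucket,Tax/2024/return.pdf,zVvfQYLYqANlmuqXsmLsxGBRZSqy5678
```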

If you choose to have Amazon S3 generate a manifest file on your behalf, Amazon S3 lists objects based on the same source bucket, prefix, and tags that are specified in the replication configuration on the source bucket. With a generated manifest, Amazon S3 replicates all eligible versions of your objects.

**Note**  
If you choose to have Amazon S3 generate the manifest, the manifest must be stored in the same AWS Region as the source bucket.

## Filters for a Batch Replication job
<a name="batch-replication-filters"></a>

When creating your Batch Replication job, you can optionally specify additional filters, such as the object creation date and replication status, to reduce the scope of the job.

You can filter objects to replicate based on the `ObjectReplicationStatuses` value, by providing one or more of the following values:
+ `"NONE"` – Indicates that Amazon S3 has never attempted to replicate the object before.
+ `"FAILED"` – Indicates that Amazon S3 has attempted, but failed, to replicate the object before.
+ `"COMPLETED"` – Indicates that Amazon S3 has successfully replicated the object before.
+ `"REPLICA"` – Indicates that this object is a replica that Amazon S3 has replicated from another source bucket.

For more information about replication statuses, see [Getting replication status information](replication-status.md).

If you don't filter your Batch Replication job, Batch Operations attempts to replicate all objects (no matter their `ObjectReplicationStatus`) in your manifest that match the rules in your replication configuration, except for certain objects that aren't replicated by default. For more information, see [What isn't replicated with replication configurations?](replication-what-is-isnot-replicated.md#replication-what-is-not-replicated)

Depending on your goal, you might set `ObjectReplicationStatuses` to one or more of the following values:
+ To replicate only existing objects that have never been replicated, only include `"NONE"`.
+ To retry replicating only objects that previously failed to replicate, only include `"FAILED"`.
+ To both replicate existing objects and retry replicating objects that previously failed to replicate, include both `"NONE"` and `"FAILED"`.
+ To backfill a destination bucket with objects that have been replicated to another destination, include `"COMPLETED"`.
+ To replicate objects that were previously replicated, include `"REPLICA"`.
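
For example, to both replicate existing objects and retry replicating objects that previously failed, the `Filter` fragment of a manifest generator (part of the `--manifest-generator` argument in the `create-job` AWS CLI examples later in this guide) includes both values:

```
"Filter": {
   "EligibleForReplication": true,
   "ObjectReplicationStatuses": ["NONE", "FAILED"]
}
```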

## Batch Replication completion report
<a name="batch-replication-completion-report"></a>

When you create a Batch Replication job, you can request a CSV completion report. This report shows the objects, replication success or failure codes, outputs, and descriptions. For more information about job tracking and completion reports, see [Completion reports](batch-ops-job-status.md#batch-ops-completion-report). 

For a list of replication failure codes and descriptions, see [Amazon S3 replication failure reasons](replication-metrics-events.md#replication-failure-codes).

For information about troubleshooting Batch Replication, see [Batch Replication errors](replication-troubleshoot.md#troubleshoot-batch-replication-errors).

## Getting started with Batch Replication
<a name="batch-replication-tutorial"></a>

To learn more about how to use Batch Replication, see [Tutorial: Replicating existing objects in your Amazon S3 buckets with S3 Batch Replication](https://aws.amazon.com/getting-started/hands-on/replicate-existing-objects-with-amazon-s3-batch-replication/).

# Configuring an IAM role for S3 Batch Replication
<a name="s3-batch-replication-policies"></a>

Because Amazon S3 Batch Replication is a type of Batch Operations job, you must create an AWS Identity and Access Management (IAM) role to grant Batch Operations permissions to perform actions on your behalf. You also must attach a Batch Replication IAM policy to the Batch Operations IAM role. 

Use the following procedures to create a policy and an IAM role that give Batch Operations permission to initiate a Batch Replication job.

**To create a policy for Batch Replication**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. Under **Access management**, choose **Policies**.

1. Choose **Create policy**.

1. On the **Specify permissions** page, choose **JSON**.

1. Insert one of the following policies, depending on whether your manifest is generated by Amazon S3 or whether you are supplying your own manifest. For more information about manifests, see [Specifying a manifest for a Batch Replication job](s3-batch-replication-batch.md#batch-replication-manifest). 

   Before using these policies, replace the `user input placeholders` in the following policies with the names of your replication source bucket, manifest bucket, and completion report bucket. 
**Note**  
Your IAM role for Batch Replication needs different permissions, depending on whether you are generating a manifest or supplying one, so make sure that you choose the appropriate policy from the following examples.

**Policy if using and storing an Amazon S3 generated manifest**


   ```
   {
      "Version":"2012-10-17",		 	 	 
      "Statement": [
         {
            "Action": [
               "s3:InitiateReplication"
            ],
            "Effect": "Allow",
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
            ]
         },
         {
            "Action": [
               "s3:GetReplicationConfiguration",
               "s3:PutInventoryConfiguration"
            ],
            "Effect": "Allow",
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-source-bucket"
            ]
         },
         {
            "Action": [
               "s3:GetObject",
               "s3:GetObjectVersion"
            ],
            "Effect": "Allow",
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-manifest-bucket/*"
            ]
         },
         {
            "Effect": "Allow",
            "Action": [
               "s3:PutObject"
            ],
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-completion-report-bucket/*",
               "arn:aws:s3:::amzn-s3-demo-manifest-bucket/*"    
            ]
         }
      ]
   }
   ```


**Policy if using a user-supplied manifest**


   ```
   {
      "Version":"2012-10-17",		 	 	 
      "Statement": [
         {
            "Action": [
               "s3:InitiateReplication"
            ],
            "Effect": "Allow",
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
            ]
         },
         {
            "Action": [
               "s3:GetObject",
               "s3:GetObjectVersion"
            ],
            "Effect": "Allow",
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-manifest-bucket/*"
            ]
         },
         {
            "Effect": "Allow",
            "Action": [
               "s3:PutObject"
            ],
            "Resource": [
               "arn:aws:s3:::amzn-s3-demo-completion-report-bucket/*"    
            ]
         }
      ]
   }
   ```


1. Choose **Next**.

1. Specify a name for the policy, and then choose **Create policy**.

**To create an IAM role for Batch Replication**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. Under **Access management**, choose **Roles**.

1. Choose **Create role**.

1. Choose **AWS service** as the type of trusted entity. In the **Use case** section, choose **S3** as the service, and **S3 Batch Operations** as the use case.

1. Choose **Next**. The **Add permissions** page appears. In the search box, search for the policy that you created in the preceding procedure. Select the checkbox next to the policy name, then choose **Next**. 

1. On the **Name, review, and create** page, specify a name for your IAM role.

1. In the **Step 1: Trust identities** section, verify that your IAM role is using the following trust policy:


   ```
   {
      "Version":"2012-10-17",		 	 	 
      "Statement":[
         {
            "Effect":"Allow",
            "Principal":{
               "Service":"batchoperations.s3.amazonaws.com"
            },
            "Action":"sts:AssumeRole"
         }
      ]
   }
   ```


1. In the **Step 2: Add permissions** section, verify that your IAM role is using the policy that you created earlier. 

1. Choose **Create role**. 

# Create a Batch Replication job for new replication rules or destinations
<a name="s3-batch-replication-new-config"></a>

In Amazon S3, live replication doesn't replicate any objects that already existed in your source bucket before you created a replication configuration. Live replication automatically replicates only new and updated objects that are written to the bucket after the replication configuration is created. To replicate already existing objects, you can use S3 Batch Replication to replicate these objects on demand. 

When you create the first rule in a new live replication configuration or add a new destination bucket to an existing replication configuration through the Amazon S3 console, you can optionally create a Batch Replication job. You can use this Batch Replication job to replicate existing objects in the source bucket to the destination bucket. 

To use Batch Replication for an existing configuration without adding a new destination bucket, see [Create a Batch Replication job for existing replication rules](s3-batch-replication-existing-config.md).

**Prerequisites**  
Before creating your Batch Replication job, you must create a Batch Operations AWS Identity and Access Management (IAM) role to grant Amazon S3 permissions to perform actions on your behalf. For more information, see [Configuring an IAM role for S3 Batch Replication](s3-batch-replication-policies.md).

## Using Batch Replication for a new replication rule or destination through the Amazon S3 console
<a name="batch-replication-new-config-console"></a>

When you create the first rule in a new replication configuration or add a new destination bucket to an existing configuration through the Amazon S3 console, you can choose to create a Batch Replication job to replicate existing objects in the source bucket.

**To create a Batch Replication job when creating or updating a replication configuration**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**. 

1. In the **General purpose buckets** list, choose the name of the bucket that contains the objects that you want to replicate.

1. To create a new replication rule or edit an existing rule, choose the **Management** tab, and scroll down to **Replication rules**:
   + To create a new replication rule, choose **Create replication rule**. For examples of how to set up a basic replication rule, see [Examples for configuring live replication](replication-example-walkthroughs.md).
   + To edit an existing replication rule, select the option button next to the rule name, and then choose **Edit rule**.

1. Create your new replication rule or edit the destination for your existing replication rule, and choose **Save**.

   After you create the first rule in a new replication configuration or edit an existing configuration to add a new destination, a **Replicate existing objects?** dialog appears, giving you the option to create a Batch Replication job.

1. If you want to create and run this job now, choose **Yes, replicate existing objects**.

   If you want to create a Batch Replication job at a later time, choose **No, do not replicate existing objects**.

1. If you chose **Yes, replicate existing objects**, the **Create Batch Operations job** page appears. The S3 Batch Replication job has the following settings:   
**Job run options**  
If you want the S3 Batch Replication job to run immediately, choose **Automatically run the job when it's ready**. If you want to run the job at a later time, choose **Wait to run the job when it's ready**.  
If you choose **Automatically run the job when it's ready**, you won't be able to create and save a Batch Operations manifest. To save the Batch Operations manifest, choose **Wait to run the job when it's ready**.  
**Batch Operations manifest**  
If you chose **Wait to run the job when it's ready**, the **Batch Operations manifest** section appears. The manifest is a list of all of the objects that you want to run the specified action on. You can choose to save the manifest. Similar to S3 Inventory files, the manifest will be saved as a CSV file and stored in a bucket. To learn more about Batch Operations manifests, see [Specifying a manifest](batch-ops-create-job.md#specify-batchjob-manifest).  
**Completion report**  
S3 Batch Operations executes one task for each object specified in the manifest. Completion reports provide an easy way to view the results of your tasks in a consolidated format with no additional setup required. You can request a completion report for all tasks or only for failed tasks. To learn more about completion reports, see [Completion reports](batch-ops-job-status.md#batch-ops-completion-report).  
**Permissions**  
One of the most common causes of replication failures is insufficient permissions in the provided AWS Identity and Access Management (IAM) role. For information about creating this role, see [Configuring an IAM role for S3 Batch Replication](s3-batch-replication-policies.md). Make sure that you create or choose an IAM role that has the required permissions for Batch Replication. 

1. Choose **Save**.

# Create a Batch Replication job for existing replication rules
<a name="s3-batch-replication-existing-config"></a>

In Amazon S3, live replication doesn't replicate any objects that already existed in your source bucket before you created a replication configuration. Live replication automatically replicates only new and updated objects that are written to the bucket after the replication configuration is created. To replicate already existing objects, you can use S3 Batch Replication to replicate these objects on demand. 

You can configure S3 Batch Replication for an existing replication configuration by using the AWS SDKs, AWS Command Line Interface (AWS CLI), or the Amazon S3 console. For an overview of Batch Replication, see [Replicating existing objects with Batch Replication](s3-batch-replication-batch.md).

When the Batch Replication job finishes, you receive a completion report. For more information about how to use the report to examine the job, see [Tracking job status and completion reports](batch-ops-job-status.md).

**Prerequisites**  
Before creating your Batch Replication job, you must create a Batch Operations AWS Identity and Access Management (IAM) role to grant Amazon S3 permissions to perform actions on your behalf. For more information, see [Configuring an IAM role for S3 Batch Replication](s3-batch-replication-policies.md).

## Using the S3 console
<a name="batch-replication-existing-config-console"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Batch Operations**.

1. Choose **Create job**.

1. Verify that the **AWS Region** section shows the Region where you want to create your job. 

1. In the **Manifest** section, specify the manifest format that you want to use. The manifest is a list of all of the objects that you want to run the specified action on. To learn more about Batch Operations manifests, see [Specifying a manifest](batch-ops-create-job.md#specify-batchjob-manifest).
   + If you have a manifest prepared, choose **S3 inventory report (manifest.json)** or **CSV**. If your manifest is in a versioned bucket, you can specify the version ID for the manifest. If you don't specify a version ID, Batch Operations uses the current version of your manifest. For more information about creating a manifest, see [Specifying a manifest](batch-ops-create-job.md#specify-batchjob-manifest).
**Note**  
If the objects in your manifest are in a versioned bucket, you must specify the version IDs for the objects. For more information, see [Specifying a manifest](batch-ops-create-job.md#specify-batchjob-manifest).
   + To create a manifest based on your replication configuration, choose **Create manifest using S3 Replication configuration**. Then choose the source bucket of your replication configuration.

1. (Optional) If you chose **Create manifest using S3 Replication configuration**, you can include additional filters, such as the object creation date and replication status. For examples of how to filter by replication status, see [Specifying a manifest for a Batch Replication job](s3-batch-replication-batch.md#batch-replication-manifest). 

1. (Optional) If you chose **Create manifest using S3 Replication configuration**, you can save the generated manifest. To save this manifest, select **Save Batch Operations manifest**. Then specify the destination bucket for the manifest and choose whether to encrypt the manifest. 
**Note**  
The generated manifest must be stored in the same AWS Region as the source bucket.

1. Choose **Next**.

1. On the **Operations** page, choose **Replicate**, then choose **Next**. 

1. (Optional) Provide a **Description**. 

1. Adjust the **Priority** of the job if needed. Higher numbers indicate higher priority. Amazon S3 attempts to run higher priority jobs before lower priority jobs. For more information about job priority, see [Assigning job priority](batch-ops-job-priority.md).

1. (Optional) Generate a completion report. To generate this report, select **Generate completion report**.

   If you choose to generate a completion report, you must choose either to report **Failed tasks only** or **All tasks**, and provide a destination bucket for the report.

1. In the **Permissions** section, make sure that you choose an IAM role that has the required permissions for Batch Replication. One of the most common causes of replication failures is insufficient permissions in the provided IAM role. For information about creating this role, see [Configuring an IAM role for S3 Batch Replication](s3-batch-replication-policies.md). 

1. (Optional) Add job tags to the Batch Replication job.

1. Choose **Next**.

1. Review your job configuration, and then choose **Create job**.

## Using the AWS CLI with an S3 manifest
<a name="batch-replication-existing-config-cli"></a>

The following example `create-job` command creates an S3 Batch Replication job by using an S3 generated manifest for the AWS account `111122223333`. This example replicates existing objects and objects that previously failed to replicate. For information about filtering by replication status, see [Specifying a manifest for a Batch Replication job](s3-batch-replication-batch.md#batch-replication-manifest). 

To use this command, replace the *`user input placeholders`* with your own information. Replace the IAM role `role/batch-Replication-IAM-policy` with the IAM role that you previously created. For more information, see [Configuring an IAM role for S3 Batch Replication](s3-batch-replication-policies.md).

```
aws s3control create-job --account-id 111122223333 \
--operation '{"S3ReplicateObject":{}}' \
--report '{"Bucket":"arn:aws:s3:::amzn-s3-demo-completion-report-bucket",
"Prefix":"batch-replication-report",
"Format":"Report_CSV_20180820","Enabled":true,"ReportScope":"AllTasks"}' \
--manifest-generator '{"S3JobManifestGenerator": {"ExpectedBucketOwner": "111122223333",
"SourceBucket": "arn:aws:s3:::amzn-s3-demo-source-bucket",
"EnableManifestOutput": false, "Filter": {"EligibleForReplication": true,
"ObjectReplicationStatuses": ["NONE","FAILED"]}}}' \
--priority 1 \
--role-arn arn:aws:iam::111122223333:role/batch-Replication-IAM-policy \
--no-confirmation-required \
--region source-bucket-region
```

**Note**  
You must initiate the job from the same AWS Region as the replication source bucket. 

After you have successfully initiated a Batch Replication job, you receive the job ID as the response. You can monitor this job by using the following `describe-job` command. To use this command, replace the *`user input placeholders`* with your own information. 

```
aws s3control describe-job --account-id 111122223333 --job-id job-id --region source-bucket-region
```

## Using the AWS CLI with a user-provided manifest
<a name="batch-replication-existing-config-cli-customer-manifest"></a>

The following example creates an S3 Batch Replication job by using a user-defined manifest for AWS account `111122223333`. If the objects in your manifest are in a versioned bucket, you must specify the version IDs for the objects. Only the object with the version ID specified in the manifest will be replicated. For more information about creating a manifest, see [Specifying a manifest](batch-ops-create-job.md#specify-batchjob-manifest). 
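
A user-provided manifest is a CSV file in which each row contains the bucket name, the object key, and (for versioned buckets) the object version ID. Object keys in the manifest must be URL-encoded. As a minimal sketch, a manifest for a versioned bucket might look like the following. The object keys and version IDs shown here are placeholders:

```
amzn-s3-demo-source-bucket,object-key-1,PZGkFTXBXjLjSqKBHVdSouhPRbLPmcXg
amzn-s3-demo-source-bucket,object-key-2,YoqE6YqtVHWZoJwmVZGBWnl0FAgRLWMl
```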

To use this command, replace the *`user input placeholders`* with your own information. Replace the IAM role `role/batch-Replication-IAM-policy` with the IAM role that you previously created. For more information, see [Configuring an IAM role for S3 Batch Replication](s3-batch-replication-policies.md).

```
aws s3control create-job --account-id 111122223333 \
--operation '{"S3ReplicateObject":{}}' \
--report '{"Bucket":"arn:aws:s3:::amzn-s3-demo-completion-report-bucket","Prefix":"batch-replication-report","Format":"Report_CSV_20180820","Enabled":true,"ReportScope":"AllTasks"}' \
--manifest '{"Spec":{"Format":"S3BatchOperations_CSV_20180820","Fields":["Bucket","Key","VersionId"]},"Location":{"ObjectArn":"arn:aws:s3:::amzn-s3-demo-manifest-bucket/manifest.csv","ETag":"Manifest Etag"}}' \
--priority 1 \
--role-arn arn:aws:iam::111122223333:role/batch-Replication-IAM-policy \
--no-confirmation-required \
--region source-bucket-region
```

**Note**  
You must initiate the job from the same AWS Region as the replication source bucket. 

After you have successfully initiated a Batch Replication job, you receive the job ID as the response. You can monitor this job by using the following `describe-job` command.

```
aws s3control describe-job --account-id 111122223333 --job-id job-id --region source-bucket-region
```

# Troubleshooting replication
<a name="replication-troubleshoot"></a>

This section lists troubleshooting tips for Amazon S3 Replication and information about S3 Batch Replication errors.

**Topics**
+ [Troubleshooting tips for S3 Replication](#troubleshoot-replication-tips)
+ [Batch Replication errors](#troubleshoot-batch-replication-errors)

## Troubleshooting tips for S3 Replication
<a name="troubleshoot-replication-tips"></a>

If object replicas don't appear in the destination bucket after you configure replication, use these troubleshooting tips to identify and fix issues.
+ The majority of objects replicate within 15 minutes. The time that it takes Amazon S3 to replicate an object depends on several factors, including the source and destination Region pair, and the size of the object. For large objects, replication can take up to several hours. For visibility into replication times, you can use [S3 Replication Time Control (S3 RTC)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-time-control.html#enabling-replication-time-control).

  If the object that is being replicated is large, wait a while before checking to see whether it appears in the destination. You can also check the replication status of the source object. If the object replication status is `PENDING`, Amazon S3 hasn't completed the replication. If the object replication status is `FAILED`, check the replication configuration that's set on the source bucket. 

  Additionally, to receive information about failures during replication, you can set up Amazon S3 Event Notifications to receive replication failure events. For more information, see [Receiving replication failure events with Amazon S3 Event Notifications](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-metrics.html).
+ To check the replication status of an object, you can call the `HeadObject` API operation. The `HeadObject` API operation returns the `PENDING`, `COMPLETED`, or `FAILED` replication status of an object. In a response to a `HeadObject` API call, the replication status is returned in the `x-amz-replication-status` header.
**Note**  
To run `HeadObject`, you must have read access to the object that you're requesting. A `HEAD` request has the same options as a `GET` request, without performing a `GET` operation. For example, to run a `HeadObject` request by using the AWS Command Line Interface (AWS CLI), you can run the following command. Replace the `user input placeholders` with your own information.   

  ```
  aws s3api head-object --bucket amzn-s3-demo-source-bucket --key index.html
  ```
+ If `HeadObject` returns objects with a `FAILED` replication status, you can use S3 Batch Replication to replicate those failed objects. For more information, see [Replicating existing objects with Batch Replication](s3-batch-replication-batch.md). Alternatively, you can re-upload the failed objects to the source bucket, which will initiate replication for the new objects. 
+ In the replication configuration on the source bucket, verify the following:
  + The Amazon Resource Name (ARN) of the destination bucket is correct.
  + The key name prefix is correct. For example, if you set the configuration to replicate objects with the prefix `Tax`, then only objects with key names such as `Tax/document1` or `Tax/document2` are replicated. An object with the key name `document3` is not replicated.
  + The status of the replication rule is `Enabled`.
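
  To review these settings, you can retrieve the bucket's replication configuration by using the AWS CLI `get-bucket-replication` command, as in the following sketch. Replace the bucket name with the name of your own source bucket:

  ```
  aws s3api get-bucket-replication --bucket amzn-s3-demo-source-bucket
  ```

  The response lists each rule's `Status`, `Filter`, and destination bucket ARN, which you can check against the preceding points.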
+ Verify that versioning hasn't been suspended on any bucket in the replication configuration. Both the source and destination buckets must have versioning enabled.
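
  For example, you can check the versioning state of each bucket with the `get-bucket-versioning` command. In the response, the `Status` field must be `Enabled`, not `Suspended`. The bucket name here is a placeholder:

  ```
  aws s3api get-bucket-versioning --bucket amzn-s3-demo-source-bucket
  ```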
+ If a replication rule is set to **Change object ownership to the destination bucket owner**, then the AWS Identity and Access Management (IAM) role that's used for replication must have the `s3:ObjectOwnerOverrideToBucketOwner` permission. This permission is granted on the resource (in this case, the destination bucket). For example, the following `Resource` statement shows how to grant this permission on the destination bucket:

  ```
  {
    "Effect":"Allow",
    "Action":[
      "s3:ObjectOwnerOverrideToBucketOwner"
    ],
    "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
  }
  ```
+ If the destination bucket is owned by another account, the owner of the destination bucket must also grant the `s3:ObjectOwnerOverrideToBucketOwner` permission to the source bucket owner through the destination bucket policy. To use the following example bucket policy, replace the `user input placeholders` with your own information: 

  ```
  {
    "Version":"2012-10-17",		 	 	 
    "Id": "Policy1644945280205",
    "Statement": [
      {
        "Sid": "Stmt1644945277847",
        "Effect": "Allow",
        "Principal": {
          "AWS": "arn:aws:iam::123456789101:role/s3-replication-role"
        },
        "Action": [
          "s3:ReplicateObject",
          "s3:ReplicateTags",
          "s3:ObjectOwnerOverrideToBucketOwner"
        ],
        "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
      }
    ]
  }
  ```

**Note**  
If the destination bucket's object ownership settings include **Bucket owner enforced**, then you don't need to update the setting to **Change object ownership to the destination bucket owner** in the replication rule. The object ownership change will occur by default. For more information about changing replica ownership, see [Changing the replica owner](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-change-owner.html).
+ If you're setting the replication configuration in a cross-account scenario, where the source and destination buckets are owned by different AWS accounts, the destination buckets can't be configured as Requester Pays buckets. For more information, see [Using Requester Pays general purpose buckets for storage transfers and usage](RequesterPaysBuckets.md).
+ If a bucket's source objects are encrypted by using server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), then the replication rule must be configured to include AWS KMS-encrypted objects. Make sure to select **Replicate objects encrypted with AWS KMS** under your **Encryption** settings in the Amazon S3 console. Then, select an AWS KMS key for encrypting the destination objects.
**Note**  
If the destination bucket is in a different account, specify an AWS KMS customer managed key that is owned by the destination account. Don't use the default Amazon S3 managed key (`aws/s3`). Using the default key encrypts the objects with the Amazon S3 managed key that's owned by the source account, preventing the object from being shared with another account. As a result, the destination account won't be able to access the objects in the destination bucket.

  To use an AWS KMS key that belongs to the destination account to encrypt the destination objects, the destination account must grant the `kms:GenerateDataKey` and `kms:Encrypt` permissions to the replication role in the KMS key policy. To use the following example statement in your KMS key policy, replace the `user input placeholders` with your own information:

  ```
  {    
      "Sid": "AllowS3ReplicationSourceRoleToUseTheKey",
      "Effect": "Allow",
      "Principal": {
          "AWS": "arn:aws:iam::123456789101:role/s3-replication-role"
      },
      "Action": ["kms:GenerateDataKey", "kms:Encrypt"],
      "Resource": "*"
  }
  ```

  If you use an asterisk (`*`) for the `Resource` statement in the AWS KMS key policy, the policy grants permission to use the KMS key only to the replication role. The policy doesn't allow the replication role to elevate its permissions. 

  By default, the KMS key policy grants the root user full permissions to the key. These permissions can be delegated to other users in the same account. Unless there are `Deny` statements in the source KMS key policy, using an IAM policy to grant the replication role permissions to the source KMS key is sufficient.
**Note**  
KMS key policies that restrict access to specific CIDR ranges, virtual private cloud (VPC) endpoints, or S3 access points can cause replication to fail.

  If either the source or destination KMS keys grant permissions based on the encryption context, confirm that Amazon S3 Bucket Keys are turned on for the buckets. If the buckets have S3 Bucket Keys turned on, the encryption context must be the bucket-level resource, like this:

  ```
  "kms:EncryptionContext:arn:aws:arn": [
       "arn:aws:s3:::amzn-s3-demo-source-bucket"
       ]
  "kms:EncryptionContext:arn:aws:arn": [
       "arn:aws:s3:::amzn-s3-demo-destination-bucket"
       ]
  ```

  In addition to the permissions granted by the KMS key policy, the source account must add the following minimum permissions to the replication role's IAM policy:

  ```
  {
      "Effect": "Allow",
      "Action": [
          "kms:Decrypt",
          "kms:GenerateDataKey"
      ],
      "Resource": [
          "Source-KMS-Key-ARN"
      ]
  },
  {
      "Effect": "Allow",
      "Action": [
          "kms:GenerateDataKey",
          "kms:Encrypt"
      ],
      "Resource": [
          "Destination-KMS-Key-ARN"
      ]
  }
  ```
**Important**  
If you use S3 Batch Replication to replicate datasets across Regions and your objects previously had their server-side encryption type updated from SSE-S3 to SSE-KMS, you might need additional permissions: the `kms:Decrypt` permission for the bucket in the source Region, and the `kms:Decrypt` and `kms:Encrypt` permissions for the bucket in the destination Region.

  For more information about how to replicate objects that are encrypted with AWS KMS, see [Replicating encrypted objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-walkthrough-4.html).
+ If the destination bucket is owned by another AWS account, verify that the bucket owner has a bucket policy on the destination bucket that allows the source bucket owner to replicate objects. For an example, see [Configuring replication for buckets in different accounts](replication-walkthrough-2.md).
+ To use Object Lock with replication, you must grant two additional permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two additional permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission statement, that statement satisfies the requirement. For more information, see [Using Object Lock with S3 Replication](object-lock-managing.md#object-lock-managing-replication).
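
  As a sketch, the following IAM policy statement grants these two permissions on a hypothetical source bucket:

  ```
  {
    "Effect": "Allow",
    "Action": [
      "s3:GetObjectRetention",
      "s3:GetObjectLegalHold"
    ],
    "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
  }
  ```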
+ If your objects still aren't replicating after you've validated the permissions, check for any explicit `Deny` statements in the following locations:
  + `Deny` statements in the source or destination bucket policies. Replication fails if the bucket policy denies access to the replication role for any of the following actions:

    Source bucket:

    ```
    "s3:GetReplicationConfiguration",
    "s3:ListBucket",
    "s3:GetObjectVersionForReplication",
    "s3:GetObjectVersionAcl",
    "s3:GetObjectVersionTagging"
    ```

    Destination buckets:

    ```
    "s3:ReplicateObject",
    "s3:ReplicateDelete",
    "s3:ReplicateTags"
    ```
  + `Deny` statements or permissions boundaries attached to the IAM role can cause replication to fail.
  + `Deny` statements in AWS Organizations service control policies (SCPs) that are attached to either the source or destination accounts can cause replication to fail.
  + `Deny` statements in AWS Organizations resource control policies (RCPs) that are attached to either the source or destination buckets can cause replication to fail.
+ If an object replica doesn't appear in the destination bucket, the following issues might have prevented replication:
  + Amazon S3 doesn't replicate an object in a source bucket that is a replica created by another replication configuration. For example, if you set a replication configuration from bucket A to bucket B to bucket C, Amazon S3 doesn't replicate object replicas in bucket B to bucket C.
  + A source bucket owner can grant other AWS accounts permission to upload objects. By default, the source bucket owner doesn't have permissions for the objects created by other accounts. The replication configuration replicates only the objects for which the source bucket owner has access permissions. To avoid this problem, the source bucket owner can grant other AWS accounts permissions to create objects conditionally, requiring explicit access permissions on those objects. For an example policy, see [Grant cross-account permissions to upload objects while ensuring that the bucket owner has full control](example-bucket-policies.md#example-bucket-policies-acl-2).
+ Suppose that in the replication configuration, you add a rule to replicate a subset of objects that have a specific tag. In this case, you must assign the specific tag key and value at the time the object is created in order for Amazon S3 to replicate the object. If you first create an object and then add the tag to the existing object, Amazon S3 doesn't replicate the object. 
+ Use Amazon S3 Event Notifications to notify you of instances when objects don't replicate to their destination AWS Region. Amazon S3 Event Notifications are available through Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), or AWS Lambda. For more information, see [Receiving replication failure events with Amazon S3 Event Notifications](replication-metrics-events.md).

  You can also view replication failure reasons by using Amazon S3 Event Notifications. To review the list of failure reasons, see [Amazon S3 replication failure reasons](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-failure-codes.html).

## Batch Replication errors
<a name="troubleshoot-batch-replication-errors"></a>

To troubleshoot objects that aren't replicating to the destination bucket, check the different types of permissions for your buckets, replication role, and IAM role that's used to create the Batch Replication job. Also, make sure to check the Block Public Access settings and S3 Object Ownership settings for your buckets.

For additional troubleshooting tips for working with Batch Operations, see [Troubleshooting S3 Batch Operations](troubleshooting-batch-operations.md). 

If you've set up replication and objects aren't replicating, see [Why aren't my Amazon S3 objects replicating when I set up replication between my buckets?](https://repost.aws/knowledge-center/s3-troubleshoot-replication) in the AWS re:Post Knowledge Center.

While using Batch Replication, you might encounter one of these errors:
+ Manifest generation found no keys matching the filter criteria.

  This error occurs for one of the following reasons:
  + When objects in the source bucket are stored in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes.

    To use Batch Replication on these objects, first restore them to the S3 Standard storage class by using a **Restore** (`S3InitiateRestoreObjectOperation`) operation in a Batch Operations job. For more information, see [Restoring an archived object](restoring-objects.md) and [Restore objects (Batch Operations)](batch-ops-initiate-restore-object.md). After you've restored the objects, you can replicate them by using a Batch Replication job.
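
    The following `create-job` sketch initiates a bulk restore for the objects listed in a user-provided manifest. The bucket names, role name, and manifest details are placeholders, and the `ExpirationInDays` and `GlacierJobTier` values are examples that you would adjust for your own data:

    ```
    aws s3control create-job --account-id 111122223333 \
    --operation '{"S3InitiateRestoreObjectOperation":{"ExpirationInDays":1,"GlacierJobTier":"BULK"}}' \
    --manifest '{"Spec":{"Format":"S3BatchOperations_CSV_20180820","Fields":["Bucket","Key"]},"Location":{"ObjectArn":"arn:aws:s3:::amzn-s3-demo-manifest-bucket/manifest.csv","ETag":"Manifest-ETag"}}' \
    --report '{"Bucket":"arn:aws:s3:::amzn-s3-demo-completion-report-bucket","Format":"Report_CSV_20180820","Enabled":true,"ReportScope":"FailedTasksOnly"}' \
    --priority 1 \
    --role-arn arn:aws:iam::111122223333:role/batch-operations-role \
    --region us-east-1
    ```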
  + When the provided filter criteria don't match any valid objects in the source bucket.

    Verify and correct the filter criteria. For example, suppose that the filter criteria in the Batch Replication rule look for all objects in the source bucket with the prefix `Tax/`. If the prefix was entered inaccurately, with a slash at both the beginning and the end (`/Tax/`) instead of only at the end, then no S3 objects are found. To resolve the error, correct the prefix in the replication rule, in this case, from `/Tax/` to `Tax/`.
+ Batch operation status is failed with reason: The job report could not be written to your report bucket.

  This error occurs if the IAM role that's used for the Batch Operations job is unable to put the completion report into the location that was specified when you created the job. To resolve this error, check that the IAM role has the `s3:PutObject` permission for the bucket where you want to save the Batch Operations completion report. We recommend delivering the report to a bucket different from the source bucket.
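
  As a sketch, the IAM role needs a statement like the following, with the report bucket name replaced by your own:

  ```
  {
    "Effect": "Allow",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::amzn-s3-demo-completion-report-bucket/*"
  }
  ```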
+ Batch operation is completed with failures and Total failed is not 0.

  This error occurs when the running Batch Replication job encounters insufficient object permissions. If you're using a replication rule for your Batch Replication job, make sure that the IAM role that's used for replication has the proper permissions to access objects in both the source and destination buckets. You can also check the [Batch Replication completion report](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-batch-replication-batch.html#batch-replication-completion-report) to review the specific [Amazon S3 replication failure reason](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-failure-codes.html).
+ Batch job ran successfully but the number of objects expected in destination bucket is not the same.

  This error occurs when there's a mismatch between the objects listed in the manifest that's supplied in the Batch Replication job and the filters that you selected when you created the job. You might also receive this message when the objects in your source bucket don't match any replication rules and aren't included in the generated manifest.

### Batch Operations failures occur after adding a new replication rule to an existing replication configuration
<a name="new-replication-rule"></a>

Batch Operations attempts to perform existing object replication for every rule in the source bucket's replication configuration. If there are problems with any of the existing replication rules, failures might occur. 

The Batch Operations job's completion report explains the job failure reasons. For a list of common errors, see [Amazon S3 replication failure reasons](replication-metrics-events.md#replication-failure-codes).

# Monitoring replication with metrics, event notifications, and statuses
<a name="replication-metrics"></a>

You can monitor your live replication configurations and your S3 Batch Replication jobs through the following mechanisms: 
+ **S3 Replication metrics** – When you enable S3 Replication metrics, Amazon CloudWatch emits metrics that you can use to track bytes pending, operations pending, and replication latency at the replication rule level. You can view S3 Replication metrics through the Amazon S3 console and the Amazon CloudWatch console. In the Amazon S3 console, you can view these metrics in the source bucket's **Metrics** tab. For more information about S3 Replication metrics, see [Using S3 Replication metrics](repl-metrics.md). 
+ **S3 Storage Lens metrics** – In addition to S3 Replication metrics, you can use the replication-related Data Protection metrics provided by S3 Storage Lens dashboards. For example, if you use the free metrics in S3 Storage Lens, you can see metrics such as the total number of bytes that are replicated from the source bucket or the count of replicated objects from the source bucket. 

  To audit your overall replication stance, you can enable advanced metrics in S3 Storage Lens. With advanced metrics in S3 Storage Lens, you can see how many replication rules you have of various types, including the count of replication rules with a replication destination that's not valid. 

  For more information about working with replication metrics in S3 Storage Lens, see [Viewing replication metrics in S3 Storage Lens dashboards](viewing-replication-metrics-storage-lens.md).
+ **S3 Event Notifications** – S3 Event Notifications can notify you at the object level in instances when objects don't replicate to their destination AWS Region or when objects aren't replicated within certain thresholds. S3 Event Notifications provides the following replication event types: `s3:Replication:OperationFailedReplication`, `s3:Replication:OperationMissedThreshold`, `s3:Replication:OperationReplicatedAfterThreshold`, and `s3:Replication:OperationNotTracked`. 

  Amazon S3 events are available through Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), or AWS Lambda. For more information, see [Receiving replication failure events with Amazon S3 Event Notifications](replication-metrics-events.md).
+ **Replication status values** – You can also retrieve the replication status of your objects. The replication status can help you determine the current state of an object that's being replicated. The replication status of a source object will return either `PENDING`, `COMPLETED`, or `FAILED`. The replication status of a replica will return `REPLICA`. 

  You can also use replication status values when you're creating S3 Batch Replication jobs. For example, you can use these status values to replicate objects that have either never been replicated or that have failed replication. 

  For more information about retrieving the replication status of your objects, see [Getting replication status information](replication-status.md). For more information about using these values with Batch Replication, see [Filters for a Batch Replication job](s3-batch-replication-batch.md#batch-replication-filters).

**Topics**
+ [Using S3 Replication metrics](repl-metrics.md)
+ [Viewing replication metrics in S3 Storage Lens dashboards](viewing-replication-metrics-storage-lens.md)
+ [Receiving replication failure events with Amazon S3 Event Notifications](replication-metrics-events.md)
+ [Getting replication status information](replication-status.md)

# Using S3 Replication metrics
<a name="repl-metrics"></a>

S3 Replication metrics provide detailed metrics for the replication rules in your replication configuration. With replication metrics, you can monitor minute-by-minute progress by tracking bytes pending, operations pending, operations that failed replication, and replication latency.

**Note**  
S3 Replication metrics are billed at the same rate as Amazon CloudWatch custom metrics. For more information, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/).
If you're using S3 Replication Time Control, Amazon CloudWatch begins reporting replication metrics 15 minutes after you enable S3 RTC on the respective replication rule. 

S3 Replication metrics are turned on automatically when you enable S3 Replication Time Control (S3 RTC). You can also enable S3 Replication metrics independently of S3 RTC while [creating or editing a rule](replication-walkthrough1.md). S3 RTC includes other features, such as a service level agreement (SLA) and notifications for missed thresholds. For more information, see [Meeting compliance requirements with S3 Replication Time Control](replication-time-control.md).

When S3 Replication metrics are enabled, Amazon S3 publishes the following metrics to Amazon CloudWatch. CloudWatch metrics are delivered on a best-effort basis.


| Metric name | Metric description | Which objects does this metric apply to? | Which Region is this metric published in? | Is this metric still published if the destination bucket is deleted? | Is this metric still published if replication doesn't occur? | 
| --- | --- | --- | --- | --- | --- | 
| **Bytes Pending Replication** |  The total number of bytes of objects that are pending replication for a given replication rule.  | This metric applies only to new objects that are replicated with S3 Cross-Region Replication (S3 CRR) or S3 Same-Region Replication (S3 SRR). | This metric is published in the Region of the destination bucket. | No | Yes | 
| **Replication Latency** |  The maximum number of seconds by which the replication destination bucket is behind the source bucket for a given replication rule.  | This metric applies only to new objects that are replicated with S3 CRR or S3 SRR. | This metric is published in the Region of the destination bucket. | No | Yes | 
| **Operations Pending Replication** |  The number of operations that are pending replication for a given replication rule. This metric tracks operations related to objects, delete markers, tags, access control lists (ACLs), and S3 Object Lock.  | This metric applies only to new objects that are replicated with S3 CRR or S3 SRR. | This metric is published in the Region of the destination bucket. | No | Yes | 
| **Operations Failed Replication** |  The number of operations that failed replication for a given replication rule. This metric tracks operations related to objects, delete markers, tags, access control lists (ACLs), and Object Lock. **Operations Failed Replication** tracks S3 Replication failures aggregated at a per-minute interval. To identify the specific objects that have failed replication and their failure reasons, subscribe to the `OperationFailedReplication` event in Amazon S3 Event Notifications. For more information, see [Receiving replication failure events with Amazon S3 Event Notifications](replication-metrics-events.md).  |  This metric applies both to new objects that are replicated with S3 CRR or S3 SRR and also to existing objects that are replicated with S3 Batch Replication.  If an S3 Batch Replication job fails to run at all, metrics aren't sent to Amazon CloudWatch. For example, your job won't run if you don't have the necessary permissions to run an S3 Batch Replication job, or if the tags or prefix in your replication configuration don't match.   | This metric is published in the Region of the source bucket. | Yes | No | 

For information about working with these metrics in CloudWatch, see [S3 Replication metrics in CloudWatch](metrics-dimensions.md#s3-cloudwatch-replication-metrics).

## Enabling S3 Replication metrics
<a name="enabling-replication-metrics"></a>

You can start using S3 Replication metrics with a new or existing replication rule. For full instructions on creating replication rules, see [Configuring replication for buckets in the same account](replication-walkthrough1.md). You can choose to apply your replication rule to an entire S3 bucket, or to Amazon S3 objects with a specific prefix or tag.

This topic provides instructions for enabling S3 Replication metrics in your replication configuration when the source and destination buckets are owned by the same or different AWS accounts.

To enable replication metrics by using the AWS Command Line Interface (AWS CLI), you must add a replication configuration to the source bucket with `Metrics` enabled. In this example configuration, objects under the prefix `Tax` are replicated to the destination bucket `amzn-s3-demo-bucket`, and metrics are generated for those objects.

```
{
    "Rules": [
        {
            "Status": "Enabled",
            "Filter": {
                "Prefix": "Tax"
            },
            "Destination": {
                "Bucket": "arn:aws:s3:::amzn-s3-demo-bucket",
                "Metrics": {
                    "Status": "Enabled"
                }
            },
            "Priority": 1
        }
    ],
    "Role": "IAM-Role-ARN"
}
```
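
To apply this configuration, you can use the `put-bucket-replication` command. This sketch assumes that the preceding configuration, including your replication role ARN, is saved in a local file named `replication.json`, and that `amzn-s3-demo-source-bucket` is a placeholder for your source bucket:

```
aws s3api put-bucket-replication --bucket amzn-s3-demo-source-bucket \
--replication-configuration file://replication.json
```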

## Viewing replication metrics
<a name="viewing-replication-metrics"></a>

You can view S3 Replication metrics in the source general purpose bucket's **Metrics** tab in the Amazon S3 console. These Amazon CloudWatch metrics are also available in the Amazon CloudWatch console. When you enable S3 Replication metrics, Amazon CloudWatch emits metrics that you can use to track bytes pending, operations pending, and replication latency at the replication rule level. 

S3 Replication metrics are turned on automatically when you enable replication with S3 Replication Time Control (S3 RTC) by using the Amazon S3 console or the Amazon S3 REST API. You can also enable S3 Replication metrics independently of S3 RTC while [creating or editing a rule](replication-walkthrough1.md).

If you're using S3 Replication Time Control, Amazon CloudWatch begins reporting replication metrics 15 minutes after you enable S3 RTC on the respective replication rule. For more information, see [Using S3 Replication metrics](#repl-metrics).

Replication metrics track the rule IDs of the replication configuration. A replication rule ID can be specific to a prefix, a tag, or a combination of both.

 For more information about CloudWatch metrics for Amazon S3, see [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md).
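
Because these metrics are published to CloudWatch in the `AWS/S3` namespace, you can also query them directly with the AWS CLI. The following sketch retrieves the maximum **Replication latency** for one replication rule over an hour; the bucket names, rule ID, and time range are placeholders:

```
aws cloudwatch get-metric-statistics --namespace AWS/S3 \
--metric-name ReplicationLatency \
--dimensions Name=SourceBucket,Value=amzn-s3-demo-source-bucket \
Name=DestinationBucket,Value=amzn-s3-demo-destination-bucket \
Name=RuleId,Value=replication-rule-1 \
--start-time 2024-09-05T00:00:00Z --end-time 2024-09-05T01:00:00Z \
--period 300 --statistics Maximum
```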

**Prerequisites**  
Create a replication rule that has S3 Replication metrics enabled. For more information, see [Enabling S3 Replication metrics](#enabling-replication-metrics).

**To view S3 Replication metrics through the source bucket's Metrics tab**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**. 

1. In the buckets list, choose the name of the source bucket that contains the objects that you want replication metrics for.

1. Choose the **Metrics** tab.

1. Under **Replication metrics**, choose the replication rules that you want to see metrics for.

1. Choose **Display charts**.

   Amazon S3 displays **Replication latency**, **Bytes pending replication**, **Operations pending replication**, and **Operations failed replication** charts for the rules that you selected.

# Viewing replication metrics in S3 Storage Lens dashboards
<a name="viewing-replication-metrics-storage-lens"></a>

In addition to [S3 Replication metrics](repl-metrics.md), you can use the replication-related Data Protection metrics provided by S3 Storage Lens. S3 Storage Lens is a cloud-storage analytics feature that you can use to gain organization-wide visibility into object-storage usage and activity. For more information, see [Using S3 Storage Lens to protect your data](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-lens-data-protection.html#storage-lens-data-protection-replication-rule). 

S3 Storage Lens has two tiers of metrics: free metrics, and advanced metrics and recommendations, which you can upgrade to for an additional charge. With advanced metrics and recommendations, you can access additional metrics and features for gaining insight into your storage. For information about S3 Storage Lens pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing). 

If you use the free metrics in S3 Storage Lens, you can see metrics such as the total number of bytes that are replicated from the source bucket or the count of replicated objects from the source bucket. 

To audit your overall replication stance, you can enable advanced metrics in S3 Storage Lens. With advanced metrics in S3 Storage Lens, you can see how many replication rules you have of various types, including the count of replication rules with a replication destination that's not valid. 

For a complete list of S3 Storage Lens metrics, including which replication metrics are in each tier, see the [S3 Storage Lens metrics glossary](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens_metrics_glossary.html?icmpid=docs_s3_user_guide_replication.html). 

**Prerequisites**  
Create a [live replication configuration](replication-how-setup.md) or an [S3 Batch Replication job](s3-batch-replication-batch.md). 

**To view replication metrics in Amazon S3 Storage Lens**

1. Create an S3 Storage Lens dashboard. For step-by-step instructions, see [Using the S3 console](storage_lens_creating_dashboard.md#storage_lens_console_creating).

1. (Optional) During your dashboard setup, if you want to see all S3 Storage Lens replication metrics, select **Advanced metrics and recommendations** and then select **Advanced data protection metrics**. For a complete list of metrics, see the [S3 Storage Lens metrics glossary](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens_metrics_glossary.html?icmpid=docs_s3_user_guide_replication.html).

   If you enable advanced metrics and recommendations, you can gain further insights into your replication configurations. For example, you can use S3 Storage Lens replication rule count metrics to get detailed information about your buckets that are configured for replication. This information includes replication rules within and across buckets and Regions. For more information, see [Count the total number of replication rules for each bucket](storage-lens-data-protection.md#storage-lens-data-protection-replication-rule).

1. After you've created your dashboard, open the dashboard, and choose the **Buckets** tab.

1. Scroll down to the **Buckets** section. Under **Metrics categories**, choose **Data protection**. Then clear **Summary**.

1. To filter the **Buckets** list to display only replication metrics, choose the preferences icon (![\[The preferences icon in the S3 Storage Lens dashboard.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/preferences.png)).

1. Clear the toggles for all data-protection metrics until only the replication metrics remain selected.

1. (Optional) Under **Page size**, choose the number of buckets to display in the list.

1. Choose **Continue**.

# Receiving replication failure events with Amazon S3 Event Notifications
<a name="replication-metrics-events"></a>

If you've enabled S3 Replication metrics on your replication configuration, you can set up Amazon S3 Event Notifications to notify you when objects don't replicate to their destination AWS Region. If you've enabled S3 Replication Time Control (S3 RTC) on your replication configuration, you can also be notified when objects don't replicate within the 15-minute S3 RTC threshold for replication. 

By using the following `Replication` event types, you can monitor the minute-by-minute progress of replication events by tracking bytes pending, operations pending, and replication latency. For more information about S3 Replication metrics, see [Using S3 Replication metrics](repl-metrics.md).
+ The `s3:Replication:OperationFailedReplication` event type notifies you when an object that was eligible for replication failed to replicate. 
+ The `s3:Replication:OperationMissedThreshold` event type notifies you when an object that's eligible for replication using S3 RTC exceeds the 15-minute threshold for replication.
+ The `s3:Replication:OperationReplicatedAfterThreshold` event type notifies you when an object that's eligible for replication using S3 RTC replicates after the 15-minute threshold.
+ The `s3:Replication:OperationNotTracked` event type notifies you when an object that was eligible for live replication (either Same-Region Replication [SRR] or Cross-Region Replication [CRR]) is no longer being tracked by replication metrics.

For full descriptions of all the supported replication event types, see [Supported event types for SQS, SNS, and Lambda](notification-how-to-event-types-and-destinations.md#supported-notification-event-types).

For a list of the failure codes captured by S3 Event Notifications, see [Amazon S3 replication failure reasons](#replication-failure-codes).

You can receive S3 Event Notifications through Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), or AWS Lambda. For more information, see [Amazon S3 Event Notifications](EventNotifications.md).

For instructions on how to configure Amazon S3 Event Notifications, see [Enabling event notifications](how-to-enable-disable-notification-intro.md).

**Note**  
In addition to enabling event notifications, make sure that you also enable S3 Replication metrics. For more information, see [Enabling S3 Replication metrics](repl-metrics.md#enabling-replication-metrics).
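
As a sketch, the following command subscribes an existing Amazon SQS queue to replication failure events on a source bucket. The bucket name and queue ARN are placeholders, and the queue's access policy must already allow Amazon S3 to send messages to it:

```
aws s3api put-bucket-notification-configuration --bucket amzn-s3-demo-source-bucket \
--notification-configuration '{"QueueConfigurations": [{"QueueArn": "arn:aws:sqs:us-east-1:111122223333:replication-failure-queue","Events": ["s3:Replication:OperationFailedReplication"]}]}'
```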

The following is an example of a message that Amazon S3 sends to publish an `s3:Replication:OperationFailedReplication` event. For more information, see [Event message structure](notification-content-structure.md).

```
{
  "Records": [
    {
      "eventVersion": "2.2",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "2024-09-05T21:04:32.527Z",
      "eventName": "Replication:OperationFailedReplication",
      "userIdentity": {
        "principalId": "s3.amazonaws.com"
      },
      "requestParameters": {
        "sourceIPAddress": "s3.amazonaws.com"
      },
      "responseElements": {
        "x-amz-request-id": "123bf045-2b4b-4ca8-a211-c34a63c59426",
        "x-amz-id-2": "12VAWNDIHnwJsRhTccqQTeAPoXQmRt22KkewMV8G3XZihAuf9CLDdmkApgZzudaIe2KlLfDqGS0="
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "ReplicationEventName",
        "bucket": {
          "name": "amzn-s3-demo-bucket1",
          "ownerIdentity": {
            "principalId": "111122223333"
          },
          "arn": "arn:aws:s3:::amzn-s3-demo-bucket1"
        },
        "object": {
          "key": "replication-object-put-test.png",
          "size": 520080,
          "eTag": "e12345ca7e88a38428305d3ff7fcb99f",
          "versionId": "abcdeH0Xp66ep__QDjR76LK7Gc9X4wKO",
          "sequencer": "0066DA1CBF104C0D51"
        }
      },
      "replicationEventData": {
        "replicationRuleId": "notification-test-replication-rule",
        "destinationBucket": "arn:aws:s3:::amzn-s3-demo-bucket2",
        "s3Operation": "OBJECT_PUT",
        "requestTime": "2024-09-05T21:03:59.168Z",
        "failureReason": "AssumeRoleNotPermitted"
      }
    }
  ]
}
```

## Amazon S3 replication failure reasons
<a name="replication-failure-codes"></a>

The following table lists Amazon S3 Replication failure reasons. You can view these reasons by receiving the `s3:Replication:OperationFailedReplication` event with Amazon S3 Event Notifications and then looking at the `failureReason` value. 

You can also view these failure reasons in an S3 Batch Replication completion report. For more information, see [Batch Replication completion report](s3-batch-replication-batch.md#batch-replication-completion-report).


| Replication failure reason | Description | 
| --- | --- | 
| `AssumeRoleNotPermitted` | Amazon S3 can't assume the AWS Identity and Access Management (IAM) role that's specified in the replication configuration or in the Batch Operations job. | 
| `DstBucketInvalidRegion` | The destination bucket is not in the same AWS Region as specified by the Batch Operations job. This error is specific to Batch Replication. | 
| `DstBucketNotFound` | Amazon S3 is unable to find the destination bucket that's specified in the replication configuration. | 
| `DstBucketObjectLockConfigMissing` | To replicate objects from a source bucket with Object Lock enabled, the destination bucket must also have Object Lock enabled. This error indicates that Object Lock might not be enabled in the destination bucket. For more information, see [Object Lock considerations](object-lock-managing.md). | 
| `DstBucketUnversioned` | Versioning is not enabled for the S3 destination bucket. To replicate objects with S3 Replication, enable versioning for the destination bucket. | 
| `DstDelObjNotPermitted` | Amazon S3 is unable to replicate delete markers to the destination bucket. The `s3:ReplicateDelete` permission might be missing for the destination bucket. | 
| `DstKmsKeyInvalidState` | The AWS Key Management Service (AWS KMS) key for the destination bucket isn't in a valid state. Review and enable the required AWS KMS key. For more information about managing AWS KMS keys, see [Key states of AWS KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) in the *AWS Key Management Service Developer Guide*. | 
| `DstKmsKeyNotFound` | The AWS KMS key that's configured for the destination bucket in the replication configuration doesn't exist. | 
| `DstMultipartCompleteNotPermitted` | Amazon S3 is unable to complete multipart uploads of objects in the destination bucket. The `s3:ReplicateObject` permission might be missing for the destination bucket. | 
| `DstMultipartInitNotPermitted` | Amazon S3 is unable to initiate multipart uploads of objects to the destination bucket. The `s3:ReplicateObject` permission might be missing for the destination bucket.  | 
| `DstMultipartUploadNotPermitted` | Amazon S3 is unable to upload multipart upload objects to the destination bucket. The `s3:ReplicateObject` permission might be missing for the destination bucket.  | 
| `DstObjectHardDeleted` | S3 Batch Replication doesn't support re-replicating objects that were hard-deleted (that is, deleted with the version ID of the object) from the destination bucket. This error is specific to Batch Replication. | 
| `DstPutAclNotPermitted` | Amazon S3 is unable to replicate object access control lists (ACLs) to the destination bucket. The `s3:ReplicateObject` permission might be missing for the destination bucket. | 
| `DstPutLegalHoldNotPermitted` | Amazon S3 is unable to put an Object Lock legal hold on the destination objects when it's replicating immutable objects. The `s3:PutObjectLegalHold` permission might be missing for the destination bucket. For more information, see [Legal holds](object-lock.md#object-lock-legal-holds). | 
|  `DstPutObjectNotPermitted` | Amazon S3 is unable to replicate objects to the destination bucket. This can occur when required permissions (`s3:ReplicateObject` or `s3:ObjectOwnerOverrideToBucketOwner` permissions) are missing for the destination bucket or when the AWS KMS key policy doesn't allow the source bucket's replication role to use the AWS KMS key (`kms:Decrypt` and `kms:GenerateDataKey*` actions) at the destination bucket.  | 
|  `DstPutRetentionNotPermitted` | Amazon S3 is unable to put a retention period on the destination objects when it's replicating immutable objects. The `s3:PutObjectRetention` permission might be missing for the destination bucket. | 
| `DstPutTaggingNotPermitted` | Amazon S3 is unable to replicate object tags to the destination bucket. The `s3:ReplicateObject` permission might be missing for the destination bucket. | 
| `DstVersionNotFound` | Amazon S3 is unable to find the required object version in the destination bucket for which metadata needs to be replicated. | 
| `InitiateReplicationNotPermitted` | Amazon S3 is unable to initiate replication on objects. The `s3:InitiateReplication` permission might be missing for the Batch Operations job. This error is specific to Batch Replication. | 
| `SrcBucketInvalidRegion` | The source bucket isn't in the same AWS Region as specified by the Batch Operations job. This error is specific to Batch Replication. | 
| `SrcBucketNotFound` | Amazon S3 is unable to find the source bucket. | 
| `SrcBucketReplicationConfigMissing` | Amazon S3 couldn't find a replication configuration for the source bucket. | 
| `SrcGetAclNotPermitted` |  Amazon S3 is unable to access the object in the source bucket for replication. The `s3:GetObjectVersionAcl` permission might be missing for the source bucket object. The objects in the source bucket must be owned by the bucket owner. If ACLs are enabled, then verify if Object Ownership is set to Bucket owner preferred or Object writer. If Object Ownership is set to Bucket owner preferred, then the source bucket objects must have the `bucket-owner-full-control` ACL for the bucket owner to become the object owner. The source account can take ownership of all objects in their bucket by setting Object Ownership to Bucket owner enforced and disabling ACLs.  | 
| `SrcGetLegalHoldNotPermitted` | Amazon S3 is unable to access the S3 Object Lock legal hold information. | 
| `SrcGetObjectNotPermitted` | Amazon S3 is unable to access the object in the source bucket for replication. The `s3:GetObjectVersionForReplication` permission might be missing for the source bucket.  | 
| `SrcGetRetentionNotPermitted` | Amazon S3 is unable to access the S3 Object Lock retention period information. | 
| `SrcGetTaggingNotPermitted` | Amazon S3 is unable to access object tag information from the source bucket. The `s3:GetObjectVersionTagging` permission might be missing for the source bucket. | 
| `SrcHeadObjectNotPermitted` | Amazon S3 is unable to retrieve object metadata from the source bucket. The `s3:GetObjectVersionForReplication` permission might be missing for the source bucket.  | 
| `SrcKeyNotFound` | Amazon S3 is unable to find the source object key to replicate. The source object might have been deleted before replication was complete. | 
| `SrcKmsKeyInvalidState` | The AWS KMS key for the source bucket isn't in a valid state. Review and enable the required AWS KMS key. For more information about managing AWS KMS keys, see [Key states of AWS KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/key-state.html) in the *AWS Key Management Service Developer Guide*. | 
| `SrcObjectNotEligible` | Some objects aren't eligible for replication. This might be because of the object's storage class, or because the object's tags don't match the replication configuration. | 
| `SrcObjectNotFound` | Source object does not exist. | 
| `SrcReplicationNotPending` | Amazon S3 has already replicated this object. This object is no longer pending replication. | 
| `SrcVersionNotFound` | Amazon S3 is unable to find the source object version to replicate. The source object version might have been deleted before replication was complete. | 

### Related topics
<a name="replication-metrics-related-topics"></a>

[Setting up permissions for live replication](setting-repl-config-perm-overview.md)

[Troubleshooting replication](replication-troubleshoot.md)

# Getting replication status information
<a name="replication-status"></a>

Replication status can help you determine the current state of an object being replicated. The replication status of a source object will return either `PENDING`, `COMPLETED`, or `FAILED`. The replication status of a replica will return `REPLICA`.

You can also use replication status values when you're creating S3 Batch Replication jobs. For example, you can use these status values to replicate objects that have either never been replicated or that have failed replication. For more information about using these values with Batch Replication, see [Using replication status information with Batch Replication jobs](#replication-status-batch-replication).

**Topics**
+ [Replication status overview](#replication-status-overview)
+ [Replication status if replicating to multiple destination buckets](#replication-status-multiple-destinations)
+ [Replication status if Amazon S3 replica modification sync is enabled](#replication-status-replica-mod-syn)
+ [Using replication status information with Batch Replication jobs](#replication-status-batch-replication)
+ [Finding replication status](#replication-status-usage)

## Replication status overview
<a name="replication-status-overview"></a>

In replication, you have a source bucket on which you configure replication and one or more destination buckets where Amazon S3 replicates objects. When you request an object (by using `GetObject`) or object metadata (by using `HeadObject`) from these buckets, Amazon S3 returns the `x-amz-replication-status` header in the response: 
+ When you request an object from the source bucket, Amazon S3 returns the `x-amz-replication-status` header if the object in your request is eligible for replication. 

  For example, suppose that you specify the object prefix `TaxDocs` in your replication configuration to tell Amazon S3 to replicate only objects with the key name prefix `TaxDocs`. Any objects that you upload that have this key name prefix—for example, `TaxDocs/document1.pdf`—will be replicated. For object requests with this key name prefix, Amazon S3 returns the `x-amz-replication-status` header with one of the following values for the object's replication status: `PENDING`, `COMPLETED`, or `FAILED`.
**Note**  
If object replication fails after you upload an object, you can't retry replication. You must upload the object again, or you must use S3 Batch Replication to replicate any failed objects. S3 Lifecycle blocks expiration and transition actions on objects with `PENDING` or `FAILED` replication status. For more information about using Batch Replication, see [Replicating existing objects with Batch Replication](s3-batch-replication-batch.md).   
Objects transition to a `FAILED` state for issues such as missing replication role permissions, AWS Key Management Service (AWS KMS) permissions, or bucket permissions. For temporary failures, such as if a bucket or Region is unavailable, replication status doesn't transition to `FAILED`, but remains `PENDING`. After the resource is back online, Amazon S3 resumes replicating those objects.
+ When you request an object from a destination bucket, if the object in your request is a replica that Amazon S3 created, Amazon S3 returns the `x-amz-replication-status` header with the value `REPLICA`.

**Note**  
Before deleting an object from a source bucket that has replication enabled, check the object's replication status to make sure that the object has been replicated.   
If an S3 Lifecycle configuration is enabled on the source bucket, Amazon S3 suspends lifecycle actions until it marks the object's replication status as `COMPLETED`. If replication status is `FAILED`, S3 Lifecycle continues to block expiration and transition actions on the object until you resolve the underlying replication issue. For more information, see [S3 Lifecycle and replication](lifecycle-and-other-bucket-config.md#lifecycle-and-replication).

## Replication status if replicating to multiple destination buckets
<a name="replication-status-multiple-destinations"></a>

When you replicate objects to multiple destination buckets, the `x-amz-replication-status` header acts differently. The header of the source object returns a value of `COMPLETED` only when replication is successful to all destinations. The header remains at the `PENDING` value until replication has completed for all destinations. If one or more destinations fail replication, the header returns `FAILED`.

## Replication status if Amazon S3 replica modification sync is enabled
<a name="replication-status-replica-mod-syn"></a>

When your replication rules enable Amazon S3 replica modification sync, replicas can report statuses other than `REPLICA`. If metadata changes are in the process of replicating, the `x-amz-replication-status` header returns `PENDING`. If replica modification sync fails to replicate metadata, the header returns `FAILED`. If metadata is replicated correctly, the replicas return the header `REPLICA`.

## Using replication status information with Batch Replication jobs
<a name="replication-status-batch-replication"></a>

When creating a Batch Replication job, you can optionally specify additional filters, such as the object creation date and replication status, to reduce the scope of the job.

You can filter objects to replicate based on the `ObjectReplicationStatuses` value, by providing one or more of the following values:
+ `"NONE"` – Indicates that Amazon S3 has never attempted to replicate the object before.
+ `"FAILED"` – Indicates that Amazon S3 has attempted, but failed, to replicate the object before.
+ `"COMPLETED"` – Indicates that Amazon S3 has successfully replicated the object before.
+ `"REPLICA"` – Indicates that this is a replica object that Amazon S3 has replicated from another source.

For more information about using these replication status values with Batch Replication, see [Filters for a Batch Replication job](s3-batch-replication-batch.md#batch-replication-filters).
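
For example, in a `create-job` request made with the AWS CLI, these statuses appear in the `Filter` element of the `S3JobManifestGenerator`, as in the following fragment, which scopes the job to objects that have never replicated or that failed to replicate:

```
"Filter": {
    "EligibleForReplication": true,
    "ObjectReplicationStatuses": ["NONE","FAILED"]
}
```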

## Finding replication status
<a name="replication-status-usage"></a>

To get the replication status of the objects in a bucket, you can use the Amazon S3 Inventory tool. Amazon S3 sends a CSV file to the destination bucket that you specify in the inventory configuration. You can also use Amazon Athena to query the replication status in the inventory report. For more information about Amazon S3 Inventory, see [Cataloging and analyzing your data with S3 Inventory](storage-inventory.md).

You can also find the object replication status by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), or the AWS SDK. 

### Using the S3 console
<a name="replication-status-console"></a>

In the Amazon S3 console, you can view the replication status for an object on the object's details page.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **General purpose buckets** list, choose the name of the replication source bucket.

1. In the **Objects** list, choose the object name. The object's details page appears. 

1. On the **Properties** tab, scroll down to the **Object management overview** section. Under **Management configurations**, see the value under **Replication status**.

### Using the AWS CLI
<a name="replication-status-cli"></a>

Use the AWS Command Line Interface (AWS CLI) `head-object` command to retrieve object metadata, as shown in the following example. Replace `amzn-s3-demo-source-bucket1` with the name of your replication source bucket, and replace the other `user input placeholders` with your own information.

```
aws s3api head-object --bucket amzn-s3-demo-source-bucket1 --key object-key --version-id object-version-id           
```

The command returns object metadata, including the `ReplicationStatus` as shown in the following example response.

```
{
   "AcceptRanges":"bytes",
   "ContentType":"image/jpeg",
   "LastModified":"Mon, 23 Mar 2015 21:02:29 GMT",
   "ContentLength":3191,
   "ReplicationStatus":"COMPLETED",
   "VersionId":"jfnW.HIMOfYiD_9rGbSkmroXsFj3fqZ.",
   "ETag":"\"6805f2cfc46c0f04559748bb039d69ae\"",
   "Metadata":{

   }
}
```

### Using the AWS SDKs
<a name="replication-status-sdk"></a>

The following code fragments get the replication status of an object by using the AWS SDK for Java and the AWS SDK for .NET, respectively. 

------
#### [ Java ]

```
// Uses the AWS SDK for Java v1 (requires com.amazonaws.services.s3.Headers,
// com.amazonaws.services.s3.model.* imports, and an initialized AmazonS3 client).
GetObjectMetadataRequest metadataRequest = new GetObjectMetadataRequest(bucketName, key);
ObjectMetadata metadata = s3Client.getObjectMetadata(metadataRequest);

// The replication status is returned in the x-amz-replication-status response header.
System.out.println("Replication Status : " + metadata.getRawMetadataValue(Headers.OBJECT_REPLICATION_STATUS));
```

------
#### [ .NET ]

```
// Retrieve the object's metadata. The response includes the replication status.
GetObjectMetadataRequest getmetadataRequest = new GetObjectMetadataRequest
{
    BucketName = sourceBucket,
    Key        = objectKey
};

GetObjectMetadataResponse getmetadataResponse = client.GetObjectMetadata(getmetadataRequest);
Console.WriteLine("Object replication status: {0}", getmetadataResponse.ReplicationStatus);
```

------

# Managing multi-Region traffic with Multi-Region Access Points
<a name="MultiRegionAccessPoints"></a>

Amazon S3 Multi-Region Access Points provide a global endpoint that applications can use to fulfill requests from S3 buckets that are located in multiple AWS Regions. You can use Multi-Region Access Points to build multi-Region applications with the same architecture that's used in a single Region, and then run those applications anywhere in the world. Instead of sending requests over the congested public internet, Multi-Region Access Points provide built-in network resilience with acceleration of internet-based requests to Amazon S3. Application requests made to a Multi-Region Access Point global endpoint use [AWS Global Accelerator](https://docs.aws.amazon.com/global-accelerator/latest/dg/) to automatically route over the AWS global network to the closest-proximity S3 bucket with an active routing status.

If a Regional traffic disruption occurs, you can use Multi-Region Access Point failover controls to shift S3 data-request traffic between AWS Regions and redirect it away from the disruption within minutes. You can also test application resiliency by conducting application failover and disaster recovery simulations. If you need to connect to and accelerate requests to S3 from outside of a VPC, you can simplify your application and network architecture with Amazon S3 Multi-Region Access Points. Your Multi-Region Access Point requests are routed over the AWS global network and then back to S3 within the AWS Region, without traversing the public internet. As a result, you can build more highly available applications.

When you create and set up a Multi-Region Access Point, you specify a set of AWS Regions where you want to store data to be served through that Multi-Region Access Point. You can use the provided Multi-Region Access Point endpoint name to connect your clients. After you've established your client connections, you can select the existing or new buckets that you'd like to route Multi-Region Access Point requests between. Then, use [S3 Cross-Region Replication (CRR)](https://aws.amazon.com/s3/features/replication/) rules to synchronize data among buckets in those Regions.

After you've set up your Multi-Region Access Point, you can request or write data through the Multi-Region Access Point global endpoint. Amazon S3 automatically serves requests to the replicated data set from the closest available Region. Within the AWS Management Console, you can also view the underlying replication topology and replication metrics related to your Multi-Region Access Point requests. This gives you an even easier way to build, manage, and monitor storage for multi-Region applications. Alternatively, you can use Amazon CloudFront to automate the creation and configuration of S3 Multi-Region Access Points.

The following image is a graphical representation of an Amazon S3 Multi-Region Access Point in an active-active configuration. The graphic shows how Amazon S3 requests are automatically routed to buckets in the closest active AWS Region.

![\[Diagram showing requests routed through an Amazon S3 Multi-Region Access Point.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/MultiRegionAccessPoints.png)


 The following image is a graphical representation of an Amazon S3 Multi-Region Access Point in an active-passive configuration. The graphic shows how you can control Amazon S3 data-access traffic to fail over between active and passive AWS Regions.

![\[Diagram showing an Amazon S3 Multi-Region Access Point in an active-passive configuration.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/MultiRegionAccessPointsFailover.png)

**Topics**
+ [Creating Multi-Region Access Points](CreatingMultiRegionAccessPoints.md)
+ [Configuring a Multi-Region Access Point for use with AWS PrivateLink](MultiRegionAccessConfiguration.md)
+ [Making requests through a Multi-Region Access Point](MultiRegionAccessPointRequests.md)

# Creating Multi-Region Access Points
<a name="CreatingMultiRegionAccessPoints"></a>

To create a Multi-Region Access Point in Amazon S3, you do the following: 
+ Specify the name for the Multi-Region Access Point.
+ Choose one bucket in each AWS Region that you want to serve requests for the Multi-Region Access Point.
+ Configure the Amazon S3 Block Public Access settings for the Multi-Region Access Point.

You provide all of this information in a create request, which Amazon S3 processes asynchronously. Amazon S3 provides a token that you can use to monitor the status of the asynchronous creation request. 

Make sure to resolve security warnings, errors, general warnings, and suggestions from AWS Identity and Access Management Access Analyzer before you save your policy. IAM Access Analyzer runs policy checks to validate your policy against IAM [policy grammar](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_grammar.html) and [best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html). These checks generate findings and provide actionable recommendations to help you author policies that are functional and conform to security best practices. To learn more about validating policies using IAM Access Analyzer, see [IAM Access Analyzer policy validation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-validation.html) in the *IAM User Guide*. To view a list of the warnings, errors, and suggestions that are returned by IAM Access Analyzer, see [IAM Access Analyzer policy check reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-reference-policy-checks.html).

When you use the API, the request to create a Multi-Region Access Point is asynchronous. When you submit a request to create a Multi-Region Access Point, Amazon S3 synchronously authorizes the request. It then immediately returns a token that you can use to track the progress of the creation request. For more information about tracking asynchronous requests to create and manage Multi-Region Access Points, see [Using Multi-Region Access Points with supported API operations](MrapOperations.md). 
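
For example, after you submit a create request through the AWS CLI, you can poll the returned token to check whether the asynchronous request has completed. This is a sketch; `requestArn` stands in for the `RequestTokenARN` value that the create request returns.

```
aws s3control describe-multi-region-access-point-operation \
    --account-id 111122223333 \
    --request-token-arn requestArn
```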

After you create the Multi-Region Access Point, you can create an access control policy for it. Each Multi-Region Access Point can have an associated policy. A Multi-Region Access Point policy is a resource-based policy that you can use to limit the use of the Multi-Region Access Point by resource, user, or other conditions.

**Note**  
For an application or user to be able to access an object through a Multi-Region Access Point, both of the following policies must permit the request:   
The access policy for the Multi-Region Access Point
The access policy for the underlying bucket that contains the object
When the two policies are different, the more restrictive policy takes precedence.   
To simplify permissions management for Multi-Region Access Points, you can delegate access control from the bucket to the Multi-Region Access Point. For more information, see [Multi-Region Access Point policy examples](MultiRegionAccessPointPermissions.md#MultiRegionAccessPointPolicyExamples).

Using a bucket with a Multi-Region Access Point doesn't change the bucket's behavior when the bucket is accessed through the existing bucket name or an Amazon Resource Name (ARN). All existing operations against the bucket continue to work as before. Restrictions that you include in a Multi-Region Access Point policy apply only to requests that are made through the Multi-Region Access Point. 

You can update the policy for a Multi-Region Access Point after creating it, but you can't delete the policy. However, you can update the Multi-Region Access Point policy to deny all permissions. 
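
After you apply a policy, you can check whether it allows public access. The following AWS CLI command is a sketch that uses `get-multi-region-access-point-policy-status`; `example-multi-region-access-point-name` is a placeholder.

```
aws s3control get-multi-region-access-point-policy-status \
    --account-id 111122223333 \
    --name example-multi-region-access-point-name
```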

**Topics**
+ [Rules for naming Amazon S3 Multi-Region Access Points](multi-region-access-point-naming.md)
+ [Rules for choosing buckets for Amazon S3 Multi-Region Access Points](multi-region-access-point-buckets.md)
+ [Create an Amazon S3 Multi-Region Access Point](multi-region-access-point-create-examples.md)
+ [Blocking public access with Amazon S3 Multi-Region Access Points](multi-region-access-point-block-public-access.md)
+ [Viewing Amazon S3 Multi-Region Access Points configuration details](multi-region-access-point-view-examples.md)
+ [Deleting a Multi-Region Access Point](multi-region-access-point-delete-examples.md)

# Rules for naming Amazon S3 Multi-Region Access Points
<a name="multi-region-access-point-naming"></a>

When you create a Multi-Region Access Point, you give it a name, which is a string that you choose. You can't change the name of the Multi-Region Access Point after it is created. The name must be unique in your AWS account, and it must conform to the naming requirements listed in [Multi-Region Access Point restrictions and limitations](MultiRegionAccessPointRestrictions.md). To help you identify the Multi-Region Access Point, use a name that is meaningful to you or to your organization, or one that reflects the scenario. 

You use this name when invoking Multi-Region Access Point management operations, such as `GetMultiRegionAccessPoint` and `PutMultiRegionAccessPointPolicy`. The name is not used to send requests to the Multi-Region Access Point, and it doesn’t need to be exposed to clients who make requests by using the Multi-Region Access Point. 

When Amazon S3 creates a Multi-Region Access Point, it automatically assigns an alias to it. This alias is a unique alphanumeric string that ends in `.mrap`. The alias is used to construct the hostname and the Amazon Resource Name (ARN) for a Multi-Region Access Point. The fully qualified name is also based on the alias for the Multi-Region Access Point.

You can’t determine the name of a Multi-Region Access Point from its alias, so you can disclose an alias without risk of exposing the name, purpose, or owner of the Multi-Region Access Point. Amazon S3 selects the alias for each new Multi-Region Access Point, and the alias can’t be changed. For more information about addressing a Multi-Region Access Point, see [Making requests through a Multi-Region Access Point](MultiRegionAccessPointRequests.md). 

Multi-Region Access Point aliases are unique throughout time and aren’t based on the name or configuration of a Multi-Region Access Point. If you create a Multi-Region Access Point, and then delete it and create another one with the same name and configuration, the second Multi-Region Access Point will have a different alias than the first. New Multi-Region Access Points can never have the same alias as a previous Multi-Region Access Point.
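
To see the alias that Amazon S3 assigned to each of your Multi-Region Access Points, you can list them. The following AWS CLI command is a sketch that returns the name, alias, and Region configuration for each Multi-Region Access Point in the account.

```
aws s3control list-multi-region-access-points --account-id 111122223333
```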

# Rules for choosing buckets for Amazon S3 Multi-Region Access Points
<a name="multi-region-access-point-buckets"></a>

Each Multi-Region Access Point is associated with the Regions where you want to fulfill requests. The Multi-Region Access Point must be associated with exactly one bucket in each of those Regions. You specify the name of each bucket in the request to create the Multi-Region Access Point. Buckets that support the Multi-Region Access Point can either be in the same AWS account that owns the Multi-Region Access Point, or they can be in other AWS accounts.

 A single bucket can be used by multiple Multi-Region Access Points. 

**Important**  
You can specify the buckets that are associated with a Multi-Region Access Point only at the time that you create it. After it is created, you can’t add, modify, or remove buckets from the Multi-Region Access Point configuration. To change the buckets, you must delete the entire Multi-Region Access Point and create a new one. 
You can't delete a bucket that is part of a Multi-Region Access Point. If you want to delete a bucket that's attached to a Multi-Region Access Point, delete the Multi-Region Access Point first. 
If you add a bucket that's owned by another account to your Multi-Region Access Point, the bucket owner must also update their bucket policy to grant access permissions to the Multi-Region Access Point. Otherwise, the Multi-Region Access Point won't be able to retrieve data from that bucket. For example policies that show how to grant such access, see [Multi-Region Access Point policy examples](https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiRegionAccessPointPermissions.html#MultiRegionAccessPointPolicyExamples). 
 Not all Regions support Multi-Region Access Points. To see the list of supported Regions, see [Multi-Region Access Point restrictions and limitations](MultiRegionAccessPointRestrictions.md). 

You can create replication rules to synchronize data between buckets. These rules enable you to automatically copy data from source buckets to destination buckets. Having buckets connected to a Multi-Region Access Point does not affect how replication works. Configuring replication with Multi-Region Access Points is described in a later section.
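
As a minimal sketch, the following AWS CLI command configures one direction of such replication; for two-way replication, you would apply a mirrored configuration to the other bucket. The role name `example-replication-role` is a placeholder, and both buckets must have S3 Versioning enabled.

```
aws s3api put-bucket-replication \
    --bucket amzn-s3-demo-bucket1 \
    --replication-configuration '{
        "Role": "arn:aws:iam::111122223333:role/example-replication-role",
        "Rules": [{
            "ID": "replicate-to-bucket2",
            "Status": "Enabled",
            "Priority": 1,
            "DeleteMarkerReplication": { "Status": "Disabled" },
            "Filter": { "Prefix": "" },
            "Destination": { "Bucket": "arn:aws:s3:::amzn-s3-demo-bucket2" }
        }]
    }'
```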

**Important**  
When you make a request to a Multi-Region Access Point, the Multi-Region Access Point isn't aware of the data contents of the buckets in the Multi-Region Access Point. Therefore, the bucket that gets the request might not contain the requested data. To create consistent datasets in the Amazon S3 buckets that are associated with a Multi-Region Access Point, we recommend that you configure S3 Cross-Region Replication (CRR). For more information, see [Configuring replication for use with Multi-Region Access Points](MultiRegionAccessPointBucketReplication.md).

# Create an Amazon S3 Multi-Region Access Point
<a name="multi-region-access-point-create-examples"></a>

The following examples demonstrate how to create a Multi-Region Access Point by using the Amazon S3 console and the AWS CLI.

## Using the S3 console
<a name="multi-region-access-point-create-console"></a>

**To create a Multi-Region Access Point**
**Note**  
Multi-Region Access Point opt-in Regions aren't currently supported in the Amazon S3 console.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Multi-Region Access Points**.

1. Choose **Create Multi-Region Access Point** to begin creating your Multi-Region Access Point.

1. On the **Multi-Region Access Point** page, supply a name for the Multi-Region Access Point in the **Multi-Region Access Point name** field.

1. Select the buckets that will be associated with this Multi-Region Access Point. You can choose buckets that are in your account, or you can choose buckets from other accounts.
**Note**  
You must add at least one bucket from either your account or other accounts. Also, be aware that Multi-Region Access Points support only one bucket per AWS Region. Therefore, you can’t add two buckets from the same Region. [AWS Regions that are disabled by default](https://docs.aws.amazon.com/general/latest/gr/rande-manage.html) are not supported.
   + To add a bucket that is in your account, choose **Add buckets**. A list of all the buckets in your account appears. You can search for your bucket by name, or sort the bucket names in alphabetical order.
   + To add a bucket from another account, choose **Add bucket from other accounts**. Make sure that you know the exact bucket name and AWS account ID because you can't search or browse for buckets in other accounts.
**Note**  
You must enter a valid AWS account ID and bucket name. The bucket must also be in a supported Region, or you will encounter an error when you try to create your Multi-Region Access Point. For the list of Regions that support Multi-Region Access Points, see [Multi-Region Access Points restrictions and limitations](https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiRegionAccessPointRestrictions.html).

1. (Optional) If you need to remove a bucket that you added, choose **Remove**.
**Note**  
You can’t add or remove buckets to this Multi-Region Access Point after you’ve finished creating it.

1. Under **Block Public Access settings for this Multi-Region Access Point**, select the Block Public Access settings that you want to apply to the Multi-Region Access Point. By default, all Block Public Access settings are enabled for new Multi-Region Access Points. We recommend that you leave all settings enabled unless you know that you have a specific need to disable any of them.
**Note**  
You can't change the Block Public Access settings for a Multi-Region Access Point after the Multi-Region Access Point has been created. Therefore, if you're going to block public access, make sure that your applications work correctly without public access before you create a Multi-Region Access Point.

1. Choose **Create Multi-Region Access Point**.

**Important**  
When you add a bucket that's owned by another account to your Multi-Region Access Point, the bucket owner must also update their bucket policy to grant access permissions to the Multi-Region Access Point. Otherwise, the Multi-Region Access Point won't be able to retrieve data from that bucket. For example policies that show how to grant such access, see [Multi-Region Access Point policy examples](https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiRegionAccessPointPermissions.html#MultiRegionAccessPointPolicyExamples).

## Using the AWS CLI
<a name="multi-region-access-point-create-cli"></a>

You can use the AWS CLI to create a Multi-Region Access Point. When you create the Multi-Region Access Point, you must provide all the buckets that it will support. You can't add buckets to the Multi-Region Access Point after it has been created. 

 The following example creates a Multi-Region Access Point with two buckets by using the AWS CLI. To use this example command, replace the `user input placeholders` with your own information.

**Note**  
To create a Multi-Region Access Point using buckets in an opt-in Region, make sure to [enable all opt-in Regions](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-regions.html) first. Otherwise, you’ll get a `403 InvalidRegion` error when you try to create a Multi-Region Access Point using buckets for an opt-in Region that you haven’t actually opted in to.

```
aws s3control create-multi-region-access-point --account-id 111122223333 --details '{
        "Name": "simple-multiregionaccesspoint-with-two-regions",
        "PublicAccessBlock": {
            "BlockPublicAcls": true,
            "IgnorePublicAcls": true,
            "BlockPublicPolicy": true,
            "RestrictPublicBuckets": true
        },
        "Regions": [
            { "Bucket": "amzn-s3-demo-bucket1" }, 
            { "Bucket": "amzn-s3-demo-bucket2" } 
        ]
    }' --region us-west-2
```

# Blocking public access with Amazon S3 Multi-Region Access Points
<a name="multi-region-access-point-block-public-access"></a>

Each Multi-Region Access Point has distinct settings for Amazon S3 Block Public Access. These settings operate in conjunction with the Block Public Access settings for the AWS account that owns the Multi-Region Access Point and the underlying buckets. 

When Amazon S3 authorizes a request, it applies the most restrictive combination of these settings. If the Block Public Access settings for any of these resources (the Multi-Region Access Point owner account, the underlying bucket, or the bucket owner account) block access for the requested action or resource, Amazon S3 rejects the request.

We recommend that you enable all Block Public Access settings unless you have a specific need to disable any of them. By default, all Block Public Access settings are enabled for a Multi-Region Access Point. If Block Public Access is enabled, the Multi-Region Access Point can't accept internet-based requests.

**Important**  
You can't change the Block Public Access settings for a Multi-Region Access Point after it has been created. 

 For more information about Amazon S3 Block Public Access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md). 

# Viewing Amazon S3 Multi-Region Access Points configuration details
<a name="multi-region-access-point-view-examples"></a>

The following examples demonstrate how to view Multi-Region Access Point configuration details by using the Amazon S3 console and the AWS CLI. 

## Using the S3 console
<a name="multi-region-access-point-view-console"></a>

**To view the configuration details for a Multi-Region Access Point**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Multi-Region Access Points**.

1. Choose the name of the Multi-Region Access Point for which you want to view the configuration details.
   + The **Properties** tab lists all of the buckets that are associated with your Multi-Region Access Point, the creation date, the Amazon Resource Name (ARN), and the alias. The AWS account ID column also lists any buckets owned by external accounts that are associated with your Multi-Region Access Point.
   + The **Permissions** tab lists the Block Public Access settings that are applied to the buckets associated with this Multi-Region Access Point. You can also view the Multi-Region Access Point policy for your Multi-Region Access Point, if you’ve created one. The **Info** alert on the **Permissions** page also lists all the buckets (in your account and other accounts) for this Multi-Region Access Point that have the **Public Access is blocked** setting enabled.
   + The **Replication and failover** tab provides a map view of the buckets that are associated with your Multi-Region Access Point and the Regions that the buckets reside in. If there are buckets from another account that you don’t have permission to pull data from, the Region will be marked in red on the **Replication summary** map, indicating that it is an **AWS Region with errors getting replication status**.
**Note**  
To retrieve replication status information from a bucket in an external account, the bucket owner must grant you the `s3:GetBucketReplication` permission in their bucket policy.

     This tab also provides the replication metrics, replication rules, and failover statuses for the Regions that are used with your Multi-Region Access Point.

## Using the AWS CLI
<a name="multi-region-access-point-view-cli"></a>

 You can use the AWS CLI to view the configuration details for a Multi-Region Access Point.

The following AWS CLI example gets your current Multi-Region Access Point configuration. To use this example command, replace the `user input placeholders` with your own information.

```
aws s3control get-multi-region-access-point --account-id 111122223333 --name example-multi-region-access-point-name
```

# Deleting a Multi-Region Access Point
<a name="multi-region-access-point-delete-examples"></a>

The following procedure explains how to delete a Multi-Region Access Point by using the Amazon S3 console. Be aware that deleting a Multi-Region Access Point doesn't delete the buckets associated with the Multi-Region Access Point. Instead, it only deletes the Multi-Region Access Point itself.

**Note**  
Using buckets in AWS opt-in Regions with S3 Multi-Region Access Points is currently supported only through the AWS SDKs and the AWS CLI. To delete a Multi-Region Access Point that uses buckets in an opt-in Region, make sure that the opt-in Region is enabled for your account first. Otherwise, if you try to delete a Multi-Region Access Point that uses buckets in disabled AWS opt-in Regions, you'll receive an error.

## Using the S3 console
<a name="multi-region-access-point-delete-console"></a>

**To delete a Multi-Region Access Point**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Multi-Region Access Points**.

1. Select the option button next to the name of your Multi-Region Access Point.

1. Choose **Delete**.

1. In the **Delete Multi-Region Access Point** dialog box, enter the name of the Multi-Region Access Point that you want to delete.
**Note**  
Make sure to enter a valid Multi-Region Access Point name. Otherwise, the **Delete** button will be disabled.

1. Choose **Delete** to confirm deletion of your Multi-Region Access Point.

## Using the AWS CLI
<a name="multi-region-access-point-delete-cli"></a>

You can use the AWS CLI to delete a Multi-Region Access Point. This action does not delete the buckets associated with the Multi-Region Access Point, only the Multi-Region Access Point itself. To use this example command, replace the `user input placeholders` with your own information.

```
aws s3control delete-multi-region-access-point --account-id 123456789012 --details Name=example-multi-region-access-point-name
```

# Configuring a Multi-Region Access Point for use with AWS PrivateLink
<a name="MultiRegionAccessConfiguration"></a>

You can use Multi-Region Access Points to route Amazon S3 request traffic between AWS Regions. Each Multi-Region Access Point global endpoint routes Amazon S3 data request traffic from multiple sources without your having to build complex networking configurations with separate endpoints. These data-request traffic sources include:
+ Traffic originating in a virtual private cloud (VPC)
+ Traffic from on-premises data centers traveling over AWS PrivateLink 
+ Traffic from the public internet

If you establish an AWS PrivateLink connection to an S3 Multi-Region Access Point, you can route S3 requests into AWS, or across multiple AWS Regions, over a private connection by using a simple network architecture and configuration. When you use AWS PrivateLink, you don't need to configure a VPC peering connection.

**Topics**
+ [Configuring Multi-Region Access Point opt-in Regions](ConfiguringMrapOptInRegions.md)
+ [Configuring a Multi-Region Access Point for use with AWS PrivateLink](MultiRegionAccessPointsPrivateLink.md)
+ [Removing access to a Multi-Region Access Point from a VPC endpoint](RemovingMultiRegionAccessPointAccess.md)

# Configuring Multi-Region Access Point opt-in Regions
<a name="ConfiguringMrapOptInRegions"></a>

An AWS opt-in Region is a Region that isn’t enabled by default in your AWS account. In contrast, Regions that are enabled by default are known as AWS Regions or commercial Regions.

To start using Multi-Region Access Points in AWS opt-in Regions, you must manually enable the opt-in Region for your AWS account before creating your Multi-Region Access Point. After you enable the opt-in Region, you can create Multi-Region Access Points with buckets in the selected opt-in Region. For instructions on how to enable or disable an opt-in Region for your AWS account or AWS Organization, see [Enable or disable a Region for standalone accounts](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-regions.html#manage-acct-regions-enable-standalone) or [Enable or disable a Region in your organization](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-regions.html#manage-acct-regions-enable-organization).
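
For example, the following AWS CLI sketch enables the Africa (Cape Town) opt-in Region for a standalone account and then checks its opt status. These commands use the AWS Account Management API; `af-south-1` is just an example Region code.

```
aws account enable-region --region-name af-south-1

aws account get-region-opt-status --region-name af-south-1
```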

**Note**  
Multi-Region Access Point opt-in Regions are currently supported only through the AWS SDKs and the AWS CLI.

S3 Multi-Region Access Points support the following AWS opt-in Regions:
+ `Africa (Cape Town)`
+ `Asia Pacific (Hong Kong)`
+ `Asia Pacific (Jakarta)`
+ `Asia Pacific (Melbourne)`
+ `Asia Pacific (Hyderabad)`
+ `Canada West (Calgary)`
+ `Europe (Zurich)`
+ `Europe (Milan)`
+ `Europe (Spain)`
+ `Israel (Tel Aviv)`
+ `Middle East (Bahrain)`
+ `Middle East (UAE)`

**Note**  
There are no additional costs for enabling an opt-in Region. However, creating or using a resource in a Multi-Region Access Point results in billing charges.

## Using a Multi-Region Access Point in an AWS opt-in Region
<a name="UsingMrapOptInRegions"></a>

To perform a [data plane operation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/MrapOperations.html) on your Multi-Region Access Point, all associated AWS accounts must enable the opt-in Regions that are part of the Multi-Region Access Point. This requirement applies to the requester account, the Multi-Region Access Point owner, S3 bucket owners, and the VPC endpoint owner. If any of these accounts don’t enable AWS opt-in Regions, the Multi-Region Access Point requests fail. For more information about the `InvalidToken` or `AllAccessDisabled` errors, see [List of error codes](https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList).

**Note**  
[Control plane operations](https://docs.aws.amazon.com/AmazonS3/latest/userguide/MrapOperations.html) such as updating your Multi-Region Access Point policy or updating your failover configuration aren’t impacted by the opt-in Region status of any Region that is part of your Multi-Region Access Point. You also don’t need to disable any active opt-in Regions before deleting a Multi-Region Access Point.

## Disabling an active AWS opt-in Region
<a name="DisablingMrapOptInRegions"></a>

If you disable an opt-in Region that is part of your Multi-Region Access Point, requests routed to that Region result in a `403 AllAccessDisabled` error. To safely disable an opt-in Region, we recommend that you first identify an alternate Region in your Multi-Region Access Point configuration to route the traffic to. You can then use Multi-Region Access Point failover controls to mark the alternate Region as active, and mark the Region to be disabled as passive. After changing the failover controls, you can disable the Region that you want to opt out of.
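
As a sketch, the following AWS CLI command marks `amzn-s3-demo-bucket1` as active (100) and `amzn-s3-demo-bucket2` as passive (0) before the second bucket's Region is disabled. The alias `mfzwi23gnjvgw.mrap` is the example alias used elsewhere in this guide, and the command must be sent to a Region that supports Multi-Region Access Point routing controls, such as `us-west-2`.

```
aws s3control submit-multi-region-access-point-routes \
    --account-id 111122223333 \
    --mrap mfzwi23gnjvgw.mrap \
    --route-updates Bucket=amzn-s3-demo-bucket1,TrafficDialPercentage=100 Bucket=amzn-s3-demo-bucket2,TrafficDialPercentage=0 \
    --region us-west-2
```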

## Enabling a previously disabled AWS opt-in Region
<a name="EnablingDisabledMrapOptInRegions"></a>

To enable an opt-in AWS Region that was previously disabled for your Multi-Region Access Point, make sure to update your AWS account settings. After you re-enable the opt-in Region, run the [PutMultiRegionAccessPointPolicy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutMultiRegionAccessPointPolicy.html) API operation to apply the Multi-Region Access Point policy to the opt-in Region.

If your Multi-Region Access Point is accessed through a VPC endpoint, we recommend that you update your VPC endpoint policy and use the [ModifyVpcEndpoint](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_ModifyVpcEndpoint.html) API operation to apply the updated policy to the re-enabled opt-in Region.

## Multi-Region Access Points policy and multiple AWS accounts
<a name="UsingMrapPolicyOptInRegions"></a>

If your Multi-Region Access Point policy grants access to multiple AWS accounts, all requester accounts must also enable the same opt-in Regions in their account settings. If a requester account submits a Multi-Region Access Point request without enabling the opt-in Regions that are part of the Multi-Region Access Point, the request results in a `400 InvalidToken` error.

## AWS opt-in Region considerations
<a name="MrapOptInRegionsConsiderations"></a>

When you access a Multi-Region Access Point from an opt-in Region, be aware of the following:
+ When you enable an opt-in Region, you can create a Multi-Region Access Point that uses buckets from that Region. When you disable an opt-in Region, Multi-Region Access Points are no longer supported in that Region. If you no longer want an opt-in Region enabled for your Multi-Region Access Point, make sure to [disable the Region for your account](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-regions.html#manage-acct-regions-enable-standalone) first. Then, create a new Multi-Region Access Point with your preferred list of opt-in Regions.
+ If you attempt to create your Multi-Region Access Point with a disabled opt-in Region, you’ll receive a `403 InvalidRegion` error. After you enable the opt-in Region, try creating the Multi-Region Access Point again.
+ The maximum number of supported Regions for a Multi-Region Access Point is 17, which includes both opt-in Regions and commercial Regions. For more information, see [Multi-Region Access Points restrictions and limitations](https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiRegionAccessPointRestrictions.html).
+ Control plane requests for Multi-Region Access Points will work, even if you haven't opted in to any Regions.
+ When you create a Multi-Region Access Point for the first time, you must opt in to all Regions that are part of the Multi-Region Access Point.
+ Any AWS accounts that are granted access to an S3 Multi-Region Access Point through the Multi-Region Access Point policy must also enable the same opt-in Regions that are part of the Multi-Region Access Point.

# Configuring a Multi-Region Access Point for use with AWS PrivateLink
<a name="MultiRegionAccessPointsPrivateLink"></a>

 AWS PrivateLink provides you with private connectivity to Amazon S3 using private IP addresses in your virtual private cloud (VPC). You can provision one or more interface endpoints inside your VPC to connect to Amazon S3 Multi-Region Access Points.

You can create **com.amazonaws.s3-global.accesspoint** endpoints for Multi-Region Access Points through the AWS Management Console, AWS CLI, or AWS SDKs. To learn more about how to configure an interface endpoint for a Multi-Region Access Point, see [Interface VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html) in the *VPC User Guide*. 

To make requests to a Multi-Region Access Point through interface endpoints, follow these steps to configure the VPC and the Multi-Region Access Point. 

**To configure a Multi-Region Access Point to use with AWS PrivateLink**

1. Create or have an appropriate VPC endpoint that can connect to Multi-Region Access Points. For more information about creating VPC endpoints, see [Interface VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-interface.html) in the *VPC User Guide*. For an example command, see the sketch that follows these steps.
**Important**  
 Make sure to create a **com.amazonaws.s3-global.accesspoint** endpoint. Other endpoint types cannot access Multi-Region Access Points. 

   After this VPC endpoint is created, all Multi-Region Access Point requests in the VPC route through this endpoint if you have private DNS enabled for the endpoint (private DNS is enabled by default). 

1. If the Multi-Region Access Point policy does not support connections from VPC endpoints, you will need to update it.

1. Verify that the individual bucket policies will allow access to the users of the Multi-Region Access Point.
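
The following AWS CLI command is a sketch of the endpoint creation mentioned in step 1; the VPC, subnet, and security group IDs are placeholders.

```
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-1a2b3c4d \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.s3-global.accesspoint \
    --subnet-ids subnet-1a2b3c4d \
    --security-group-ids sg-1a2b3c4d \
    --private-dns-enabled
```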

Remember that Multi-Region Access Points work by routing requests to buckets, not by fulfilling requests themselves. This is important because the originator of the request must have permissions both for the Multi-Region Access Point and for the individual buckets behind it. Otherwise, the request might be routed to a bucket where the originator doesn't have permissions to fulfill the request. A Multi-Region Access Point and its associated buckets can be owned by the same AWS account or by different accounts. However, VPCs from different accounts can use a Multi-Region Access Point if the permissions are configured correctly. 

Because of this, the VPC endpoint policy must allow access both to the Multi-Region Access Point and to each underlying bucket that you want to be able to fulfill requests. For example, suppose that you have a Multi-Region Access Point with the alias `mfzwi23gnjvgw.mrap`. It is backed by buckets `amzn-s3-demo-bucket1` and `amzn-s3-demo-bucket2`, both owned by AWS account `111122223333`. In this case, the following VPC endpoint policy would allow `GetObject` requests from the VPC made to `mfzwi23gnjvgw.mrap` to be fulfilled by either backing bucket. 

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
    {
        "Sid": "Read-buckets-and-MRAP-VPCE-policy",
        "Principal": "*",
        "Action": [
            "s3:GetObject"
        ],
        "Effect": "Allow",
        "Resource": [
            "arn:aws:s3:::amzn-s3-demo-bucket1/*",
            "arn:aws:s3:::amzn-s3-demo-bucket2/*",
            "arn:aws:s3::111122223333:accesspoint/mfzwi23gnjvgw.mrap/object/*"
        ]
    }]
}
```

------

As mentioned previously, you must also make sure that the Multi-Region Access Point policy is configured to support access through a VPC endpoint. You don't need to specify the VPC endpoint that is requesting access. The following sample policy would grant access to any requester trying to use the Multi-Region Access Point for `GetObject` requests. 

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
    {
        "Sid": "Open-read-MRAP-policy",
        "Effect": "Allow",
        "Principal": "*",
        "Action": [
            "s3:GetObject"
          ],
        "Resource": "arn:aws:s3::111122223333:accesspoint/mfzwi23gnjvgw.mrap/object/*"
    }]
}
```

------

And of course, the individual buckets each need a policy that supports access for requests submitted through the VPC endpoint. The following example policy grants read access to anonymous users, which includes requests made through the VPC endpoint. 

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
    {
        "Sid": "Public-read",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": [
            "arn:aws:s3:::amzn-s3-demo-bucket1/*",
            "arn:aws:s3:::amzn-s3-demo-bucket2/*"
        ]
    }]
}
```

------

 For more information about editing a VPC endpoint policy, see [Control access to services with VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html) in the *VPC User Guide*. 

# Removing access to a Multi-Region Access Point from a VPC endpoint
<a name="RemovingMultiRegionAccessPointAccess"></a>

If you own a Multi-Region Access Point and want to remove access to it from an interface endpoint, you must supply a new access policy for the Multi-Region Access Point that prevents access for requests coming through VPC endpoints. However, if the buckets in your Multi-Region Access Point support requests through VPC endpoints, they will continue to support these requests. If you want to prevent that support, you must also update the policies for the buckets. Supplying a new access policy to the Multi-Region Access Point prevents access only to the Multi-Region Access Point, not to the underlying buckets. 

**Note**  
You can't delete an access policy for a Multi-Region Access Point. To remove access to a Multi-Region Access Point, you must provide a new access policy with the modified access that you want. 

Instead of updating the access policy for the Multi-Region Access Point, you can update the bucket policies to prevent requests through VPC endpoints. In this case, users can still access the Multi-Region Access Point through the VPC endpoint. However, if the Multi-Region Access Point request is routed to a bucket where the bucket policy prevents access, the request will generate an error message. 
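
As a sketch of the first approach (supplying a new Multi-Region Access Point policy), the following statement denies all S3 actions for requests that arrive through a specific VPC endpoint. The endpoint ID `vpce-1a2b3c4d` is a placeholder, and the alias `mfzwi23gnjvgw.mrap` is the example alias used elsewhere in this guide; an explicit deny like this overrides any allows elsewhere in the policy.

```
{
    "Version": "2012-10-17",
    "Statement": [
    {
        "Sid": "Deny-requests-from-specific-VPC-endpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3::111122223333:accesspoint/mfzwi23gnjvgw.mrap",
            "arn:aws:s3::111122223333:accesspoint/mfzwi23gnjvgw.mrap/object/*"
        ],
        "Condition": {
            "StringEquals": { "aws:sourceVpce": "vpce-1a2b3c4d" }
        }
    }]
}
```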

# Making requests through a Multi-Region Access Point
<a name="MultiRegionAccessPointRequests"></a>

Like other resources, Amazon S3 Multi-Region Access Points have Amazon Resource Names (ARNs). You can use these ARNs to direct requests to Multi-Region Access Points by using the AWS Command Line Interface (AWS CLI), AWS SDKs, or the Amazon S3 API. You can also use these ARNs to identify Multi-Region Access Points in access control policies. A Multi-Region Access Point ARN doesn't include or disclose the name of the Multi-Region Access Point. For more information about ARNs, see [Amazon Resource Names (ARNs)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) in the *AWS General Reference*.

**Note**  
The Multi-Region Access Point alias and ARN cannot be used interchangeably.

Multi-Region Access Point ARNs use the following format:

 `arn:aws:s3::account-id:accesspoint/MultiRegionAccessPoint_alias`

The following are a few examples of Multi-Region Access Point ARNs: 
+ `arn:aws:s3::123456789012:accesspoint/mfzwi23gnjvgw.mrap` represents the Multi-Region Access Point with the alias `mfzwi23gnjvgw.mrap`, which is owned by AWS account `123456789012`. 
+ `arn:aws:s3::123456789012:accesspoint/*` represents all Multi-Region Access Points under the account `123456789012`. This ARN matches all Multi-Region Access Points for account `123456789012`, but doesn't match any Regional Amazon S3 Access Points because the ARN doesn’t include an AWS Region. In contrast, the ARN `arn:aws:s3:us-west-2:123456789012:accesspoint/*` matches all Regional Amazon S3 Access Points in the Region `us-west-2` for the account `123456789012`, but doesn't match any Multi-Region Access Points. 

ARNs for objects that are accessed through a Multi-Region Access Point use the following format:

 `arn:aws:s3::account-id:accesspoint/MultiRegionAccessPoint_alias/object/key`

As with Multi-Region Access Point ARNs, the ARNs for objects that are accessed through Multi-Region Access Points don't include an AWS Region. Here are some examples. 
+ `arn:aws:s3::123456789012:accesspoint/mfzwi23gnjvgw.mrap/object/-01` represents the object `-01`, which is accessed through the Multi-Region Access Point with the alias `mfzwi23gnjvgw.mrap`, owned by account `123456789012`. 
+ `arn:aws:s3::123456789012:accesspoint/mfzwi23gnjvgw.mrap/object/*` represents all objects that can be accessed through the Multi-Region Access Point with the alias `mfzwi23gnjvgw.mrap`, in account `123456789012`. 
+ `arn:aws:s3::123456789012:accesspoint/mfzwi23gnjvgw.mrap/object/-01/finance/*` represents all objects under the prefix `-01/finance/` for the Multi-Region Access Point with the alias `mfzwi23gnjvgw.mrap`, in account `123456789012`. 
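
For example, the following AWS CLI sketch downloads an object through a Multi-Region Access Point by passing the Multi-Region Access Point ARN as the bucket parameter. The key `example-object.txt` is a placeholder, and the command requires an AWS CLI version that supports SigV4A signing.

```
aws s3api get-object \
    --bucket arn:aws:s3::123456789012:accesspoint/mfzwi23gnjvgw.mrap \
    --key example-object.txt example-object.txt
```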

## Multi-Region Access Point hostnames
<a name="MultiRegionAccessPointHostnames"></a>

You can access data in Amazon S3 through a Multi-Region Access Point by using the hostname of the Multi-Region Access Point. Requests can be directed to this hostname from the public internet. If you have configured one or more internet gateways for the Multi-Region Access Point, requests can also be directed to this hostname from a virtual private cloud (VPC). For more information about creating VPC interface endpoints to use with Multi-Region Access Points, see [Configuring a Multi-Region Access Point for use with AWS PrivateLink](MultiRegionAccessPointsPrivateLink.md). 

To make requests through a Multi-Region Access Point from a VPC by using a VPC endpoint, you can use AWS PrivateLink. When you're making requests to a Multi-Region Access Point by using AWS PrivateLink, you cannot directly use an endpoint-specific Regional Domain Name System (DNS) name that ends with `region.vpce.amazonaws.com`. This hostname doesn't have a certificate associated with it, so it can't be used directly. You can still use the public DNS name of the VPC endpoint as a `CNAME` or `ALIAS` target. Alternatively, you can enable private DNS on the endpoint and use the standard Multi-Region Access Point DNS name `MultiRegionAccessPoint_alias.accesspoint.s3-global.amazonaws.com`, as described in this section. 

When you make requests to the API for Amazon S3 data operations (for example, `GetObject`) through a Multi-Region Access Point, the hostname for the request is as follows: 

`MultiRegionAccessPoint_alias.accesspoint.s3-global.amazonaws.com` 

For example, to make a `GetObject` request through the Multi-Region Access Point with the alias `mfzwi23gnjvgw.mrap`, make a request to the hostname `mfzwi23gnjvgw.mrap.accesspoint.s3-global.amazonaws.com`. The `s3-global` portion of the hostname indicates that this hostname is not for a specific Region.

Making requests through a Multi-Region Access Point is similar to making requests through a single-Region access point. However, it's important to be aware of the following differences: 
+  Multi-Region Access Point ARNs don't include an AWS Region. They follow the format `arn:aws:s3::account-id:accesspoint/MultiRegionAccessPoint_alias`. 
+  For requests made through API operations (these requests don't require the use of an ARN), Multi-Region Access Points use a different endpoint scheme. The scheme is `MultiRegionAccessPoint_alias.accesspoint.s3-global.amazonaws.com`—for example, `mfzwi23gnjvgw.mrap.accesspoint.s3-global.amazonaws.com`. Note the differences compared to a single-Region access point: 
  + Multi-Region Access Point hostnames use their alias, not the Multi-Region Access Point name. 
  + Multi-Region Access Point hostnames don't include the owner's AWS account ID. 
  + Multi-Region Access Point hostnames don't include an AWS Region. 
  + Multi-Region Access Point hostnames include `s3-global.amazonaws.com` instead of `s3.amazonaws.com`. 
+ Multi-Region Access Point requests must be signed by using Signature Version 4A (SigV4A). When you use the AWS SDKs, the SDK automatically converts SigV4 requests to SigV4A. Therefore, make sure that your [AWS SDK supports](https://docs.aws.amazon.com/sdkref/latest/guide/feature-s3-mrap.html) SigV4A as the signing implementation that is used to sign the global AWS Region requests. For more information about SigV4A, see [Signing AWS API requests](https://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html) in the *AWS General Reference*. 

## Multi-Region Access Points and Amazon S3 Transfer Acceleration
<a name="MultiRegionAccessPointsAndTransferAcceleration"></a>

Amazon S3 Transfer Acceleration is a feature that enables fast transfer of data to buckets. Transfer Acceleration is configured on the individual bucket level. For more information about Transfer Acceleration, see [Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration](transfer-acceleration.md). 

Multi-Region Access Points use a similar accelerated transfer mechanism as Transfer Acceleration for sending large objects over the AWS network. Because of this, you don't need to use Transfer Acceleration when sending requests through a Multi-Region Access Point. This increased transfer performance is automatically incorporated into the Multi-Region Access Point. 

**Topics**
+ [Multi-Region Access Point hostnames](#MultiRegionAccessPointHostnames)
+ [Multi-Region Access Points and Amazon S3 Transfer Acceleration](#MultiRegionAccessPointsAndTransferAcceleration)
+ [Permissions](MultiRegionAccessPointPermissions.md)
+ [Multi-Region Access Point restrictions and limitations](MultiRegionAccessPointRestrictions.md)
+ [Multi-Region Access Point request routing](MultiRegionAccessPointRequestRouting.md)
+ [Amazon S3 Multi-Region Access Points failover controls](MrapFailover.md)
+ [Configuring replication for use with Multi-Region Access Points](MultiRegionAccessPointBucketReplication.md)
+ [Using Multi-Region Access Points with supported API operations](MrapOperations.md)
+ [Monitoring and logging requests made through a Multi-Region Access Point to underlying resources](MultiRegionAccessPointMonitoring.md)

# Permissions
<a name="MultiRegionAccessPointPermissions"></a>

Amazon S3 Multi-Region Access Points can simplify data access for Amazon S3 buckets in multiple AWS Regions. Multi-Region Access Points are named global endpoints that you can use to perform Amazon S3 data-access object operations, such as `GetObject` and `PutObject`. Each Multi-Region Access Point can have distinct permissions and network controls for any request that is made through the global endpoint.

Each Multi-Region Access Point can also enforce a customized access policy that works in conjunction with the bucket policy that is attached to the underlying bucket. For a cross-account request to succeed, the following policies must permit the operation:
+ The Multi-Region Access Point policy
+ The underlying AWS Identity and Access Management (IAM) policy
+ The underlying bucket policy (where the request is routed to)

**Note**  
For same-account requests, only the underlying IAM policy, which grants the appropriate access, is required.

You can configure any Multi-Region Access Point policy to accept requests only from specific IAM users or groups. For an example of how to do this, see Example 2 in [Multi-Region Access Point policy examples](#MultiRegionAccessPointPolicyExamples). To restrict Amazon S3 data access to a private network, you can configure the Multi-Region Access Point policy to accept requests only from a virtual private cloud (VPC).

For example, suppose that you make a `GetObject` request through a Multi-Region Access Point by using a user called `AppDataReader` in your AWS account. To help ensure that the request won't be denied, the `AppDataReader` user must be granted the `s3:GetObject` permission by the Multi-Region Access Point and by each bucket underlying the Multi-Region Access Point. `AppDataReader` won't be able to retrieve data from any bucket that doesn't grant this permission.

**Important**  
Delegating access control for a bucket to a Multi-Region Access Point policy doesn't change the bucket's behavior when the bucket is accessed directly through its bucket name or Amazon Resource Name (ARN). All operations made directly against the bucket will continue to work as before. Restrictions that you include in a Multi-Region Access Point policy apply only to requests made through that Multi-Region Access Point.

## Managing public access to a Multi-Region Access Point
<a name="MultiRegionAccessPointPublicAccess"></a>

Multi-Region Access Points support independent Block Public Access settings for each Multi-Region Access Point. When you create a Multi-Region Access Point, you can specify the Block Public Access settings that apply to that Multi-Region Access Point. 

**Note**  
Any Block Public Access settings that are enabled under **Block Public Access settings for this account** (in your own account) or **Block Public Access settings for external buckets** still apply even if the independent Block Public Access settings for your Multi-Region Access Point are disabled.

For any request that is made through a Multi-Region Access Point, Amazon S3 evaluates the Block Public Access settings for:
+ The Multi-Region Access Point
+ The underlying buckets (including external buckets)
+ The account that owns the Multi-Region Access Point
+ The account that owns the underlying buckets (including external accounts)

If any of these settings indicate that the request should be blocked, Amazon S3 rejects the request. For more information about the Amazon S3 Block Public Access feature, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md). 

**Important**  
By default, all Block Public Access settings are enabled for Multi-Region Access Points. You must explicitly turn off any settings that you don't want to apply to a Multi-Region Access Point.   
You can't change the Block Public Access settings for a Multi-Region Access Point after it has been created. 
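
Because the account-level settings are evaluated together with the Multi-Region Access Point settings, it can be useful to review them as well. The following AWS CLI command is a sketch that retrieves the account-level Block Public Access configuration.

```
aws s3control get-public-access-block --account-id 111122223333
```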

## Viewing Block Public Access settings for a Multi-Region Access Point
<a name="viewing-bpa-mrap-settings"></a>

**To view the Block Public Access settings for a Multi-Region Access Point**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Multi-Region Access Points**.

1. Choose the name of the Multi-Region Access Point that you want to review.

1. Choose the **Permissions** tab.

1. Under **Block Public Access settings for this Multi-Region Access Point**, review the Block Public Access settings for your Multi-Region Access Point.
**Note**  
You can't edit the Block Public Access settings after the Multi-Region Access Point is created. Therefore, if you're going to block public access, make sure that your applications work correctly without public access before you create a Multi-Region Access Point. 

## Using a Multi-Region Access Point policy
<a name="use-mrap-policy"></a>

The following example Multi-Region Access Point policy grants an IAM user access to list and download files from your Multi-Region Access Point. To use this example policy, replace the `user input placeholders` with your own information.

------
#### [ JSON ]


```
{
   "Version": "2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Principal":{
            "AWS":"arn:aws:iam::123456789012:user/JohnDoe" 
         },
         "Action":[
            "s3:ListBucket",
            "s3:GetObject"
         ],
         "Resource":[
            "arn:aws:s3::111122223333:accesspoint/MultiRegionAccessPoint_alias",
            "arn:aws:s3::111122223333:accesspoint/MultiRegionAccessPoint_alias/object/*"
         ]
      }
   ]
}
```

------

To associate your Multi-Region Access Point policy with the specified Multi-Region Access Point by using the AWS Command Line Interface (AWS CLI), use the following `put-multi-region-access-point-policy` command. To use this example command, replace the `user input placeholders` with your own information. Each Multi-Region Access Point can have only one policy, so a request made to the `put-multi-region-access-point-policy` action replaces any existing policy that is associated with the specified Multi-Region Access Point.

------
#### [ AWS CLI ]

```
aws s3control put-multi-region-access-point-policy \
--account-id 111122223333 \
--details '{ "Name": "amzn-s3-demo-bucket-MultiRegionAccessPoint", "Policy": "{ \"Version\": \"2012-10-17\", \"Statement\": { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::111122223333:root\" }, \"Action\": [\"s3:ListBucket\", \"s3:GetObject\"], \"Resource\": [ \"arn:aws:s3::111122223333:accesspoint/MultiRegionAccessPoint_alias\", \"arn:aws:s3::111122223333:accesspoint/MultiRegionAccessPoint_alias/object/*\" ] } }" }'
```

------

To query your results for the previous operation, use the following command:

------
#### [ AWS CLI ]

```
aws s3control describe-multi-region-access-point-operation \
--account-id 111122223333 \
--request-token-arn requestArn
```

------

To retrieve your Multi-Region Access Point policy, use the following command:

------
#### [ AWS CLI ]

```
aws s3control get-multi-region-access-point-policy \
--account-id 111122223333 \
--name amzn-s3-demo-bucket-MultiRegionAccessPoint
```

------

## Editing the Multi-Region Access Point policy
<a name="editing-mrap-policy"></a>

The Multi-Region Access Point policy (written in JSON) provides storage access to the Amazon S3 buckets that are used with this Multi-Region Access Point. You can allow or deny specific principals to perform various actions on your Multi-Region Access Point. When a request is routed to a bucket through the Multi-Region Access Point, both the access policies for the Multi-Region Access Point and the bucket apply. The more restrictive access policy always takes precedence. 

**Note**  
If a bucket contains objects that are owned by other accounts, the Multi-Region Access Point policy doesn't apply to the objects that are owned by other AWS accounts.

After you apply a Multi-Region Access Point policy, the policy cannot be deleted. You can either edit the policy or create a new policy that overwrites the existing one.

**To edit the Multi-Region Access Point policy**



1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Multi-Region Access Points**.

1. Choose the name of the Multi-Region Access Point that you want to edit the policy for.

1. Choose the **Permissions** tab.

1. Scroll down to the **Multi-Region Access Point policy** section. Choose **Edit** to update the policy (in JSON).

1. The **Edit Multi-Region Access Point policy** page appears. You can either enter the policy directly into the text field, or you can choose **Add statement** to select policy elements from a dropdown list.
**Note**  
The console automatically displays the Multi-Region Access Point Amazon Resource Name (ARN), which you can use in the policy. For example Multi-Region Access Point policies, see [Multi-Region Access Point policy examples](#MultiRegionAccessPointPolicyExamples).

## Multi-Region Access Point policy examples
<a name="MultiRegionAccessPointPolicyExamples"></a>

Amazon S3 Multi-Region Access Points support AWS Identity and Access Management (IAM) resource policies. You can use these policies to control the use of the Multi-Region Access Point by resource, user, or other conditions. For an application or user to be able to access objects through a Multi-Region Access Point, both the Multi-Region Access Point and the underlying bucket must allow the same access.

To allow the same access to both the Multi-Region Access Point and the underlying bucket, do one of the following:
+ **(Recommended)** To simplify access controls when using an Amazon S3 Multi-Region Access Point, delegate access control for the Amazon S3 bucket to the Multi-Region Access Point. For an example of how to do this, see Example 1 in this section. 
+ Add the same permissions contained in the Multi-Region Access Point policy to the underlying bucket policy.

**Important**  
Delegating access control for a bucket to a Multi-Region Access Point policy doesn't change the bucket's behavior when the bucket is accessed directly through its bucket name or Amazon Resource Name (ARN). All operations made directly against the bucket will continue to work as before. Restrictions that you include in a Multi-Region Access Point policy apply only to requests made through that Multi-Region Access Point.

**Example 1 – Delegating access to specific Multi-Region Access Points in your bucket policy (for the same account or cross-account)**  
The following example bucket policy grants full bucket access to a specific Multi-Region Access Point. This means that all access to this bucket is controlled by the policies that are attached to the Multi-Region Access Point. We recommend configuring your buckets this way for all use cases that don't require direct access to the bucket. You can use this bucket policy structure for Multi-Region Access Points in either the same account or in another account.    
****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement" : [
    {
        "Effect": "Allow",
        "Principal" : { "AWS": "*" },
        "Action" : "*",
        "Resource" : [ "arn:aws:s3:::amzn-s3-demo-bucket", "arn:aws:s3:::amzn-s3-demo-bucket/*"],
        "Condition": {
            "StringEquals" : { "s3:DataAccessPointArn" : "arn:aws:s3::111122223333:accesspoint/example-multi-region-access-point" }
        }
    }]
}
```
If there are multiple Multi-Region Access Points that you're granting access to, make sure to list each Multi-Region Access Point.

**Example 2 – Granting a user access to a Multi-Region Access Point in your Multi-Region Access Point policy**  
The following Multi-Region Access Point policy grants the IAM user `JohnDoe` in account `111122223333` permission to list and read the objects that are accessible through the Multi-Region Access Point, which is specified by its alias-based ARN.    
****  

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
       "Effect": "Allow",
       "Principal": {
          "AWS": "arn:aws:iam::111122223333:user/JohnDoe"
       },
       "Action": [
          "s3:ListBucket",
          "s3:GetObject"
       ],
       "Resource": [ 
          "arn:aws:s3::111122223333:accesspoint/MultiRegionAccessPoint_alias",
          "arn:aws:s3::111122223333:accesspoint/MultiRegionAccessPoint_alias/object/*"
       ]
     }
  ]
}
```

**Example 3 – Multi-Region Access Point policy that allows bucket listing**  
The following Multi-Region Access Point policy allows account `123456789012` permission to list the objects that are accessible through the Multi-Region Access Point, which is specified by its alias-based ARN.
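A minimal sketch of such a policy follows; the account IDs and the Multi-Region Access Point alias are placeholders to replace with your own values:

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Principal":{
            "AWS":"arn:aws:iam::123456789012:root"
         },
         "Action":"s3:ListBucket",
         "Resource":"arn:aws:s3::111122223333:accesspoint/MultiRegionAccessPoint_alias"
      }
   ]
}
```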

# Multi-Region Access Point restrictions and limitations
<a name="MultiRegionAccessPointRestrictions"></a>

Multi-Region Access Points in Amazon S3 have the following restrictions and limitations. 

## Names and aliases
<a name="MultiRegionAccessPointRestrictions-Names"></a>

Multi-Region Access Point names must meet the following requirements:
+ Must be unique within a single AWS account.
+ Must begin with a number or lowercase letter.
+ Must be between 3 and 50 characters long.
+ Can't begin or end with a hyphen (`-`).
+ Can't contain underscores (`_`), uppercase letters, or periods (`.`).
+ Can't be edited after they are created.

Multi-Region Access Point aliases, which are different from Multi-Region Access Point names, are automatically generated by Amazon S3 and can't be edited or reused. For more information about the difference between Multi-Region Access Point aliases and Multi-Region Access Point names and their respective naming rules, see [Rules for naming Amazon S3 Multi-Region Access Points](multi-region-access-point-naming.md).

## Accessing a Multi-Region Access Point
<a name="MultiRegionAccessPointRestrictions-Access"></a>

You can't access data through a Multi-Region Access Point by using gateway endpoints, but you can do so by using interface endpoints. To use AWS PrivateLink, you must create interface VPC endpoints. For more information, see [Configuring a Multi-Region Access Point for use with AWS PrivateLink](MultiRegionAccessPointsPrivateLink.md). Be aware that IPv6 isn't supported.
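For example, the following AWS CLI sketch creates an interface endpoint for use with Multi-Region Access Points. The VPC, subnet, and security group IDs are placeholders; the endpoint uses the Multi-Region Access Point service name `com.amazonaws.s3-global.accesspoint`:

```
aws ec2 create-vpc-endpoint \
--vpc-id vpc-1a2b3c4d \
--vpc-endpoint-type Interface \
--service-name com.amazonaws.s3-global.accesspoint \
--subnet-ids subnet-0123456789abcdef0 \
--security-group-ids sg-1a2b3c4d
```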

To use Multi-Region Access Points with Amazon CloudFront, you must configure the Multi-Region Access Point as a custom origin. For more information about origin types, see [Using various origins with CloudFront distributions](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html). For more information about using Multi-Region Access Points with Amazon CloudFront, see [Building an active-active, proximity-based application across multiple Regions](https://aws.amazon.com/blogs/storage/building-an-active-active-latency-based-application-across-multiple-regions/) on the *AWS Storage Blog*.

**Note**  
S3 on Outposts buckets aren't supported.

## Signing AWS API requests
<a name="MultiRegionAccessPointRestrictions-Signing"></a>

To send signed AWS API requests to a Multi-Region Access Point, your client must meet the following minimum requirements:

**Note**  
Multi-Region Access Points don't support anonymous requests.
+ Support for Transport Layer Security (TLS) version 1.2.
+ Support for Signature Version 4A (SigV4A) – This extension of Signature Version 4 allows requests to be signed for multiple AWS Regions. This feature is useful for API operations that might result in data access from one of several Regions. When using an AWS SDK, you supply your credentials, and requests to Multi-Region Access Points use SigV4A without additional configuration. Make sure to check your [AWS SDK compatibility](https://docs.aws.amazon.com/sdkref/latest/guide/feature-s3-mrap.html) with the SigV4A algorithm. For more information about SigV4A, see [Signing AWS API requests](https://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html) in the *AWS General Reference*. For an example request, see the sketch after the following note.
**Note**  
To use SigV4A with temporary security credentials—for example, when using AWS Identity and Access Management (IAM) roles—you can request the temporary credentials from a Regional AWS Security Token Service (AWS STS) endpoint. If you request temporary credentials from the global AWS STS endpoint (`sts.amazonaws.com`), then you must first set the Region compatibility of session tokens for the global endpoint to be valid in all AWS Regions. For more information, see [Managing AWS STS in an AWS Region](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html) in the *IAM User Guide*.
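For example, when you make a request by using the Multi-Region Access Point ARN with an AWS SDK or a compatible version of the AWS CLI, the request is signed with SigV4A automatically. The following sketch uses a hypothetical alias (`mfzwi23gnjvgw.mrap`) and object key:

```
aws s3api get-object \
--bucket arn:aws:s3::111122223333:accesspoint/mfzwi23gnjvgw.mrap \
--key my-image.jpg my-image.jpg
```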

## Amazon S3 API operations
<a name="MultiRegionAccessPointRestrictions-API"></a>
+ `CopyObject` supports a Multi-Region Access Point ARN as the copy destination only.
+ The S3 Batch Operations feature isn't supported.

## AWS SDKs
<a name="MultiRegionAccessPointRestrictions-SDKs"></a>

Certain AWS SDKs aren't supported. To confirm which AWS SDKs are supported for Multi-Region Access Points, see [Compatibility with AWS SDKs](https://docs.aws.amazon.com/sdkref/latest/guide/feature-s3-mrap.html#s3-mrap-sdk-compat).

## Service quotas
<a name="MultiRegionAccessPointRestrictions-Quotas"></a>

Be aware of the following service quota limitations:
+ There is a maximum of 100 Multi-Region Access Points per account.
+ There is a limit of 17 Regions for a single Multi-Region Access Point.

## Creating, deleting, or modifying a Multi-Region Access Point
<a name="MultiRegionAccessPointRestrictions-Modifying"></a>

When you create, delete, or modify a Multi-Region Access Point, be aware of the following rules and restrictions:
+ After you create a Multi-Region Access Point, you can’t add, modify, or remove buckets from the Multi-Region Access Point configuration. To change the buckets, you must delete the entire Multi-Region Access Point and create a new one. If a cross-account bucket in your Multi-Region Access Point is deleted, the only way to reconnect the bucket is to recreate it in that account with the same name and in the same Region.
+ Underlying buckets (in the same account) that are used in a Multi-Region Access Point can be deleted only after the Multi-Region Access Point is deleted.

## Region support
<a name="MultiRegionAccessPointRestrictions-RegionSupport"></a>

**Control plane requests**

All control plane requests to create or maintain Multi-Region Access Points must be routed to the `US West (Oregon)` Region. For Multi-Region Access Point data plane requests, Regions don't need to be specified. 

For the Multi-Region Access Point failover control plane, requests must be routed to one of these five supported Regions:
+ `US East (N. Virginia)`
+ `US West (Oregon)`
+ `Asia Pacific (Sydney)`
+ `Asia Pacific (Tokyo)`
+ `Europe (Ireland)`

**Regions enabled by default**

Your Multi-Region Access Point supports buckets in the following default AWS Regions (which are [enabled by default](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-regions.html) in your AWS account):
+ `US East (N. Virginia)`
+ `US East (Ohio)`
+ `US West (N. California)`
+ `US West (Oregon)`
+ `Asia Pacific (Mumbai)`
+ `Asia Pacific (Osaka)`
+ `Asia Pacific (Seoul)`
+ `Asia Pacific (Singapore)`
+ `Asia Pacific (Sydney)`
+ `Asia Pacific (Tokyo)`
+ `Canada (Central)`
+ `Europe (Frankfurt)`
+ `Europe (Ireland)`
+ `Europe (London)`
+ `Europe (Paris)`
+ `Europe (Stockholm)`
+ `South America (São Paulo)`

**AWS opt-in Regions**

Your Multi-Region Access Point also supports buckets in the following opt-in AWS Regions (which are [disabled by default](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-regions.html) in your AWS account):
+ `Africa (Cape Town)`
+ `Asia Pacific (Hong Kong)`
+ `Asia Pacific (Jakarta)`
+ `Asia Pacific (Melbourne)`
+ `Asia Pacific (Hyderabad)`
+ `Canada West (Calgary)`
+ `Europe (Zurich)`
+ `Europe (Milan)`
+ `Europe (Spain)`
+ `Israel (Tel Aviv)`
+ `Middle East (Bahrain)`
+ `Middle East (UAE)`

**Note**  
There are no additional costs for enabling an opt-in Region. However, creating or using a resource in a Multi-Region Access Point results in billing charges.

An opt-in Region must be manually enabled when configuring or creating your Multi-Region Access Point. For more information about opt-in Region behaviors for Multi-Region Access Points, see [Configuring Multi-Region Access Point opt-in Regions](ConfiguringMrapOptInRegions.md). For information about how to enable an opt-in Region in your AWS account, see [Enable or disable a Region for standalone accounts](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-regions.html#manage-acct-regions-enable-standalone) in the *AWS Account Management Reference Guide*.
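For example, the following AWS CLI sketch enables the Africa (Cape Town) opt-in Region for a standalone account; the Region code shown is illustrative:

```
aws account enable-region --region-name af-south-1
```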

# Multi-Region Access Point request routing
<a name="MultiRegionAccessPointRequestRouting"></a>

 When you make a request through a Multi-Region Access Point, Amazon S3 determines which of the buckets that are associated with the Multi-Region Access Point is closest to you. Amazon S3 then directs the request to that bucket, regardless of the AWS Region it is located in. 

After the Multi-Region Access Point routes the request to the closest-proximity bucket, Amazon S3 processes the request as if you made it directly to that bucket. Multi-Region Access Points aren't aware of the data contents of an Amazon S3 bucket. Therefore, the bucket that gets the request might not contain the requested data. To create consistent datasets in the Amazon S3 buckets that are associated with a Multi-Region Access Point, you can configure S3 Cross-Region Replication (CRR). Then any bucket can fulfill the request successfully. 

 Amazon S3 directs Multi-Region Access Point requests according to the following rules: 
+ Amazon S3 optimizes requests to be fulfilled according to proximity. It looks at the buckets supported by the Multi-Region Access Point and relays the request to the bucket that has the closest proximity. 
+ If the request is for an existing resource (for example, a `GetObject` request), Amazon S3 does *not* consider the name of the object when fulfilling the request. This means that even if an object exists in one bucket in the Multi-Region Access Point, your request can be routed to a bucket that doesn't contain the object. In that case, a 404 (Not Found) error is returned to the client. 

  To avoid 404 errors, we recommend that you configure S3 Cross-Region Replication (CRR) for your buckets. Replication helps resolve the potential issue when the object that you want is in a bucket in the Multi-Region Access Point, but it's not located in the specific bucket that your request was routed to. For more information about configuring replication, see [Configuring replication for use with Multi-Region Access Points](MultiRegionAccessPointBucketReplication.md). 

  To ensure that your requests are fulfilled by using the specific objects that you want, we also recommend that you turn on bucket versioning and include version IDs in your requests, as shown in the sketch after this list. This approach helps ensure that you get the correct version of the object that you're looking for. Versioning-enabled buckets can also help you recover objects from accidental overwrites. For more information, see [Using S3 Versioning in S3 buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html).
+ If the request is to create a resource (for example, `PutObject` or `CreateMultipartUpload`), Amazon S3 fulfills the request by using the closest-proximity bucket. For example, consider a video company that wants to support video uploads from anywhere in the world. When a user makes a `PUT` request to the Multi-Region Access Point, the object is put into the bucket with the closest proximity. To then make that uploaded video available to others around the world for download with the lowest latency, you can use CRR with bidirectional (two-way) replication. Using CRR with two-way replication keeps the contents of all the buckets that are associated with the Multi-Region Access Point synchronized. For more information about using replication with Multi-Region Access Points, see [Configuring replication for use with Multi-Region Access Points](MultiRegionAccessPointBucketReplication.md).
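For example, the following AWS CLI sketch retrieves a specific version of an object through a Multi-Region Access Point; the account ID, alias, key, and version ID are placeholders:

```
aws s3api get-object \
--bucket arn:aws:s3::111122223333:accesspoint/mfzwi23gnjvgw.mrap \
--key my-image.jpg \
--version-id 3sL4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nr8X8gdRQBpUMLUo \
my-image.jpg
```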

# Amazon S3 Multi-Region Access Points failover controls
<a name="MrapFailover"></a>

With Amazon S3 Multi-Region Access Point failover controls, you can maintain business continuity during Regional traffic disruptions, while also giving your applications a multi-Region architecture to fulfill compliance and redundancy needs. If your Regional traffic gets disrupted, you can use Multi-Region Access Point failover controls to select which AWS Regions behind an Amazon S3 Multi-Region Access Point will process data-access and storage requests. 

To support failover, you can set up your Multi-Region Access Point in an active-passive configuration, with traffic flowing to the active Region during normal conditions, and a passive Region on standby for failover. 

For example, to perform failover to an AWS Region of your choice, you shift traffic from your primary (active) Region to your secondary (passive) Region. In an active-passive configuration like this, one bucket is active and accepting traffic, while the other bucket is passive and not accepting traffic. The passive bucket is used for disaster recovery. When you initiate failover, all traffic (such as `GET` or `PUT` requests) is directed to the bucket in the active state (in one Region) and away from the bucket in the passive state (in another Region).

If you have S3 Cross-Region Replication (CRR) enabled with two-way replication rules, you can keep your buckets synchronized during a failover. In addition, if you have CRR enabled in an active-active configuration, Amazon S3 Multi-Region Access Points can also fetch data from the bucket location of closest proximity, which improves application performance. 

## AWS Region support
<a name="RegionSupport"></a>

With Amazon S3 Multi-Region Access Points failover controls, your S3 buckets can be in any of the [17 Regions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiRegionAccessPointRestrictions.html) where Multi-Region Access Points are supported. You can initiate failover across any two Regions at one time.

**Note**  
Although failover is initiated between only two Regions at one time, you can separately update the routing statuses for multiple Regions at the same time in your Multi-Region Access Point.

The following topics demonstrate how to use and manage Amazon S3 Multi-Region Access Point failover controls.

**Topics**
+ [AWS Region support](#RegionSupport)
+ [Amazon S3 Multi-Region Access Points routing states](FailoverConfiguration.md)
+ [Using Amazon S3 Multi-Region Access Point failover controls](UsingFailover.md)
+ [Amazon S3 Multi-Region Access Point failover controls errors](mrap-failover-errors.md)

# Amazon S3 Multi-Region Access Points routing states
<a name="FailoverConfiguration"></a>

Your Amazon S3 Multi-Region Access Points failover configuration determines the routing status of the AWS Regions that are used with the Multi-Region Access Point. You can configure your Amazon S3 Multi-Region Access Point to be in an active-active state or active-passive state.
+ **Active-active** – In an active-active configuration, all requests are automatically sent to the closest-proximity AWS Region in your Multi-Region Access Point. After the Multi-Region Access Point has been configured to be in an active-active state, all Regions can receive traffic. If traffic disruption occurs in an active-active configuration, network traffic is automatically redirected to one of the active Regions.
+ **Active-passive** – In an active-passive configuration, the active Regions in your Multi-Region Access Point receive traffic and the passive ones don't. If you intend to use S3 failover controls to initiate failover in a disaster situation, set up your Multi-Region Access Points in an active-passive configuration while you're testing and performing disaster-recovery planning. A sample routing configuration for an active-passive state is shown after this list.
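In the routing configuration, an active Region corresponds to a `TrafficDialPercentage` value of `100`, and a passive Region corresponds to a value of `0`. The following is a sketch of a `get-multi-region-access-point-routes` response for an active-passive configuration; the account ID, alias, bucket names, and Regions are placeholders:

```
{
    "Mrap": "arn:aws:s3::123456789012:accesspoint/mfzwi23gnjvgw.mrap",
    "Routes": [
        {
            "Bucket": "amzn-s3-demo-bucket1",
            "Region": "us-west-2",
            "TrafficDialPercentage": 100
        },
        {
            "Bucket": "amzn-s3-demo-bucket2",
            "Region": "ap-southeast-2",
            "TrafficDialPercentage": 0
        }
    ]
}
```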

# Using Amazon S3 Multi-Region Access Point failover controls
<a name="UsingFailover"></a>

This section explains how to manage and use your Amazon S3 Multi-Region Access Points failover controls by using the AWS Management Console. 

There are two failover controls in the **Failover configuration** section on your Multi-Region Access Point details page in the AWS Management Console: **Edit routing status** and **Failover**. You can use these controls as follows: 
+ **Edit routing status** – You can manually edit the routing statuses of up to 17 AWS Regions in a single request for your Multi-Region Access Point by choosing **Edit routing status**. You can use **Edit routing status** for the following purposes: 
  + To set or edit the routing statuses of one or more Regions in your Multi-Region Access Point
  + To create a failover configuration for your Multi-Region Access Point by configuring two Regions to be in an active-passive state
  + To manually fail over your Regions
  + To manually switch traffic between Regions
+ **Failover** – When you initiate failover by choosing **Failover**, you are only updating the routing statuses of two Regions that are already configured to be in an active-passive state. During a failover that you initiated by choosing **Failover**, the routing statuses between the two Regions are automatically switched.

## Editing the routing status of the Regions in your Multi-Region Access Point
<a name="editing-mrap-routing-status"></a>

You can manually update the routing statuses of up to 17 AWS Regions in a single request for your Multi-Region Access Point by choosing **Edit routing status** in the **Failover configuration** section on your Multi-Region Access Point details page. However, when you initiate failover by choosing **Failover**, you are only updating the routing statuses of two Regions that are already configured to be in an active-passive state. During a failover that you initiated by choosing **Failover**, the routing statuses between the two Regions are automatically switched.

You can use **Edit routing status** (as described in the following procedure) for the following purposes:
+ To set or edit the routing statuses of one or more Regions in your Multi-Region Access Point
+ To create a failover configuration for your Multi-Region Access Point by configuring two Regions to be in an active-passive state
+ To manually fail over your Regions
+ To manually switch traffic between Regions

### Using the S3 console
<a name="update-mrap-routing-console"></a>

**To update the routing status of the Regions in your Multi-Region Access Point**



1. Sign in to the AWS Management Console.

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Multi-Region Access Points**.

1. Choose the Multi-Region Access Point that you want to update.

1. Choose the **Replication and failover** tab.

1. Select one or more Regions that you want to edit the routing status of.
**Note**  
To initiate failover, at least one AWS Region must be designated as **Active** and one Region must be designated as **Passive** in your Multi-Region Access Point.

1. Choose **Edit routing status**.

1. In the dialog box that appears, select **Active** or **Passive** for the **Routing status** for each Region.

   An active state allows traffic to be routed to the Region. A passive state stops any traffic from being directed to the Region.

   If you are creating a failover configuration for your Multi-Region Access Point or initiating failover, at least one AWS Region must be designated as **Active** and one Region must be designated as **Passive** in your Multi-Region Access Point.

1. Choose **Save routing status**. It takes about 2 minutes for traffic to be redirected.

After you submit the routing status of the AWS Regions for your Multi-Region Access Point, you can verify your routing status changes. To verify these changes, go to Amazon CloudWatch at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/) to monitor the shift of your Amazon S3 data-request traffic (for example, `GET` and `PUT` requests) between active and passive Regions. Any existing connections will not be terminated during failover. Existing connections will continue until they reach a success or failure status.

### Using the AWS CLI
<a name="update-mrap-routing-cli"></a>

**Note**  
You can run Multi-Region Access Point AWS CLI routing commands against any of these five Regions:  
`ap-southeast-2`
`ap-northeast-1`
`us-east-1`
`us-west-2`
`eu-west-1`

The following example command updates your current Multi-Region Access Point route configuration. To update the active or passive status of a bucket, set the `TrafficDialPercentage` value to `100` for active and to `0` for passive. In this example, `amzn-s3-demo-bucket1` is set to active, and `amzn-s3-demo-bucket2` is set to passive. To use this example command, replace the `user input placeholders` with your own information. 

```
aws s3control submit-multi-region-access-point-routes \
--region ap-southeast-2 \
--account-id 123456789012 \
--mrap MultiRegionAccessPoint_ARN \
--route-updates Bucket=amzn-s3-demo-bucket1,TrafficDialPercentage=100 \
                Bucket=amzn-s3-demo-bucket2,TrafficDialPercentage=0
```

The following example command gets your updated Multi-Region Access Point routing configuration. To use this example command, replace the `user input placeholders` with your own information.

```
aws s3control get-multi-region-access-point-routes \
--region eu-west-1 \
--account-id 123456789012 \
--mrap MultiRegionAccessPoint_ARN
```

## Initiating failover
<a name="InitiatingFailover"></a>

When you initiate failover by choosing **Failover** in the **Failover configuration** section on your Multi-Region Access Point details page, Amazon S3 request traffic automatically gets shifted to an alternate AWS Region. The failover process is completed within 2 minutes. 

You can initiate a failover across any two AWS Regions at one time (of the [17 Regions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiRegionAccessPointRestrictions.html) where Multi-Region Access Points are supported). Failover events are then logged in AWS CloudTrail. Upon failover completion, you can monitor Amazon S3 traffic and any traffic routing updates to the new active Region in Amazon CloudWatch.

**Important**  
To keep all metadata and objects in sync across buckets during data replication, we recommend that you create two-way replication rules and enable replica modification sync before configuring your failover controls.   
Two-way replication rules help ensure that when data is written to the Amazon S3 bucket that traffic fails over to, that data is then replicated back to the source bucket. Replica modification sync helps ensure that object metadata is also synchronized between buckets during two-way replication.   
For more information about configuring replication to support failover, see [Configuring replication for use with Multi-Region Access Points](MultiRegionAccessPointBucketReplication.md).

**To initiate failover between replicated buckets**

1. Sign in to the AWS Management Console.

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Multi-Region Access Points**.

1. Choose the Multi-Region Access Point that you want to use to initiate failover.

1. Choose the **Replication and failover** tab.

1. Scroll down to the **Failover configuration** section and select two AWS Regions.
**Note**  
To initiate failover, at least one AWS Region must be designated as **Active** and one Region must be designated as **Passive** in your Multi-Region Access Point. An active state allows traffic to be directed to a Region. A passive state stops any traffic from being directed to the Region.

1. Choose **Failover**.

1. In the dialog box, choose **Failover** again to initiate the failover process. During this process, the routing statuses of the two Regions are automatically switched. All new traffic is directed to the Region that becomes active, and traffic stops being directed to the Region that becomes passive. It takes about 2 minutes for traffic to be redirected.

   After you initiate the failover process, you can verify your traffic changes. To verify these changes, go to Amazon CloudWatch at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/) to monitor the shift of your Amazon S3 data-request traffic (for example, `GET` and `PUT` requests) between active and passive Regions. Any existing connections will not be terminated during failover. Existing connections will continue until they reach a success or failure status. 

## Viewing your Amazon S3 Multi-Region Access Point routing controls
<a name="viewing-mrap-routing-controls"></a>

### Using the S3 console
<a name="viewing-mrap-routing-console"></a>

**To view the routing controls for your Amazon S3 Multi-Region Access Point**



1. Sign in to the AWS Management Console.

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Multi-Region Access Points**.

1. Choose the Multi-Region Access Point that you want to review.

1. Choose the **Replication and failover** tab. This page displays the routing configuration details and summary for your Multi-Region Access Point, associated replication rules, and replication metrics. You can see the routing status of your Regions in the **Failover configuration** section.

### Using the AWS CLI
<a name="viewing-mrap-routing-cli"></a>

The following example AWS CLI command gets your current Multi-Region Access Point route configuration for the specified Region. To use this example command, replace the `user input placeholders` with your own information.

```
aws s3control get-multi-region-access-point-routes \
--region eu-west-1 \
--account-id 123456789012 \
--mrap MultiRegionAccessPoint_ARN
```

**Note**  
This command can only be executed against these five Regions:  
`ap-southeast-2`
`ap-northeast-1`
`us-east-1`
`us-west-2`
`eu-west-1`

# Amazon S3 Multi-Region Access Point failover controls errors
<a name="mrap-failover-errors"></a>

When you update the failover configuration for your Multi-Region Access Point, you might encounter one of these errors:
+ HTTP 400 Bad Request: This error can occur if you enter an invalid Multi-Region Access Point ARN while updating your failover configuration. You can confirm your Multi-Region Access Point ARN by reviewing your Multi-Region Access Point policy. To review or update your Multi-Region Access Point policy, see [Editing the Multi-Region Access Point policy](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingFailover.html#editing-mrap-policy). This error can also occur if you use an empty string or a random string while updating your Amazon S3 Multi-Region Access Point failover controls. Make sure to use the Multi-Region Access Point ARN format: 

  `arn:aws:s3::account-id:accesspoint/MultiRegionAccessPoint_alias` 
+ HTTP 503 Slow Down: This error occurs if you send too many requests in a short period of time. Requests that exceed the rate are rejected with this error.
+ HTTP 409 Conflict: This error occurs when two or more concurrent route configuration update requests are targeting a single Multi-Region Access Point. The first request succeeds, but any other requests fail with an error.
+ HTTP 405 Method Not Allowed: This error occurs when you initiate failover for a Multi-Region Access Point with only one AWS Region selected. You must select two Regions before you can initiate failover.

# Configuring replication for use with Multi-Region Access Points
<a name="MultiRegionAccessPointBucketReplication"></a>

When you make a request to a Multi-Region Access Point endpoint, Amazon S3 automatically routes the request to the bucket that is closest to you. Amazon S3 doesn't consider the contents of the request when making this decision. If you make a request to `GET` an object, your request might be routed to a bucket that doesn't have a copy of this object. If that happens, you receive an HTTP status code 404 (Not Found) error. For more information about Multi-Region Access Point request routing, see [Multi-Region Access Point request routing](MultiRegionAccessPointRequestRouting.md).

If you want the Multi-Region Access Point to be able to retrieve the object regardless of which bucket receives the request, you must configure Amazon S3 Cross-Region Replication (CRR). 

 For example, consider a Multi-Region Access Point with three buckets: 
+ A bucket named `amzn-s3-demo-bucket1` in the Region `US West (Oregon)` that contains the object `my-image.jpg` 
+ A bucket named `amzn-s3-demo-bucket2` in the Region `Asia Pacific (Mumbai)` that contains the object `my-image.jpg` 
+ A bucket named `amzn-s3-demo-bucket` in the Region `Europe (Frankfurt)` that doesn't contain the object `my-image.jpg` 

In this situation, if you make a `GetObject` request for the object `my-image.jpg`, the success of that request depends upon which bucket receives your request. Because Amazon S3 doesn't consider the contents of the request, it might route your `GetObject` request to the `amzn-s3-demo-bucket` bucket if that bucket is the closest in proximity. Even though your object is in a bucket in the Multi-Region Access Point, you will get an HTTP 404 Not Found error because the individual bucket that received your request didn't have the object. 

Enabling Cross-Region Replication (CRR) helps avoid this result. With appropriate replication rules, the `my-image.jpg` object is copied over to the `amzn-s3-demo-bucket` bucket. Therefore, if Amazon S3 routes your request to that bucket, you can now retrieve the object. 

Replication works as normal with buckets that are assigned to a Multi-Region Access Point. Amazon S3 doesn't perform any special replication handling with buckets that are in Multi-Region Access Points. For more information about configuring replication in your buckets, see [Setting up live replication overview](replication-how-setup.md).
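For example, a one-way replication rule from `amzn-s3-demo-bucket1` to `amzn-s3-demo-bucket2` can be applied with the following AWS CLI sketch. It assumes that both buckets have S3 Versioning enabled and that the IAM role `replication-role` (a placeholder) grants Amazon S3 the required replication permissions:

```
aws s3api put-bucket-replication \
--bucket amzn-s3-demo-bucket1 \
--replication-configuration '{
    "Role": "arn:aws:iam::123456789012:role/replication-role",
    "Rules": [
        {
            "ID": "mrap-one-way-rule",
            "Priority": 1,
            "Status": "Enabled",
            "Filter": {},
            "DeleteMarkerReplication": { "Status": "Enabled" },
            "Destination": { "Bucket": "arn:aws:s3:::amzn-s3-demo-bucket2" }
        }
    ]
}'
```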

**Recommendations for using replication with Multi-Region Access Points**  
For the best replication performance when working with Multi-Region Access Points, we recommend the following: 
+ Configure S3 Replication Time Control (S3 RTC). To replicate your data across different Regions within a predictable time frame, you can use S3 RTC. S3 RTC replicates 99.99 percent of new objects stored in Amazon S3 within 15 minutes (backed by a service-level agreement). For more information, see [Meeting compliance requirements with S3 Replication Time Control](replication-time-control.md). There are additional charges for S3 RTC. For information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).
+ Use two-way (bidirectional) replication to support keeping buckets synchronized when buckets are updated through the Multi-Region Access Point. For more information, see [Create two-way replication rules for your Multi-Region Access Point](mrap-create-two-way-replication-rules.md).
+ Create cross-account Multi-Region Access Points to replicate data to buckets in separate AWS accounts. This approach provides account-level separation, so that data can be accessed from, and replicated to, buckets that are owned by different accounts and that reside in Regions other than the source bucket's Region. Setting up cross-account Multi-Region Access Points comes at no additional cost. If you're a bucket owner but don't own the Multi-Region Access Point, you pay only for data transfer and request costs. Multi-Region Access Point owners pay for data routing and internet-acceleration costs. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).
+ Enable replica modification sync for each replication rule to also keep metadata changes to your objects in sync. For more information, see [Enabling replica modification sync](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-for-metadata-changes.html#enabling-replication-for-metadata-changes).
+ Enable Amazon CloudWatch metrics to [monitor replication events](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-metrics.html). CloudWatch metrics fees apply. For more information, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/).

**Topics**
+ [Create one-way replication rules for your Multi-Region Access Point](mrap-create-one-way-replication-rules.md)
+ [Create two-way replication rules for your Multi-Region Access Point](mrap-create-two-way-replication-rules.md)
+ [View the replication rules for your Multi-Region Access Point](mrap-view-replication-rules.md)

# Create one-way replication rules for your Multi-Region Access Point
<a name="mrap-create-one-way-replication-rules"></a>

Replication rules enable automatic and asynchronous copying of objects across buckets. A one-way replication rule helps ensure that data is fully replicated from a source bucket in one AWS Region to a destination bucket in another Region. When one-way replication is set up, a replication rule is created from the source bucket (for example, `amzn-s3-demo-bucket1`) to the destination bucket (for example, `amzn-s3-demo-bucket2`). As with all replication rules, you can apply the one-way replication rule to the entire Amazon S3 bucket or to a subset of objects that are filtered by a prefix or object tags.

**Important**  
We recommend using one-way replication if your users will only be consuming the objects in your destination buckets. If your users will be uploading or modifying the objects in your destination buckets, use two-way replication to keep all of your buckets in sync. We also recommend two-way replication if you plan to use your Multi-Region Access Point for failover. To set up two-way replication, see [Create two-way replication rules for your Multi-Region Access Point](mrap-create-two-way-replication-rules.md).

**To create a one-way replication rule for your Multi-Region Access Point**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Multi-Region Access Points**.

1. Choose the name of your Multi-Region Access Point.

1. Choose the **Replication and failover** tab.

1. Scroll down to the **Replication rules** section, and then choose **Create replication rules**. Make sure that you have sufficient permissions to create the replication rule, or you will encounter errors.
**Note**  
You can create replication rules only for buckets in your own account. To create replication rules for external buckets, the bucket owners must create the replication rules for those buckets.

1. On the **Create replication rules** page, choose the **Replicate objects from one or more source buckets to one or more destination buckets** template.
**Important**  
When you create replication rules by using this template, they replace any existing replication rules that are already assigned to the bucket.   
To add to or modify any existing replication rules instead of replacing them, go to each bucket's **Management** tab in the console, and then edit the rules in the **Replication rules** section. You can also add to or modify existing replication rules by using the AWS CLI, SDKs, or REST API. For more information, see [Replication configuration file elements](replication-add-config.md).

1. In the **Source and destination** section, under **Source buckets**, select one or more buckets that you want to replicate objects from. All buckets (source and destination) that are chosen for replication must have S3 Versioning enabled, and each bucket must reside in a different AWS Region. For more information about S3 Versioning, see [Using versioning in Amazon S3 buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html).

   Under **Destination buckets**, select one or more buckets that you want to replicate objects to.

1. In the **Replication rule configuration** section, choose whether the replication rule will be **Enabled** or **Disabled** when it's created.
**Note**  
You can't enter a name in the **Replication rule name** box. Replication rule names are generated based on your configuration when you create the replication rule.

1. In the **Scope** section, choose the appropriate scope for your replication.
   + To replicate the whole bucket, choose **Apply to all objects in the bucket**. 
   + To replicate a subset of the objects in the bucket, choose **Limit the scope of this rule using one or more filters**. 

     You can filter your objects by using a prefix, object tags, or a combination of both. 
     + To limit replication to all objects that have names that begin with the same string (for example `pictures`), enter a prefix in the **Prefix** box. 

       If you enter a prefix that is the name of a folder, you must use a delimiter such as a `/` (forward slash) to indicate its level of hierarchy (for example, `pictures/`). For more information about prefixes, see [Organizing objects using prefixes](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-prefixes.html).
     + To replicate all objects that have one or more object tags, choose **Add tag** and enter the key-value pair in the boxes. To add another tag, repeat the procedure. For more information about object tags, see [Categorizing your objects using tags](object-tagging.md).

1. Scroll down to the **Additional replication options** section, and select the replication options that you want to apply.
**Note**  
We recommend that you apply the following options:  
**Replication time control (RTC)** – To replicate your data across different Regions within a predictable time frame, you can use S3 Replication Time Control (S3 RTC). S3 RTC replicates 99.99 percent of new objects stored in Amazon S3 within 15 minutes (backed by a service-level agreement). For more information, see [Meeting compliance requirements with S3 Replication Time Control](replication-time-control.md).
**Replication metrics and notifications** – Enable Amazon CloudWatch metrics to monitor replication events.
**Delete marker replication** – Delete markers created by S3 delete operations will be replicated. Delete markers created by lifecycle rules are not replicated. For more information, see [Replicating delete markers between buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-marker-replication.html).
There are additional charges for S3 RTC and CloudWatch replication metrics and notifications. For more information, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/) and [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/).

1. If you're writing a new replication rule that replaces an existing one, select **I acknowledge that by choosing Create replication rules, these existing replication rules will be overwritten**.

1. Choose **Create replication rules** to create and save your new one-way replication rule.

# Create two-way replication rules for your Multi-Region Access Point
<a name="mrap-create-two-way-replication-rules"></a>

Replication rules enable automatic and asynchronous copying of objects across buckets. A two-way replication rule (also known as a bidirectional replication rule) helps ensure that data is fully synchronized between two or more buckets in different AWS Regions. When two-way replication is set up, a replication rule is created from the source bucket (for example, `amzn-s3-demo-bucket1`) to the bucket containing the replicas (for example, `amzn-s3-demo-bucket2`). Then, a second replication rule is created from the bucket containing the replicas (`amzn-s3-demo-bucket2`) back to the source bucket (`amzn-s3-demo-bucket1`).

As with all replication rules, you can apply the two-way replication rule to the entire Amazon S3 bucket or to a subset of objects filtered by a prefix or object tags. You can also keep metadata changes to your objects in sync by [enabling replica modification sync](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-for-metadata-changes.html#enabling-replication-for-metadata-changes) for each replication rule. You can enable replica modification sync through the Amazon S3 console, the AWS CLI, the AWS SDKs, the Amazon S3 REST API, or AWS CloudFormation.
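For example, in a replication rule of the kind applied with `put-bucket-replication`, replica modification sync is turned on through the rule's `SourceSelectionCriteria` element. The following fragment is a sketch of that element:

```
"SourceSelectionCriteria": {
    "ReplicaModifications": {
        "Status": "Enabled"
    }
}
```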

To monitor the replication progress of objects and object metadata in Amazon CloudWatch, enable S3 Replication metrics and notifications. For more information, see [Monitoring progress with replication metrics and Amazon S3 event notifications](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-metrics.html).

**To create a two-way replication rule for your Multi-Region Access Point**



1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Multi-Region Access Points**.

1. Choose the name of the Multi-Region Access Point that you want to update.

1. Choose the **Replication and failover** tab.

1. Scroll down to the **Replication rules** section, and then choose **Create replication rules**.

1. On the **Create replication rules** page, choose the **Replicate objects among all specified buckets** template. The **Replicate objects among all specified buckets** template sets up two-way replication (with failover capabilities) for your buckets.
**Important**  
When you create replication rules by using this template, they replace any existing replication rules that are already assigned to the bucket.   
To add to or modify any existing replication rules instead of replacing them, go to each bucket's **Management** tab in the console, and then edit the rules in the **Replication rules** section. You can also add to or modify existing replication rules by using the AWS CLI, AWS SDKs, or Amazon S3 REST API. For more information, see [Replication configuration file elements](replication-add-config.md).

1. In the **Buckets** section, select at least two buckets that you want to replicate objects from. All buckets chosen for replication must have S3 Versioning enabled, and each bucket must reside in a different AWS Region. For more information about S3 Versioning, see [Using versioning in Amazon S3 buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html).
**Note**  
Make sure that you have the required read and replicate permissions to establish replication, or you will encounter errors. For more information, see [Creating an IAM role](https://docs.aws.amazon.com/AmazonS3/latest/userguide/setting-repl-config-perm-overview.html).

1. In the **Replication rule configuration** section, choose whether the replication rule will be **Enabled** or **Disabled** when it's created.
**Note**  
You can't enter a name in the **Replication rule name** box. Replication rule names are generated based on your configuration when you create the replication rule.

1. In the **Scope** section, choose the appropriate scope for your replication.
   + To replicate the whole bucket, choose **Apply to all objects in the bucket**. 
   + To replicate a subset of the objects in the bucket, choose **Limit the scope of this rule using one or more filters**. 

     You can filter your objects by using a prefix, object tags, or a combination of both. 
     + To limit replication to all objects that have names that begin with the same string (for example `pictures`), enter a prefix in the **Prefix** box. 

       If you enter a prefix that is the name of a folder, you must use a `/` (forward slash) as the last character (for example, `pictures/`).
     + To replicate all objects that have one or more object tags, choose **Add tag** and enter the key-value pair in the boxes. To add another tag, repeat the procedure. For more information about object tags, see [Categorizing your objects using tags](object-tagging.md).

1. Scroll down to the **Additional replication options** section, and select the replication options that you want to apply.
**Note**  
We recommend that you apply the following options, especially if you intend to configure your Multi-Region Access Point to support failover:  
**Replication time control (RTC)** – To replicate your data across different Regions within a predictable time frame, you can use S3 Replication Time Control (S3 RTC). S3 RTC replicates 99.99 percent of new objects stored in Amazon S3 within 15 minutes (backed by a service-level agreement). For more information, see [Meeting compliance requirements with S3 Replication Time Control](replication-time-control.md).
**Replication metrics and notifications** – Enable Amazon CloudWatch metrics to monitor replication events.
**Delete marker replication** – Delete markers created by S3 delete operations will be replicated. Delete markers created by lifecycle rules are not replicated. For more information, see [Replicating delete markers between buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-marker-replication.html).
**Replica modification sync** – Enable replica modification sync for each replication rule to also keep metadata changes to your objects in sync. For more information, see [Enabling replica modification sync](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-for-metadata-changes.html#enabling-replication-for-metadata-changes).
There are additional charges for S3 RTC and CloudWatch replication metrics and notifications. For more information, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/) and [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/).

1. If you're writing a new replication rule that replaces an existing one, select **I acknowledge that by choosing Create replication rules, these existing replication rules will be overwritten**.

1. Choose **Create replication rules** to create and save your new two-way replication rules. 

# View the replication rules for your Multi-Region Access Point
<a name="mrap-view-replication-rules"></a>

With Multi-Region Access Points, you can either set up one-way replication rules or two-way (bidirectional) replication rules. For information about how to manage your replication rules, see [Managing replication rules by using the Amazon S3 console](https://docs.aws.amazon.com/AmazonS3/latest/userguide/disable-replication.html).

**To view the replication rules for your Multi-Region Access Point**



1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Multi-Region Access Points**.

1. Choose the name of your Multi-Region Access Point.

1. Choose the **Replication and failover** tab.

1. Scroll down to the **Replication rules** section. This section lists all of the replication rules that have been created for your Multi-Region Access Point.
**Note**  
If you’ve added a bucket from another account to this Multi-Region Access Point, you must have the `s3:GetBucketReplication` permission from the bucket owner to view the replication rules for that bucket.

# Using Multi-Region Access Points with supported API operations
<a name="MrapOperations"></a>

 Amazon S3 provides a set of operations to manage Multi-Region Access Points. Amazon S3 processes some of these operations synchronously and some asynchronously. When you invoke an asynchronous operation, Amazon S3 first synchronously authorizes the requested operation. If authorization is successful, Amazon S3 returns a token that you can use to track the progress and results of the requested operation. 

**Note**  
Requests that are made through the Amazon S3 console are always synchronous. The console waits until the request is completed before allowing you to submit another request. 

You can view the current status and results of asynchronous operations by using the console, or you can use `DescribeMultiRegionAccessPointOperation` in the AWS CLI, AWS SDKs, or REST API. Amazon S3 provides a tracking token in the response to an asynchronous operation. You include that tracking token as an argument to `DescribeMultiRegionAccessPointOperation`. When you include the tracking token, Amazon S3 then returns the current status and results of the specified operation, including any errors or relevant resource information. Amazon S3 performs `DescribeMultiRegionAccessPointOperation` operations synchronously. 
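For example, creating a Multi-Region Access Point is an asynchronous operation. The following AWS CLI sketch submits a create request and then uses the returned request token to track its progress; the names, account ID, and `RequestTokenARN` value are placeholders:

```
aws s3control create-multi-region-access-point \
--region us-west-2 \
--account-id 111122223333 \
--details '{ "Name": "example-multi-region-access-point", "Regions": [ { "Bucket": "amzn-s3-demo-bucket1" }, { "Bucket": "amzn-s3-demo-bucket2" } ] }'

aws s3control describe-multi-region-access-point-operation \
--region us-west-2 \
--account-id 111122223333 \
--request-token-arn RequestTokenARN
```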

All control plane requests to create or maintain Multi-Region Access Points must be routed to the `US West (Oregon)` Region. For Multi-Region Access Point data plane requests, Regions don't need to be specified. For the Multi-Region Access Point failover control plane, the request must be routed to one of the five supported Regions. For more information about Multi-Region Access Point supported Regions, see [Multi-Region Access Point restrictions and limitations](MultiRegionAccessPointRestrictions.md).

In addition, you must grant the `s3:ListAllMyBuckets` permission to the user, role, or other AWS Identity and Access Management (IAM) entity that makes a request to manage a Multi-Region Access Point. 

The following examples demonstrate how to use Multi-Region Access Points with compatible operations in Amazon S3.

**Topics**
+ [Multi-Region Access Point compatibility with AWS services and AWS SDKs](#mrap-api-support)
+ [Multi-Region Access Point compatibility with S3 operations](#mrap-operations-support)
+ [View your Multi-Region Access Point routing configuration](#query-mrap-routing-configuration)
+ [Update your underlying Amazon S3 bucket policy](#update-underlying-bucket-policy)
+ [Update a Multi-Region Access Point route configuration](#update-mrap-route-configuration)
+ [Add an object to a bucket in your Multi-Region Access Point](#add-bucket-mrap)
+ [Retrieve objects from your Multi-Region Access Point](#get-object-mrap)
+ [List objects that are stored in a bucket underlying your Multi-Region Access Point](#list-objects-mrap)
+ [Use a presigned URL with Multi-Region Access Points](#use-presigned-url-mrap)
+ [Use a bucket that's configured with Requester Pays with Multi-Region Access Points](#use-requester-pays-mrap)

## Multi-Region Access Point compatibility with AWS services and AWS SDKs
<a name="mrap-api-support"></a>

To use a Multi-Region Access Point with applications that require an Amazon S3 bucket name, use the Amazon Resource Name (ARN) of the Multi-Region Access Point when making requests by using an AWS SDK. To check which AWS SDKs are compatible with Multi-Region Access Points, see [Compatibility with AWS SDKs](https://docs.aws.amazon.com/sdkref/latest/guide/feature-s3-mrap.html#s3-mrap-sdk-compat).

## Multi-Region Access Point compatibility with S3 operations
<a name="mrap-operations-support"></a>

You can use the following Amazon S3 data plane API operations to perform actions on objects in buckets that are associated with your Multi-Region Access Point. These S3 operations can accept Multi-Region Access Point ARNs:
+ [`AbortMultipartUpload`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html)
+ [`CompleteMultipartUpload`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html)
+ [`CreateMultipartUpload`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html)
+ [`DeleteObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html)
+ [`DeleteObjectTagging`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjectTagging.html)
+ [`GetObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html)
+ [`GetObjectAcl`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAcl.html)
+ [`GetObjectLegalHold`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLegalHold.html)
+ [`GetObjectRetention`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectRetention.html)
+ [`GetObjectTagging`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectTagging.html)
+ [`HeadObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html)
+ [`ListMultipartUploads`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html)
+ [`ListObjectsV2`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html)
+ [`ListParts`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html)
+ [`PutObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html)
+ [`PutObjectAcl`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html)
+ [`PutObjectLegalHold`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLegalHold.html)
+ [`PutObjectRetention`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectRetention.html)
+ [`PutObjectTagging`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html)
+ [`RestoreObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html)
+ [`UploadPart`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html)

**Note**  
Multi-Region Access Points support copy operations only as the destination (not the source) when you use the Multi-Region Access Point ARN.
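
For example, the following SDK for Python (Boto3) sketch copies an object from a Regional bucket into a Multi-Region Access Point by using the Multi-Region Access Point ARN as the copy destination. This is a minimal sketch: the source bucket, key, and ARN are placeholders, and it assumes that your SDK version supports Multi-Region Access Point ARNs with SigV4A.

```
import boto3

s3 = boto3.client('s3')

# The Multi-Region Access Point ARN can be used only as the copy
# destination, not as the copy source.
s3.copy_object(
    Bucket='arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap',
    Key='example.txt',
    CopySource={'Bucket': 'amzn-s3-demo-bucket1', 'Key': 'example.txt'},
)
```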

You can use the following Amazon S3 control plane operations to create and manage your Multi-Region Access Points:
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateMultiRegionAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateMultiRegionAccessPoint.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DescribeMultiRegionAccessPointOperation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DescribeMultiRegionAccessPointOperation.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPoint.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPointPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPointPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPointPolicyStatus.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPointPolicyStatus.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPointRoutes.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetMultiRegionAccessPointRoutes.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListMultiRegionAccessPoints.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListMultiRegionAccessPoints.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutMultiRegionAccessPointPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutMultiRegionAccessPointPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_SubmitMultiRegionAccessPointRoutes.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_SubmitMultiRegionAccessPointRoutes.html)

## View your Multi-Region Access Point routing configuration
<a name="query-mrap-routing-configuration"></a>

------
#### [ AWS CLI ]

The following example command retrieves your Multi-Region Access Point route configuration so that you can see the current routing statuses for your buckets. To use this example command, replace the `user input placeholders` with your own information.

```
aws s3control get-multi-region-access-point-routes \
--region eu-west-1 \
--account-id 111122223333 \
--mrap arn:aws:s3::111122223333:accesspoint/abcdef0123456.mrap
```

------
#### [ SDK for Java ]

The following SDK for Java code retrieves your Multi-Region Access Point route configuration so that you can see the current routing statuses for your buckets. To use this example syntax, replace the `user input placeholders` with your own information.

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.GetMultiRegionAccessPointRoutesRequest;
import software.amazon.awssdk.services.s3control.model.GetMultiRegionAccessPointRoutesResponse;

S3ControlClient s3ControlClient = S3ControlClient.builder()
    .region(Region.US_EAST_1)
    .credentialsProvider(credentialsProvider)
    .build();
 
GetMultiRegionAccessPointRoutesRequest request = GetMultiRegionAccessPointRoutesRequest.builder()
    .accountId("111122223333")
    .mrap("arn:aws:s3::111122223333:accesspoint/abcdef0123456.mrap")
    .build();
 
GetMultiRegionAccessPointRoutesResponse response = s3ControlClient.getMultiRegionAccessPointRoutes(request);
```

------
#### [ SDK for JavaScript ]

The following SDK for JavaScript code retrieves your Multi-Region Access Point route configuration so that you can see the current routing statuses for your buckets. To use this example syntax, replace the `user input placeholders` with your own information.

```
import { S3ControlClient, GetMultiRegionAccessPointRoutesCommand } from '@aws-sdk/client-s3-control'

const REGION = 'us-east-1'
 
const s3ControlClient = new S3ControlClient({
  region: REGION
})
 
export const run = async () => {
  try {
    const data = await s3ControlClient.send(
      new GetMultiRegionAccessPointRoutesCommand({
        AccountId: '111122223333',
        Mrap: 'arn:aws:s3::111122223333:accesspoint/abcdef0123456.mrap',
      })
    )
    console.log('Success', data)
    return data
  } catch (err) {
    console.log('Error', err)
  }
}
 
run()
```

------
#### [ SDK for Python ]

The following SDK for Python code retrieves your Multi-Region Access Point route configuration so that you can see the current routing statuses for your buckets. To use this example syntax, replace the `user input placeholders` with your own information.

```
import boto3

s3control = boto3.client('s3control')
routes = s3control.get_multi_region_access_point_routes(
    AccountId='111122223333',
    Mrap='arn:aws:s3::111122223333:accesspoint/abcdef0123456.mrap')['Routes']
```

------

## Update your underlying Amazon S3 bucket policy
<a name="update-underlying-bucket-policy"></a>

To grant proper access, you must also update the underlying Amazon S3 bucket policy. The following examples delegate access control to the Multi-Region Access Point policy. After you delegate access control to the Multi-Region Access Point policy, the bucket policy is no longer used for access control when requests are made through the Multi-Region Access Point.

Here's an example bucket policy that delegates access control to the Multi-Region Access Point policy. To use this example bucket policy, replace the `user input placeholders` with your own information. To apply this policy through the AWS CLI `put-bucket-policy` command, as shown in the next example, save the policy in a file, for example, `policy.json`.

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { 
        "AWS": "arn:aws:iam::444455556666:root" 
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket",
        "arn:aws:s3:::amzn-s3-demo-bucket/*"
      ],
      "Condition": {
        "StringEquals": {
          "s3:DataAccessPointAccount": "444455556666"
        }
      }
    }
  ]
}
```

------

The following `put-bucket-policy` example command associates the updated S3 bucket policy with your S3 bucket:

```
aws s3api put-bucket-policy \
  --bucket amzn-s3-demo-bucket \
  --policy file:///tmp/policy.json
```

## Update a Multi-Region Access Point route configuration
<a name="update-mrap-route-configuration"></a>

The following example command updates the Multi-Region Access Point route configuration. Multi-Region Access Point route commands can be run against the following five Regions:
+ `ap-southeast-2`
+ `ap-northeast-1`
+ `us-east-1`
+ `us-west-2`
+ `eu-west-1`

In a Multi-Region Access Point routing configuration, you can set buckets to an active or passive routing status. Active buckets receive traffic, whereas passive buckets do not. You can set a bucket's routing status by setting the `TrafficDialPercentage` value for the bucket to `100` for active or `0` for passive. 

------
#### [ AWS CLI ]

The following example command updates your Multi-Region Access Point routing configuration. In this example, `amzn-s3-demo-bucket1` is set to active status and `amzn-s3-demo-bucket2` is set to passive. To use this example command, replace the `user input placeholders` with your own information.

```
aws s3control submit-multi-region-access-point-routes \
--region ap-southeast-2 \
--account-id 111122223333 \
--mrap arn:aws:s3::111122223333:accesspoint/abcdef0123456.mrap \
--route-updates Bucket=amzn-s3-demo-bucket1,TrafficDialPercentage=100 \
                Bucket=amzn-s3-demo-bucket2,TrafficDialPercentage=0
```

------
#### [ SDK for Java ]

The following SDK for Java code updates your Multi-Region Access Point routing configuration. To use this example syntax, replace the `user input placeholders` with your own information.

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.MultiRegionAccessPointRoute;
import software.amazon.awssdk.services.s3control.model.SubmitMultiRegionAccessPointRoutesRequest;
import software.amazon.awssdk.services.s3control.model.SubmitMultiRegionAccessPointRoutesResponse;

S3ControlClient s3ControlClient = S3ControlClient.builder()
    .region(Region.AP_SOUTHEAST_2)
    .credentialsProvider(credentialsProvider)
    .build();
 
SubmitMultiRegionAccessPointRoutesRequest request = SubmitMultiRegionAccessPointRoutesRequest.builder()
    .accountId("111122223333")
    .mrap("arn:aws:s3::111122223333:accesspoint/abcdef0123456.mrap")
    .routeUpdates(
        MultiRegionAccessPointRoute.builder()
            .region("eu-west-1")
            .trafficDialPercentage(100)
            .build(),
        MultiRegionAccessPointRoute.builder()
            .region("ca-central-1")
            .bucket("amzn-s3-demo-bucket2")
            .trafficDialPercentage(0)
            .build()
    )
    .build();
 
SubmitMultiRegionAccessPointRoutesResponse response = s3ControlClient.submitMultiRegionAccessPointRoutes(request);
```

------
#### [ SDK for JavaScript ]

The following SDK for JavaScript code updates your Multi-Region Access Point route configuration. To use this example syntax, replace the `user input placeholders` with your own information.

```
import { S3ControlClient, SubmitMultiRegionAccessPointRoutesCommand } from '@aws-sdk/client-s3-control'

const REGION = 'ap-southeast-2'
 
const s3ControlClient = new S3ControlClient({
  region: REGION
})
 
export const run = async () => {
  try {
    const data = await s3ControlClient.send(
      new SubmitMultiRegionAccessPointRoutesCommand({
        AccountId: '111122223333',
        Mrap: 'arn:aws:s3::111122223333:accesspoint/abcdef0123456.mrap',
        RouteUpdates: [
          {
            Region: 'eu-west-1',
            TrafficDialPercentage: 100,
          },
          {
            Region: 'ca-central-1',
            Bucket: 'amzn-s3-demo-bucket1',
            TrafficDialPercentage: 0,
          },
        ],
      })
    )
    console.log('Success', data)
    return data
  } catch (err) {
    console.log('Error', err)
  }
}
 
run()
```

------
#### [ SDK for Python ]

The following SDK for Python code updates your Multi-Region Access Point route configuration. To use this example syntax, replace the `user input placeholders` with your own information.

```
import boto3

s3control = boto3.client('s3control')
s3control.submit_multi_region_access_point_routes(
    AccountId='111122223333',
    Mrap='arn:aws:s3::111122223333:accesspoint/abcdef0123456.mrap',
    RouteUpdates=[{
        'Bucket': 'amzn-s3-demo-bucket',
        'Region': 'ap-southeast-2',
        'TrafficDialPercentage': 100
    }])
```

------

## Add an object to a bucket in your Multi-Region Access Point
<a name="add-bucket-mrap"></a>

To add an object to the bucket that's associated with the Multi-Region Access Point, you can use the [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html) operation. To keep all buckets in the Multi-Region Access Point in sync, enable [Cross-Region Replication](MultiRegionAccessPointBucketReplication.md).

**Note**  
To use this operation, you must have the `s3:PutObject` permission for the Multi-Region Access Point. For more information about Multi-Region Access Point permission requirements, see [Permissions](MultiRegionAccessPointPermissions.md).

------
#### [ AWS CLI ]

The following example data plane request uploads *`example.txt`* to the specified Multi-Region Access Point. To use this example, replace the *`user input placeholders`* with your own information.

```
aws s3api put-object --bucket arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap --key example.txt --body example.txt
```

------
#### [ SDK for Java ]

```
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

S3Client s3Client = S3Client.builder()
        .build();

PutObjectRequest objectRequest = PutObjectRequest.builder()
        .bucket("arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap")
        .key("example.txt")
        .build();

s3Client.putObject(objectRequest, RequestBody.fromString("Hello S3!"));
```

------
#### [ SDK for JavaScript ]

```
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({});

async function putObjectExample() {
    const command = new PutObjectCommand({
        Bucket: "arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap",
        Key: "example.txt",
        Body: "Hello S3!",
    });
    
    try {
        const response = await client.send(command);
        console.log(response);
    } catch (err) {
        console.error(err);
    }
}
```

------
#### [ SDK for Python ]

```
import boto3

client = boto3.client('s3')
client.put_object(
    Bucket='arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap',
    Key='example.txt',
    Body='Hello S3!'
)
```

------

## Retrieve objects from your Multi-Region Access Point
<a name="get-object-mrap"></a>

To retrieve objects from the Multi-Region Access Point, you can use the [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html) operation.

**Note**  
To use this API operation, you must have the `s3:GetObject` permission for the Multi-Region Access Point. For more information about Multi-Region Access Point permissions requirements, see [Permissions](MultiRegionAccessPointPermissions.md).

------
#### [ AWS CLI ]

The following example data plane request retrieves *`example.txt`* from the specified Multi-Region Access Point and downloads it as *`downloaded_example.txt`*. To use this example, replace the *`user input placeholders`* with your own information.

```
aws s3api get-object --bucket arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap --key example.txt downloaded_example.txt
```

------
#### [ SDK for Java ]

```
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

S3Client s3Client = S3Client.builder()
    .build();

GetObjectRequest getObjectRequest = GetObjectRequest.builder()
    .bucket("arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap")
    .key("example.txt")
    .build();

s3Client.getObject(getObjectRequest);
```

------
#### [ SDK for JavaScript ]

```
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3"

const client = new S3Client({})

async function getObjectExample() {
    const command = new GetObjectCommand({
        Bucket: "arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap",
        Key: "example.txt"
    });
    
    try {
        const response = await client.send(command);
        console.log(response);
    } catch (err) {
        console.error(err);
    }
}
```

------
#### [ SDK for Python ]

```
import boto3

client = boto3.client('s3')
client.get_object(
    Bucket='arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap',
    Key='example.txt'
)
```

------

## List objects that are stored in a bucket underlying your Multi-Region Access Point
<a name="list-objects-mrap"></a>

To return a list of objects that are stored in a bucket underlying your Multi-Region Access Point, use the [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html) operation. In the following example command, all objects for the specified Multi-Region Access Point are listed by using the ARN for the Multi-Region Access Point. In this case, the Multi-Region Access Point ARN is:

`arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap`

**Note**  
To use this API operation, you must have the `s3:ListBucket` permission for the Multi-Region Access Point and the underlying bucket. For more information about Multi-Region Access Point permissions requirements, see [Permissions](MultiRegionAccessPointPermissions.md).

------
#### [ AWS CLI ]

The following example data plane request lists the objects in the bucket that underlies the Multi-Region Access Point that's specified by the ARN. To use this example, replace the *`user input placeholders`* with your own information.

```
aws s3api list-objects-v2 --bucket arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap
```

------
#### [ SDK for Java ]

```
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.ListObjectsV2Request;

S3Client s3Client = S3Client.builder()
        .build();

String bucketName = "arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap";

ListObjectsV2Request listObjectsRequest = ListObjectsV2Request
    .builder()
    .bucket(bucketName)
    .build();

s3Client.listObjectsV2(listObjectsRequest);
```

------
#### [ SDK for JavaScript ]

```
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

const client = new S3Client({});

async function listObjectsExample() {
    const command = new ListObjectsV2Command({
        Bucket: "arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap",
    });
    
    try {
        const response = await client.send(command);
        console.log(response);
    } catch (err) {
        console.error(err);
    }
}
```

------
#### [ SDK for Python ]

```
import boto3

client = boto3.client('s3')
client.list_objects_v2(
    Bucket='arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap'
)
```

------

## Use a presigned URL with Multi-Region Access Points
<a name="use-presigned-url-mrap"></a>

You can use a presigned URL to generate a URL that allows others to access your Amazon S3 buckets through an Amazon S3 Multi-Region Access Point. When you create a presigned URL, you associate it with a specific object action, such as an S3 upload (`PutObject`) or an S3 download (`GetObject`). You can share the presigned URL, and anyone with access to it can perform the action embedded in the URL as if they were the original signing user.

Presigned URLs have an expiration date. When the expiration time is reached, the URL will no longer work. 

Before you use S3 Multi-Region Access Points with presigned URLs, check the [AWS SDK compatibility](https://docs.aws.amazon.com/sdkref/latest/guide/feature-s3-mrap.html#s3-mrap-sdk-compat) with the SigV4A algorithm. Verify that your SDK version supports SigV4A as the signing implementation that is used to sign the global AWS Region requests. For more information about using presigned URLs with Amazon S3, see [Sharing objects by using presigned URLs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html).

The following examples show how you can use Multi-Region Access Points with presigned URLs. To use these examples, replace the *`user input placeholders`* with your own information.

------
#### [ AWS CLI ]

```
aws s3 presign arn:aws:s3::123456789012:accesspoint/MultiRegionAccessPoint_alias/example-file.txt
```

------
#### [ SDK for Python ]

```
import boto3

s3_client = boto3.client('s3')
url = s3_client.generate_presigned_url(
    HttpMethod='PUT',
    ClientMethod='put_object',
    Params={
        'Bucket': 'arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap',
        'Key': 'example-file'})
```

------
#### [ SDK for Java ]

```
import java.time.Duration;

import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;
import software.amazon.awssdk.services.s3.presigner.model.PresignedGetObjectRequest;
import software.amazon.awssdk.services.sts.auth.StsAssumeRoleCredentialsProvider;

// This example assumes that stsClient (an StsClient) and assumeRole
// (an AssumeRoleRequest) are already defined.
S3Presigner s3Presigner = S3Presigner.builder()
    .credentialsProvider(StsAssumeRoleCredentialsProvider.builder()
        .refreshRequest(assumeRole)
        .stsClient(stsClient)
        .build())
    .build();

GetObjectRequest getObjectRequest = GetObjectRequest.builder()
    .bucket("arn:aws:s3::123456789012:accesspoint/abcdef0123456.mrap")
    .key("example-file")
    .build();

GetObjectPresignRequest preSignedReq = GetObjectPresignRequest.builder()
    .getObjectRequest(getObjectRequest)
    .signatureDuration(Duration.ofMinutes(10))
    .build();

PresignedGetObjectRequest presignedGetObjectRequest = s3Presigner.presignGetObject(preSignedReq);
```

------

**Note**  
To use SigV4A with temporary security credentials—for example, when using IAM roles—make sure that you request the temporary credentials from a Regional endpoint in AWS Security Token Service (AWS STS), instead of a global endpoint. If you use the global endpoint for AWS STS (`sts.amazonaws.com`), AWS STS generates temporary credentials from a global endpoint, which isn't supported by SigV4A. As a result, you'll get an error. To resolve this issue, use any of the listed [Regional endpoints for AWS STS](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html#id_credentials_region-endpoints).
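
For example, the following SDK for Python (Boto3) sketch requests temporary credentials from a Regional AWS STS endpoint. This is a minimal sketch: the Region, role ARN, and session name are placeholders.

```
import boto3

# Use a Regional AWS STS endpoint (not the global endpoint) so that the
# temporary credentials work with SigV4A.
sts = boto3.client(
    'sts',
    region_name='us-east-1',
    endpoint_url='https://sts.us-east-1.amazonaws.com',
)

# The role ARN and session name are placeholders.
credentials = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/example-role',
    RoleSessionName='mrap-presigning',
)['Credentials']
```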

## Use a bucket that's configured with Requester Pays with Multi-Region Access Points
<a name="use-requester-pays-mrap"></a>

If an S3 bucket that's associated with your Multi-Region Access Point is [configured to use Requester Pays](https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysExamples.html), the requester pays for the bucket request, the download, and any Multi-Region Access Point-related costs. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

Here's an example of a data plane request to a Multi-Region Access Point that is connected to a Requester Pays bucket.

------
#### [ AWS CLI ]

To download objects from a Multi-Region Access Point that is connected to a Requester Pays bucket, you must specify `--request-payer requester` as part of your [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-object.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-object.html) request. You must also specify the name of the file in the bucket and the location where the downloaded file should be stored.

```
aws s3api get-object --bucket MultiRegionAccessPoint_ARN --request-payer requester --key example-file-in-bucket.txt example-location-of-downloaded-file.txt 
```

------
#### [ SDK for Java ]

To download objects from a Multi-Region Access Point that is connected to a Requester Pays bucket, you must specify `RequestPayer.REQUESTER` as part of your `GetObject` request. You must also specify the name of the file in the bucket and the location where the downloaded file should be stored.

```
GetObjectResponse getObjectResponse = s3Client.getObject(GetObjectRequest.builder()
    .key("example-file.txt")
    .bucket("arn:aws:s3::
123456789012:accesspoint/abcdef0123456.mrap")
    .requestPayer(RequestPayer.REQUESTER)
    .build()
).response();
```

------

# Monitoring and logging requests made through a Multi-Region Access Point to underlying resources
<a name="MultiRegionAccessPointMonitoring"></a>

Amazon S3 logs requests made through Multi-Region Access Points and requests made to the API operations that manage them, such as `CreateMultiRegionAccessPoint` and `GetMultiRegionAccessPointPolicy`. Requests made to Amazon S3 through a Multi-Region Access Point appear in your Amazon S3 server access logs and AWS CloudTrail logs with the Multi-Region Access Point hostname. A Multi-Region Access Point hostname takes the form `MRAP_alias.accesspoint.s3-global.amazonaws.com`. For example, suppose that you have the following bucket and Multi-Region Access Point configuration: 
+ A bucket named `my-bucket-usw2` in the Region `us-west-2` that contains the object `my-image.jpg`. 
+ A bucket named `my-bucket-aps1` in the Region `ap-south-1` that contains the object `my-image.jpg`. 
+  A bucket named `my-bucket-euc1` in the Region `eu-central-1` that doesn’t contain an object named `my-image.jpg`. 
+  A Multi-Region Access Point named `my-mrap` with the alias `mfzwi23gnjvgw.mrap` that is configured to fulfill requests from all three buckets. 
+  Your AWS account ID is `123456789012`. 

A request made to retrieve `my-image.jpg` directly through any of the buckets appears in your logs with a hostname of `bucket_name.s3.Region.amazonaws.com`. 

If you make the request through the Multi-Region Access Point instead, Amazon S3 first determines which of the buckets in the different Regions is closest. After Amazon S3 determines which bucket to use to fulfill the request, it sends the request to that bucket and logs the operation by using the Multi-Region Access Point hostname. In this example, if Amazon S3 relays the request to `my-bucket-aps1`, your logs will reflect a successful `GET` request for `my-image.jpg` from `my-bucket-aps1`, using a hostname of `mfzwi23gnjvgw.mrap.accesspoint.s3-global.amazonaws.com`. 

**Important**  
Multi-Region Access Points aren't aware of the data contents of the underlying buckets. Therefore, the bucket that gets the request might not contain the requested data. For example, if Amazon S3 determines that the `my-bucket-euc1` bucket is the closest, your logs will reflect a failed `GET` request for `my-image.jpg` from `my-bucket-euc1`, using a hostname of `mfzwi23gnjvgw.mrap.accesspoint.s3-global.amazonaws.com`. If the request was routed to `my-bucket-usw2` instead, your logs would indicate a successful `GET` request.

 For more information about Amazon S3 server access logs, see [Logging requests with server access logging](ServerLogs.md). For more information about AWS CloudTrail, see [What is AWS CloudTrail?](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) in the *AWS CloudTrail User Guide*. 

## Monitoring and logging requests made to Multi-Region Access Point management API operations
<a name="MonitoringMultiRegionAccessPointAPIs"></a>

Amazon S3 provides several API operations to manage Multi-Region Access Points, such as `CreateMultiRegionAccessPoint` and `GetMultiRegionAccessPointPolicy`. When you make requests to these API operations by using the AWS Command Line Interface (AWS CLI), AWS SDKs, or Amazon S3 REST API, Amazon S3 processes these requests asynchronously. Provided that you have the appropriate permissions for the request, Amazon S3 returns a token for these requests. You can use this token with `DescribeMultiRegionAccessPointOperation` to view the status of ongoing asynchronous operations. Amazon S3 processes `DescribeMultiRegionAccessPointOperation` requests synchronously. To view the status of asynchronous requests, you can use the Amazon S3 console, AWS CLI, SDKs, or REST API. 

**Note**  
The console displays only the status of asynchronous requests that were made within the previous 14 days. To view the status of older requests, use the AWS CLI, SDKs, or REST API. 

Asynchronous management operations can be in one of several states; a polling example follows the list. 

NEW  
 Amazon S3 has received the request and is preparing to perform the operation. 

IN_PROGRESS  
 Amazon S3 is currently performing the operation. 

SUCCESS  
 The operation succeeded. The response includes relevant information, such as the Multi-Region Access Point alias for a `CreateMultiRegionAccessPoint` request. 

FAILED  
 The operation failed. The response includes an error message that indicates the reason for the request failure. 
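
For example, the following SDK for Python (Boto3) sketch polls an asynchronous request until it reaches one of the terminal states described above. This is a minimal sketch: it assumes that `token` holds the tracking token (`RequestTokenARN`) returned by an earlier request, and the account ID is a placeholder.

```
import time

import boto3

s3control = boto3.client('s3control', region_name='us-west-2')

def wait_for_operation(token, account_id='111122223333'):
    # Poll until the operation leaves the NEW and IN_PROGRESS states.
    while True:
        operation = s3control.describe_multi_region_access_point_operation(
            AccountId=account_id,
            RequestTokenARN=token,
        )['AsyncOperation']
        if operation['RequestStatus'] in ('SUCCESS', 'FAILED'):
            return operation
        time.sleep(30)
```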

## Using AWS CloudTrail with Multi-Region Access Points
<a name="MultiRegionAccessPointCloudTrail"></a>

You can use AWS CloudTrail to view, search, download, archive, analyze, and respond to account activity across your AWS infrastructure. With Multi-Region Access Points and CloudTrail logging, you can identify the following: 
+ Who or what took which action
+ Which resources were acted upon
+ When the event occurred
+ Other details regarding the event

You can use this logging information to help you analyze and respond to activity that occurred through your Multi-Region Access Points. 

### How to set up AWS CloudTrail for Multi-Region Access Points
<a name="MultiRegionAccessPointCTSetup"></a>

To enable CloudTrail logging for any operations related to creating or maintaining Multi-Region Access Points, you must configure CloudTrail logging to record the events in the US West (Oregon) Region. You must set up your logging configuration this way regardless of which Region you are in when making the request, or which Regions the Multi-Region Access Point supports. All requests to create or maintain a Multi-Region Access Point are routed through the US West (Oregon) Region. We recommend that you either add this Region to an existing trail or create a new trail that contains this Region and all the Regions associated with the Multi-Region Access Point.
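
For example, the following SDK for Python (Boto3) sketch creates a multi-Region trail, which records the US West (Oregon) management events along with events in every other Region. This is a minimal sketch: the trail and bucket names are placeholders, and the destination bucket is assumed to already have the required CloudTrail bucket policy.

```
import boto3

cloudtrail = boto3.client('cloudtrail', region_name='us-west-2')

# A multi-Region trail captures the us-west-2 management events for
# Multi-Region Access Points along with events in all other Regions.
cloudtrail.create_trail(
    Name='example-mrap-trail',
    S3BucketName='amzn-s3-demo-logging-bucket',
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name='example-mrap-trail')
```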

Amazon S3 logs all requests made through a Multi-Region Access Point and requests made to the API operations that manage access points, such as `CreateMultiRegionAccessPoint` and `GetMultiRegionAccessPointPolicy`. When you log these requests through a Multi-Region Access Point, they appear in your AWS CloudTrail logs with the hostname of the Multi-Region Access Point. For example, if you make requests to a bucket through a Multi-Region Access Point with the alias `mfzwi23gnjvgw.mrap`, the entries in the CloudTrail log will have a hostname of `mfzwi23gnjvgw.mrap.accesspoint.s3-global.amazonaws.com`. 

Multi-Region Access Points route requests to the closest bucket. Because of this behavior, when you review the CloudTrail logs for a Multi-Region Access Point, you will see requests made to the underlying buckets. Some of those requests might be direct requests to a bucket that were not routed through the Multi-Region Access Point: even when a bucket is part of a Multi-Region Access Point, requests can still be made to that bucket directly. Keep this in mind when reviewing traffic. 

There are asynchronous events involved with creating and managing Multi-Region Access Points. Asynchronous requests don't have completion events in the CloudTrail log. For more information about asynchronous requests, see [Monitoring and logging requests made to Multi-Region Access Point management API operations](#MonitoringMultiRegionAccessPointAPIs). 

 For more information about AWS CloudTrail, see [What is AWS CloudTrail?](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) in the *AWS CloudTrail User Guide*. 

# Retaining multiple versions of objects with S3 Versioning
<a name="Versioning"></a>

Versioning in Amazon S3 is a means of keeping multiple variants of an object in the same bucket. You can use the S3 Versioning feature to preserve, retrieve, and restore every version of every object stored in your buckets. With versioning you can recover more easily from both unintended user actions and application failures. After versioning is enabled for a bucket, if Amazon S3 receives multiple write requests for the same object simultaneously, it stores all of those objects.

Versioning-enabled buckets can help you recover objects from accidental deletion or overwrite. For example, if you delete an object, Amazon S3 inserts a delete marker instead of removing the object permanently. The delete marker becomes the current object version. If you overwrite an object, it results in a new object version in the bucket. You can always restore the previous version. For more information, see [Deleting object versions from a versioning-enabled bucket](DeletingObjectVersions.md). 

By default, S3 Versioning is disabled on buckets, and you must explicitly enable it. For more information, see [Enabling versioning on buckets](manage-versioning-examples.md).

**Note**  
The SOAP API does not support S3 Versioning. SOAP support over HTTP is deprecated, but it is still available over HTTPS. New Amazon S3 features are not supported for SOAP.
Normal Amazon S3 rates apply for every version of an object stored and transferred. Each version of an object is the entire object; it is not just a diff from the previous version. Thus, if you have three versions of an object stored, you are charged for three objects. 

## Unversioned, versioning-enabled, and versioning-suspended buckets
<a name="versioning-states"></a>

Buckets can be in one of three states: 
+ Unversioned (the default)
+ Versioning-enabled
+ Versioning-suspended

You enable and suspend versioning at the bucket level. After you version-enable a bucket, it can never return to an unversioned state. But you can *suspend* versioning on that bucket.

The versioning state applies to all (never some) of the objects in that bucket. When you enable versioning in a bucket, all new objects are versioned and given a unique version ID. Objects that already existed in the bucket at the time versioning was enabled will thereafter *always* be versioned and given a unique version ID when they are modified by future requests. Note the following: 
+ Objects that are stored in your bucket before you set the versioning state have a version ID of `null`. When you enable versioning, existing objects in your bucket do not change. What changes is how Amazon S3 handles the objects in future requests. For more information, see [Working with objects in a versioning-enabled bucket](manage-objects-versioned-bucket.md).
+ The bucket owner (or any user with appropriate permissions) can suspend versioning to stop accruing object versions. When you suspend versioning, existing objects in your bucket do not change. What changes is how Amazon S3 handles objects in future requests. For more information, see [Working with objects in a versioning-suspended bucket](VersionSuspendedBehavior.md).

## Using S3 Versioning with S3 Lifecycle
<a name="versioning-lifecycle"></a>

To customize your data retention approach and control storage costs, use object versioning with S3 Lifecycle. For more information, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md). For information about creating S3 Lifecycle configurations using the AWS Management Console, AWS CLI, AWS SDKs, or the REST API, see [Setting an S3 Lifecycle configuration on a bucket](how-to-set-lifecycle-configuration-intro.md).

**Important**  
If you have an object expiration lifecycle configuration in your unversioned bucket and you want to maintain the same permanent delete behavior when you enable versioning, you must add a noncurrent expiration configuration. The noncurrent expiration lifecycle configuration manages the deletes of the noncurrent object versions in the versioning-enabled bucket. (A versioning-enabled bucket maintains one current, and zero or more noncurrent, object versions.) For more information, see [Setting an S3 Lifecycle configuration on a bucket](how-to-set-lifecycle-configuration-intro.md).
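
For example, the following SDK for Python (Boto3) sketch applies a rule that expires current versions after 30 days and permanently deletes noncurrent versions 30 days after they become noncurrent. This is a minimal sketch: the bucket name, rule ID, and day counts are placeholders.

```
import boto3

s3 = boto3.client('s3')

# Expire current versions and clean up noncurrent versions so that a
# versioning-enabled bucket keeps the same permanent-delete behavior.
s3.put_bucket_lifecycle_configuration(
    Bucket='amzn-s3-demo-bucket1',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'expire-all-versions',
            'Filter': {'Prefix': ''},
            'Status': 'Enabled',
            'Expiration': {'Days': 30},
            'NoncurrentVersionExpiration': {'NoncurrentDays': 30},
        }]
    },
)
```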

For information about working with S3 Versioning, see the following topics.

**Topics**
+ [Unversioned, versioning-enabled, and versioning-suspended buckets](#versioning-states)
+ [Using S3 Versioning with S3 Lifecycle](#versioning-lifecycle)
+ [How S3 Versioning works](versioning-workflows.md)
+ [Enabling versioning on buckets](manage-versioning-examples.md)
+ [Configuring MFA delete](MultiFactorAuthenticationDelete.md)
+ [Working with objects in a versioning-enabled bucket](manage-objects-versioned-bucket.md)
+ [Working with objects in a versioning-suspended bucket](VersionSuspendedBehavior.md)
+ [Troubleshooting versioning](troubleshooting-versioning.md)

# How S3 Versioning works
<a name="versioning-workflows"></a>

You can use S3 Versioning to keep multiple versions of an object in one bucket so that you can restore objects that are accidentally deleted or overwritten. For example, if you apply S3 Versioning to a bucket, the following changes occur: 
+ If you delete an object, instead of removing the object permanently, Amazon S3 inserts a delete marker, which becomes the current object version. You can then restore the previous version. For more information, see [Deleting object versions from a versioning-enabled bucket](DeletingObjectVersions.md).
+ If you overwrite an object, Amazon S3 adds a new object version in the bucket. The previous version remains in the bucket and becomes a noncurrent version. You can restore the previous version.

**Note**  
Normal Amazon S3 rates apply for every version of an object that is stored and transferred. Each version of an object is the entire object; it is not a diff from the previous version. Thus, if you have three versions of an object stored, you are charged for three objects.

Each S3 bucket that you create has a *versioning* subresource associated with it. (For more information, see [General purpose buckets configuration options](UsingBucket.md#bucket-config-options-intro).) By default, your bucket is *unversioned*, and the versioning subresource stores the empty versioning configuration, as follows.

```
<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> 
</VersioningConfiguration>
```

To enable versioning, you can send a request to Amazon S3 with a versioning configuration that includes an `Enabled` status. 

```
<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> 
  <Status>Enabled</Status> 
</VersioningConfiguration>
```

To suspend versioning, you set the status value to `Suspended`.
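
For example, the following SDK for Python (Boto3) sketch suspends versioning on a bucket; the bucket name is a placeholder.

```
import boto3

s3 = boto3.client('s3')

# Suspending versioning keeps all existing object versions but stops
# the accrual of new versions for future requests.
s3.put_bucket_versioning(
    Bucket='amzn-s3-demo-bucket1',
    VersioningConfiguration={'Status': 'Suspended'},
)
```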

**Note**  
When you enable versioning on a bucket for the first time, it might take a short amount of time for the change to be fully propagated. While this change is propagating, you may encounter intermittent `HTTP 404 NoSuchKey` errors for requests to objects created or updated after enabling versioning. We recommend that you wait for 15 minutes after enabling versioning before issuing write operations (`PUT` or `DELETE`) on objects in the bucket. 

The bucket owner and all authorized AWS Identity and Access Management (IAM) users can enable versioning. The bucket owner is the AWS account that created the bucket. For more information about permissions, see [Identity and Access Management for Amazon S3](security-iam.md).

For more information about enabling and disabling S3 Versioning by using the AWS Management Console, AWS Command Line Interface (AWS CLI), or REST API, see [Enabling versioning on buckets](manage-versioning-examples.md).

**Topics**
+ [Version IDs](#version-ids)
+ [Versioning workflows](#versioning-workflows-examples)

## Version IDs
<a name="version-ids"></a>

If you enable versioning for a bucket, Amazon S3 automatically generates a unique version ID for the object that is being stored. For example, in one bucket you can have two objects with the same key (object name) but different version IDs, such as `photo.gif` (version 111111) and `photo.gif` (version 121212).

![\[Diagram depicting a versioning-enabled bucket that has two objects with the same key but different version IDs.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_Enabled.png)


Each object has a version ID, whether or not S3 Versioning is enabled. If S3 Versioning is not enabled, Amazon S3 sets the value of the version ID to `null`. If you enable S3 Versioning, Amazon S3 assigns a version ID value for the object. This value distinguishes that object from other versions of the same key.

When you enable S3 Versioning on an existing bucket, objects that are already stored in the bucket are unchanged. Their version IDs (`null`), contents, and permissions remain the same. After you enable S3 Versioning, each object that is added to the bucket gets a version ID, which distinguishes it from other versions of the same key. 

Only Amazon S3 generates version IDs, and they cannot be edited. Version IDs are Unicode, UTF-8 encoded, URL-ready, opaque strings that are no more than 1,024 bytes long. The following is an example:

`3sL4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nr8X8gdRQBpUMLUo`

**Note**  
For simplicity, the other examples in this topic use much shorter IDs.
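
As a quick check of these IDs, the following SDK for Python (Boto3) sketch lists every version of `photo.gif`; the bucket name is a placeholder. Objects written before versioning was enabled report a version ID of `null`.

```
import boto3

s3 = boto3.client('s3')

# Each entry carries its own VersionId; 'IsLatest' marks the current one.
response = s3.list_object_versions(
    Bucket='amzn-s3-demo-bucket1',
    Prefix='photo.gif',
)
for version in response.get('Versions', []):
    print(version['Key'], version['VersionId'], version['IsLatest'])
```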



## Versioning workflows
<a name="versioning-workflows-examples"></a>

When you `PUT` an object in a versioning-enabled bucket, the noncurrent version is not overwritten. As shown in the following figure, when a new version of `photo.gif` is `PUT` into a bucket that already contains an object with the same name, the following behavior occurs:
+ The original object (ID = 111111) remains in the bucket.
+ Amazon S3 generates a new version ID (121212), and adds this newer version of the object to the bucket.

![\[Diagram depicting how S3 Versioning works when you PUT an object in a versioning-enabled bucket.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_PUT_versionEnabled3.png)


With this functionality, you can retrieve a previous version of an object if an object has been accidentally overwritten or deleted.

When you `DELETE` an object, all versions remain in the bucket, and Amazon S3 inserts a delete marker, as shown in the following figure.

![\[Illustration that shows a delete marker insertion.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_DELETE_versioningEnabled.png)


The delete marker becomes the current version of the object. By default, `GET` requests retrieve the most recently stored version. Performing a `GET Object` request when the current version is a delete marker returns a `404 Not Found` error, as shown in the following figure.

![\[Illustration that shows a GetObject call for a delete marker returning a 404 (Not Found) error.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_DELETE_NoObjectFound.png)


However, you can `GET` a noncurrent version of an object by specifying its version ID. In the following figure, you `GET` a specific object version, 111111. Amazon S3 returns that object version even though it's not the current version.

For more information, see [Retrieving object versions from a versioning-enabled bucket](RetrievingObjectVersions.md).

![\[Diagram depicting how S3 Versioning works when you GET a noncurrent version in a versioning-enabled bucket.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_GET_Versioned3.png)


You can permanently delete an object by specifying the version that you want to delete. Only the owner of an Amazon S3 bucket or an authorized IAM user can permanently delete a version. If your `DELETE` operation specifies the `versionId`, that object version is permanently deleted, and Amazon S3 doesn't insert a delete marker.

![\[Diagram that shows how DELETE versionId permanently deletes a specific object version.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_DELETE_versioningEnabled2.png)
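
The following SDK for Python (Boto3) sketch walks through these behaviors. This is a minimal sketch: the bucket name is a placeholder, the bucket is assumed to be versioning-enabled, and `111111` stands in for a real version ID.

```
import boto3

s3 = boto3.client('s3')
bucket = 'amzn-s3-demo-bucket1'

# A simple DELETE inserts a delete marker; a plain GET now returns 404.
s3.delete_object(Bucket=bucket, Key='photo.gif')

# The noncurrent version can still be retrieved by its version ID.
s3.get_object(Bucket=bucket, Key='photo.gif', VersionId='111111')

# A DELETE that specifies a version ID permanently removes that version;
# no delete marker is inserted.
s3.delete_object(Bucket=bucket, Key='photo.gif', VersionId='111111')
```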


You can add more security by configuring a bucket to enable multi-factor authentication (MFA) delete. When you enable MFA delete for a bucket, the bucket owner must include two forms of authentication in any request to delete a version or change the versioning state of the bucket. For more information, see [Configuring MFA delete](MultiFactorAuthenticationDelete.md).

### When are new versions created for an object?
<a name="versioning-workflows-new-versions"></a>

New versions of objects are created only when you `PUT` a new object. Be aware that certain actions, such as `CopyObject`, work by implementing a `PUT` operation.

Some actions that modify the current object don't create a new version because they don't `PUT` a new object. This includes actions such as changing the tags on an object. 
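
The following SDK for Python (Boto3) sketch illustrates the difference; the bucket name is a placeholder and the bucket is assumed to be versioning-enabled.

```
import boto3

s3 = boto3.client('s3')
bucket = 'amzn-s3-demo-bucket1'

# Each PUT of the same key creates a new version.
first = s3.put_object(Bucket=bucket, Key='photo.gif', Body=b'v1')
second = s3.put_object(Bucket=bucket, Key='photo.gif', Body=b'v2')
assert first['VersionId'] != second['VersionId']

# Changing the tags modifies the current version in place;
# no new version is created.
s3.put_object_tagging(
    Bucket=bucket,
    Key='photo.gif',
    Tagging={'TagSet': [{'Key': 'status', 'Value': 'archived'}]},
)
```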

**Important**  
If you notice a significant increase in the number of HTTP 503 (Service Unavailable) responses received for Amazon S3 `PUT` or `DELETE` object requests to a bucket that has S3 Versioning enabled, you might have one or more objects in the bucket for which there are millions of versions. For more information, see the S3 Versioning section of [Troubleshooting versioning](troubleshooting-versioning.md).

# Enabling versioning on buckets
<a name="manage-versioning-examples"></a>

You can use S3 Versioning to keep multiple versions of an object in one bucket. This section provides examples of how to enable versioning on a bucket using the console, REST API, AWS SDKs, and AWS Command Line Interface (AWS CLI). 

**Note**  
After enabling versioning on a bucket for the first time, it may take up to 15 minutes for the change to fully propagate across the S3 system. During this time, `GET` requests for objects created or updated after enabling versioning may result in `HTTP 404 NoSuchKey` errors. We recommend waiting 15 minutes after enabling versioning before performing any write operations (`PUT` or `DELETE`) on objects in the bucket. This waiting period helps avoid potential issues with object visibility and version tracking.

For more information about S3 Versioning, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md). For information about working with objects that are in versioning-enabled buckets, see [Working with objects in a versioning-enabled bucket](manage-objects-versioned-bucket.md).

To learn more about how to use S3 Versioning to protect data, see [Tutorial: Protecting data on Amazon S3 against accidental deletion or application bugs using S3 Versioning, S3 Object Lock, and S3 Replication](https://aws.amazon.com/getting-started/hands-on/protect-data-on-amazon-s3/?ref=docs_gateway/amazons3/manage-versioning-examples.html).

Each S3 bucket that you create has a *versioning* subresource associated with it. (For more information, see [General purpose buckets configuration options](UsingBucket.md#bucket-config-options-intro).) By default, your bucket is *unversioned*, and the versioning subresource stores the empty versioning configuration, as follows.

```
<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> 
</VersioningConfiguration>
```

To enable versioning, you can send a request to Amazon S3 with a versioning configuration that includes a status. 

```
<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> 
  <Status>Enabled</Status> 
</VersioningConfiguration>
```

To suspend versioning, you set the status value to `Suspended`.

The bucket owner and all authorized users can enable versioning. The bucket owner is the AWS account that created the bucket (the root account). For more information about permissions, see [Identity and Access Management for Amazon S3](security-iam.md).

The following sections provide more detail about enabling S3 Versioning using the console, AWS CLI, and the AWS SDKs.

## Using the S3 console
<a name="enable-versioning"></a>

Follow these steps to use the AWS Management Console to enable versioning on an S3 bucket.

**To enable or disable versioning on an S3 general purpose bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to enable versioning for.

1. Choose **Properties**.

1. Under **Bucket Versioning**, choose **Edit**.

1. Choose **Suspend** or **Enable**, and then choose **Save changes**.

**Note**  
You can use AWS multi-factor authentication (MFA) with versioning. When you use MFA with versioning, you must provide your AWS account’s access keys and a valid code from the account’s MFA device to permanently delete an object version or suspend or reactivate versioning.   
To use MFA with versioning, you enable `MFA Delete`. However, you can't enable `MFA Delete` using the AWS Management Console. You must use the AWS Command Line Interface (AWS CLI) or the API. For more information, see [Configuring MFA delete](MultiFactorAuthenticationDelete.md).

## Using the AWS CLI
<a name="manage-versioning-examples-cli"></a>

The following example enables versioning on an S3 general purpose bucket. 

```
aws s3api put-bucket-versioning --bucket amzn-s3-demo-bucket1 --versioning-configuration Status=Enabled
```

The following example enables S3 Versioning and multi-factor authentication (MFA) delete on a bucket for a physical MFA device. For physical MFA devices, in the `--mfa` parameter, pass a concatenation of the MFA device serial number, a space character, and the value that is displayed on your authentication device.

```
aws s3api put-bucket-versioning --bucket amzn-s3-demo-bucket1 --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "SerialNumber 123456"
```

The following example enables S3 Versioning and multi-factor authentication (MFA) delete on a bucket for a virtual MFA device. For virtual MFA devices, in the `--mfa` parameter, pass a concatenation of the MFA device ARN, a space character, and the value that is displayed on your authentication device.

```
aws s3api put-bucket-versioning --bucket amzn-s3-demo-bucket1 --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "arn:aws:iam::account-id:mfa/root-account-mfa-device 123789"
```

**Note**  
Using MFA delete requires an approved physical or virtual authentication device. For more information about using MFA delete in Amazon S3, see [Configuring MFA delete](MultiFactorAuthenticationDelete.md).

For more information about enabling versioning using the AWS CLI, see [put-bucket-versioning](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-bucket-versioning.html) in the *AWS CLI Command Reference*.

## Using the AWS SDKs
<a name="manage-versioning-examples-sdk"></a>

The following examples enable versioning on a bucket and then retrieve versioning status using the AWS SDK for Java and the AWS SDK for .NET. For information about using other AWS SDKs, see the [AWS Developer Center](https://aws.amazon.com/code/).

------
#### [ .NET ]

For information about setting up and running the code examples, see [Getting Started with the AWS SDK for .NET](https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/net-dg-setup.html) in the *AWS SDK for .NET Developer Guide*. 

```
using System;
using Amazon.S3;
using Amazon.S3.Model;

namespace s3.amazon.com.rproxy.goskope.com.docsamples
{
    class BucketVersioningConfiguration
    {
        static string bucketName = "*** bucket name ***";

        public static void Main(string[] args)
        {
            using (var client = new AmazonS3Client(Amazon.RegionEndpoint.USEast1))
            {
                try
                {
                    EnableVersioningOnBucket(client);
                    string bucketVersioningStatus = RetrieveBucketVersioningConfiguration(client);
                }
                catch (AmazonS3Exception amazonS3Exception)
                {
                    if (amazonS3Exception.ErrorCode != null &&
                        (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId")
                        ||
                        amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
                    {
                        Console.WriteLine("Check the provided AWS Credentials.");
                        Console.WriteLine(
                        "To sign up for service, go to http://aws.amazon.com/s3");
                    }
                    else
                    {
                        Console.WriteLine(
                         "Error occurred. Message:'{0}' when listing objects",
                         amazonS3Exception.Message);
                    }
                }
            }

            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
        }

        static void EnableVersioningOnBucket(IAmazonS3 client)
        {
            PutBucketVersioningRequest request = new PutBucketVersioningRequest
            {
                BucketName = bucketName,
                VersioningConfig = new S3BucketVersioningConfig
                {
                    Status = VersionStatus.Enabled
                }
            };

            PutBucketVersioningResponse response = client.PutBucketVersioning(request);
        }

        static string RetrieveBucketVersioningConfiguration(IAmazonS3 client)
        {
            GetBucketVersioningRequest request = new GetBucketVersioningRequest
            {
                BucketName = bucketName
            };

            GetBucketVersioningResponse response = client.GetBucketVersioning(request);
            return response.VersioningConfig.Status;
        }
    }
}
```

------
#### [ Java ]

For instructions on how to create and test a working sample, see [Getting Started](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/getting-started.html) in the AWS SDK for Java Developer Guide.

```
import java.io.IOException;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.AmazonS3Exception;
import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
import com.amazonaws.services.s3.model.SetBucketVersioningConfigurationRequest;

public class BucketVersioningConfigurationExample {
    public static String bucketName = "*** bucket name ***"; 
    public static AmazonS3Client s3Client;

    public static void main(String[] args) throws IOException {
        s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
        s3Client.setRegion(Region.getRegion(Regions.US_EAST_1));
        try {
            // 1. Enable versioning on the bucket.
            BucketVersioningConfiguration configuration =
                    new BucketVersioningConfiguration().withStatus("Enabled");

            SetBucketVersioningConfigurationRequest setBucketVersioningConfigurationRequest =
                    new SetBucketVersioningConfigurationRequest(bucketName, configuration);

            s3Client.setBucketVersioningConfiguration(setBucketVersioningConfigurationRequest);

            // 2. Get bucket versioning configuration information.
            BucketVersioningConfiguration conf = s3Client.getBucketVersioningConfiguration(bucketName);
            System.out.println("bucket versioning configuration status:    " + conf.getStatus());

        } catch (AmazonS3Exception amazonS3Exception) {
            System.out.format("An Amazon S3 error occurred. Exception: %s", amazonS3Exception.toString());
        } catch (Exception ex) {
            System.out.format("Exception: %s", ex.toString());
        }
    }
}
```

------
#### [ Python ]

The following Python code example creates an Amazon S3 bucket, enables it for versioning, and configures a lifecycle that expires noncurrent object versions after 7 days.

```
def create_versioned_bucket(bucket_name, prefix):
    """
    Creates an Amazon S3 bucket, enables it for versioning, and configures a lifecycle
    that expires noncurrent object versions after 7 days.

    Adding a lifecycle configuration to a versioned bucket is a best practice.
    It helps prevent objects in the bucket from accumulating a large number of
    noncurrent versions, which can slow down request performance.

    Usage is shown in the usage_demo_single_object function at the end of this module.

    :param bucket_name: The name of the bucket to create.
    :param prefix: Identifies which objects are automatically expired under the
                   configured lifecycle rules.
    :return: The newly created bucket.
    """
    try:
        bucket = s3.create_bucket(
            Bucket=bucket_name,
            CreateBucketConfiguration={
                "LocationConstraint": s3.meta.client.meta.region_name
            },
        )
        logger.info("Created bucket %s.", bucket.name)
    except ClientError as error:
        if error.response["Error"]["Code"] == "BucketAlreadyOwnedByYou":
            logger.warning("Bucket %s already exists! Using it.", bucket_name)
            bucket = s3.Bucket(bucket_name)
        else:
            logger.exception("Couldn't create bucket %s.", bucket_name)
            raise

    try:
        bucket.Versioning().enable()
        logger.info("Enabled versioning on bucket %s.", bucket.name)
    except ClientError:
        logger.exception("Couldn't enable versioning on bucket %s.", bucket.name)
        raise

    try:
        expiration = 7
        bucket.LifecycleConfiguration().put(
            LifecycleConfiguration={
                "Rules": [
                    {
                        "Status": "Enabled",
                        "Prefix": prefix,
                        "NoncurrentVersionExpiration": {"NoncurrentDays": expiration},
                    }
                ]
            }
        )
        logger.info(
            "Configured lifecycle to expire noncurrent versions after %s days "
            "on bucket %s.",
            expiration,
            bucket.name,
        )
    except ClientError as error:
        logger.warning(
            "Couldn't configure lifecycle on bucket %s because %s. "
            "Continuing anyway.",
            bucket.name,
            error,
        )

    return bucket
```
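
The functions in this module rely on module-level setup that isn't shown in this excerpt: the `s3` resource, the `logger`, and the imported `ClientError` exception. The following is a minimal sketch of that setup, assuming the default AWS credential chain and a configured Region; the logging configuration shown is illustrative.

```
import logging
from operator import attrgetter  # Used by the rollback example later in this module.

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)

# Module-level service resource that the example functions reference.
s3 = boto3.resource("s3")
```

With this setup in place, a call such as `create_versioned_bucket("amzn-s3-demo-bucket1", "demo-")` creates the bucket (if you own the name) and returns the corresponding `Bucket` resource.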

------

# Configuring MFA delete
<a name="MultiFactorAuthenticationDelete"></a>

When working with S3 Versioning in Amazon S3 buckets, you can optionally add another layer of security by configuring a bucket to enable *MFA (multi-factor authentication) delete*. When you do this, the bucket owner must include two forms of authentication in any request to delete a version or change the versioning state of the bucket.

MFA delete requires additional authentication for either of the following operations:
+ Changing the versioning state of your bucket
+ Permanently deleting an object version

MFA delete requires two forms of authentication together:
+ Your security credentials
+ The concatenation of a valid serial number, a space, and the six-digit code displayed on an approved authentication device

MFA delete provides added security if, for example, your security credentials are compromised. It can also help prevent accidental bucket deletions by requiring the user who initiates the delete action to prove physical possession of an MFA device with an MFA code, which adds an extra layer of friction to the delete action.

To identify buckets that have MFA delete enabled, you can use Amazon S3 Storage Lens metrics. S3 Storage Lens is a cloud-storage analytics feature that you can use to gain organization-wide visibility into object-storage usage and activity. For more information, see [Assessing your storage activity and usage with S3 Storage Lens](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens?icmpid=docs_s3_user_guide_MultiFactorAuthenticationDelete.html). For a complete list of metrics, see [S3 Storage Lens metrics glossary](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens_metrics_glossary.html?icmpid=docs_s3_user_guide_MultiFactorAuthenticationDelete.html).

The bucket owner, the AWS account that created the bucket (root account), and all authorized users can enable versioning. However, only the bucket owner (root account) can enable MFA delete. For more information, see [Securing Access to AWS Using MFA](https://aws.amazon.com/blogs/security/securing-access-to-aws-using-mfa-part-3/) on the AWS Security Blog.

**Note**  
To use MFA delete with versioning, you enable `MFA Delete`. However, you cannot enable `MFA Delete` using the AWS Management Console. You must use the AWS Command Line Interface (AWS CLI) or the API.   
For examples of using MFA delete with versioning, see the examples section in the topic [Enabling versioning on buckets](manage-versioning-examples.md).  
You cannot use MFA delete with lifecycle configurations. For more information about lifecycle configurations and how they interact with other configurations, see [How S3 Lifecycle interacts with other bucket configurations](lifecycle-and-other-bucket-config.md).

To enable or disable MFA delete, you use the same API that you use to configure versioning on a bucket. Amazon S3 stores the MFA delete configuration in the same *versioning* subresource that stores the bucket's versioning status.

```
<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> 
  <Status>VersioningState</Status>
  <MfaDelete>MfaDeleteState</MfaDelete>  
</VersioningConfiguration>
```

To use MFA delete, you can use either a hardware or virtual MFA device to generate an authentication code. The following example shows a generated authentication code displayed on a hardware device.

![\[An example of a generated authentication code displayed on a hardware device.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/MFADevice.png)


MFA delete and MFA-protected API access are features intended to provide protection for different scenarios. You configure MFA delete on a bucket to help ensure that the data in your bucket cannot be accidentally deleted. MFA-protected API access is used to enforce another authentication factor (MFA code) when accessing sensitive Amazon S3 resources. You can require any operations against these Amazon S3 resources to be done with temporary credentials created using MFA. For an example, see [Requiring MFA](example-bucket-policies.md#example-bucket-policies-MFA). 

For more information about how to purchase and activate an authentication device, see [Multi-factor authentication](https://aws.amazon.com/iam/details/mfa/).

## To enable S3 Versioning and configure MFA delete
<a name="enable-versioning-mfa-delete"></a>

### Using the AWS CLI
<a name="enable-versioning-mfa-delete-cli"></a>

The serial number is the number that uniquely identifies the MFA device. For physical MFA devices, this is the unique serial number that's provided with the device. For virtual MFA devices, the serial number is the device ARN. To use the following commands, replace the *user input placeholders* with your own information.

The following example enables S3 Versioning and multi-factor authentication (MFA) delete on a bucket for a physical MFA device. For physical MFA devices, in the `--mfa` parameter, pass a concatenation of the MFA device serial number, a space character, and the value that is displayed on your authentication device.

```
aws s3api put-bucket-versioning --bucket amzn-s3-demo-bucket1 --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "SerialNumber 123456"
```

The following example enables S3 Versioning and multi-factor authentication (MFA) delete on a bucket for a virtual MFA device. For virtual MFA devices, in the `--mfa` parameter, pass a concatenation of the MFA device ARN, a space character, and the value that is displayed on your authentication device.

```
aws s3api put-bucket-versioning --bucket amzn-s3-demo-bucket1 --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "arn:aws:iam::account-id:mfa/root-account-mfa-device 123789"
```
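
If you use the AWS SDK for Python (Boto3) instead of the AWS CLI, the `put_bucket_versioning` operation accepts the same concatenated serial number (or ARN) and authentication code through its `MFA` parameter. The following is a minimal sketch; the bucket name, account ID, and MFA values are placeholders, and the request must be made with the bucket owner's (root account) credentials.

```
import boto3

s3_client = boto3.client("s3")

# The MFA value is the device serial number (or ARN for a virtual device),
# a space, and the current code displayed on the device.
s3_client.put_bucket_versioning(
    Bucket="amzn-s3-demo-bucket1",
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```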

For more information, see the AWS rePost article [How do I turn on MFA delete for my Amazon S3 bucket?](https://repost.aws/knowledge-center/s3-bucket-mfa-delete).

### Using the REST API
<a name="enable-versioning-mfa-delete-rest-api"></a>

For more information about specifying MFA delete using the Amazon S3 REST API, see [PutBucketVersioning](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketVersioning.html) in the *Amazon Simple Storage Service API Reference*.

# Working with objects in a versioning-enabled bucket
<a name="manage-objects-versioned-bucket"></a>

Objects that are stored in an Amazon S3 bucket before you set the versioning state have a version ID of `null`. When you enable versioning, existing objects in your bucket do not change. What changes is how Amazon S3 handles the objects in future requests.

**Transitioning object versions**  
You can define lifecycle configuration rules for objects that have a well-defined lifecycle to transition object versions to the S3 Glacier Flexible Retrieval storage class at a specific time in the object's lifetime. For more information, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md).

The topics in this section explain various object operations in a versioning-enabled bucket. For more information about versioning, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).

**Topics**
+ [Adding objects to versioning-enabled buckets](AddingObjectstoVersioningEnabledBuckets.md)
+ [Listing objects in a versioning-enabled bucket](list-obj-version-enabled-bucket.md)
+ [Retrieving object versions from a versioning-enabled bucket](RetrievingObjectVersions.md)
+ [Deleting object versions from a versioning-enabled bucket](DeletingObjectVersions.md)
+ [Configuring versioned object permissions](VersionedObjectPermissionsandACLs.md)

# Adding objects to versioning-enabled buckets
<a name="AddingObjectstoVersioningEnabledBuckets"></a>

After you enable versioning on a bucket, Amazon S3 automatically adds a unique version ID to every object stored (using `PUT`, `POST`, or `CopyObject`) in the bucket. 

The following figure shows that Amazon S3 adds a unique version ID to an object when it is added to a versioning-enabled bucket. 

![\[Illustration that shows a unique version ID added to an object when it is put in a versioning-enabled bucket.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_PUT_versionEnabled.png)


**Note**  
The version ID values that Amazon S3 assigns are URL safe (can be included as part of a URI).

For more information about versioning, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md). You can add object versions to a versioning-enabled bucket using the console, AWS SDKs, and REST API.

## Using the console
<a name="add-obj-versioning-enabled-bucket-console"></a>

For instructions, see [Uploading objects](upload-objects.md). 

## Using the AWS SDKs
<a name="add-obj-versioning-enabled-bucket-sdk"></a>

For examples of uploading objects using the AWS SDKs for Java, .NET, and PHP, see [Uploading objects](upload-objects.md). The examples for uploading objects in nonversioned and versioning-enabled buckets are the same, although for versioning-enabled buckets, Amazon S3 assigns a version ID to each object. For nonversioned buckets, the version ID is null. 

For information about using other AWS SDKs, see the [AWS Developer Center](https://aws.amazon.com/code/). 

## Using the REST API
<a name="add-obj-versioning-enabled-bucket-rest"></a>

**To add objects to versioning-enabled buckets**

1. Enable versioning on a bucket using a `PutBucketVersioning` request.

   For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTVersioningStatus.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTVersioningStatus.html) in the *Amazon Simple Storage Service API Reference*.

1. Send a `PUT`, `POST`, or `CopyObject` request to store an object in the bucket.

When you add an object to a versioning-enabled bucket, Amazon S3 returns the version ID of the object in the `x-amz-version-id` response header, as shown in the following example.

```
x-amz-version-id: 3/L4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY
```
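
The AWS SDKs surface the same value. For example, with the AWS SDK for Python (Boto3), the following minimal sketch (the bucket and key names are placeholders) reads the new version ID from the `VersionId` field of the `put_object` response:

```
import boto3

s3_client = boto3.client("s3")

response = s3_client.put_object(
    Bucket="amzn-s3-demo-bucket1",
    Key="photo.gif",
    Body=b"example object data",
)
# For a versioning-enabled bucket, VersionId is the ID that Amazon S3
# assigned to the newly stored object version.
print(response["VersionId"])
```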

# Listing objects in a versioning-enabled bucket
<a name="list-obj-version-enabled-bucket"></a>

This section provides examples of listing object versions from a versioning-enabled bucket. Amazon S3 stores object version information in the *versions* subresource that is associated with the bucket. For more information, see [General purpose buckets configuration options](UsingBucket.md#bucket-config-options-intro). To list the objects in a versioning-enabled bucket, you need the `s3:ListBucketVersions` permission.

## Using the S3 console
<a name="view-object-versions"></a>

Follow these steps to use the Amazon S3 console to see the different versions of an object.

**To see multiple versions of an object**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the **Buckets** list, choose the name of the bucket that contains the object.

1. To see a list of the versions of the objects in the bucket, choose the **Show versions** switch. 

   For each object version, the console shows a unique version ID, the date and time the object version was created, and other properties. (Objects stored in your bucket before you set the versioning state have a version ID of **null**.)

   To list the objects without the versions, turn off the **Show versions** switch.

You also can view, download, and delete object versions in the object overview pane on the console. For more information, see [Viewing object properties in the Amazon S3 console](view-object-properties.md).

**Note**  
 To access object versions older than 300 versions, you must use the AWS CLI or the object's URL.

**Important**  
You can undelete an object only if it was deleted as the latest (current) version. You can't undelete a previous version of an object that was deleted. For more information, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).

## Using the AWS SDKs
<a name="list-obj-version-enabled-bucket-sdk-examples"></a>

The examples in this section show how to retrieve an object listing from a versioning-enabled bucket. Each request returns up to 1,000 versions, unless you specify a lower number. If the bucket contains more versions than this limit, you send a series of requests to retrieve the list of all versions. This process of returning results in "pages" is called *pagination*.

To show how pagination works, the examples limit each response to two object versions. After retrieving the first page of results, each example checks to determine whether the version list was truncated. If it was, the example continues retrieving pages until all versions have been retrieved. 

**Note**  
The following examples also work with a bucket that isn't versioning-enabled, or for objects that don't have individual versions. In those cases, Amazon S3 returns the object listing with a version ID of `null`.

 For information about using other AWS SDKs, see the [AWS Developer Center](https://aws.amazon.com/code/). 

------
#### [ Java ]

For instructions on creating and testing a working sample, see [Getting Started](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/getting-started.html) in the *AWS SDK for Java Developer Guide*.

```
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListVersionsRequest;
import com.amazonaws.services.s3.model.S3VersionSummary;
import com.amazonaws.services.s3.model.VersionListing;

public class ListKeysVersioningEnabledBucket {

    public static void main(String[] args) {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withCredentials(new ProfileCredentialsProvider())
                    .withRegion(clientRegion)
                    .build();

            // Retrieve the list of versions. If the bucket contains more versions
            // than the specified maximum number of results, Amazon S3 returns
            // one page of results per request.
            ListVersionsRequest request = new ListVersionsRequest()
                    .withBucketName(bucketName)
                    .withMaxResults(2);
            VersionListing versionListing = s3Client.listVersions(request);
            int numVersions = 0, numPages = 0;
            while (true) {
                numPages++;
                for (S3VersionSummary objectSummary : versionListing.getVersionSummaries()) {
                    System.out.printf("Retrieved object %s, version %s\n",
                            objectSummary.getKey(),
                            objectSummary.getVersionId());
                    numVersions++;
                }
                // Check whether there are more pages of versions to retrieve. If
                // there are, retrieve them. Otherwise, exit the loop.
                if (versionListing.isTruncated()) {
                    versionListing = s3Client.listNextBatchOfVersions(versionListing);
                } else {
                    break;
                }
            }
            System.out.println(numVersions + " object versions retrieved in " + numPages + " pages");
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

------
#### [ .NET ]

For information about setting up and running the code examples, see [Getting Started with the AWS SDK for .NET](https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/net-dg-setup.html) in the *AWS SDK for .NET Developer Guide*. 

```
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class ListObjectsVersioningEnabledBucketTest
    {
        static string bucketName = "*** bucket name ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;

        public static void Main(string[] args)
        {
            s3Client = new AmazonS3Client(bucketRegion);
            GetObjectListWithAllVersionsAsync().Wait();
        }

        static async Task GetObjectListWithAllVersionsAsync()
        {
            try
            {
                ListVersionsRequest request = new ListVersionsRequest()
                {
                    BucketName = bucketName,
                    // You can optionally specify a key name prefix in the request
                    // if you want a list of object versions for a specific object.

                    // For this example, the response is limited to two versions per page.
                    MaxKeys = 2
                };
                do
                {
                    ListVersionsResponse response = await s3Client.ListVersionsAsync(request); 
                    // Process response.
                    foreach (S3ObjectVersion entry in response.Versions)
                    {
                        Console.WriteLine("key = {0} size = {1}",
                            entry.Key, entry.Size);
                    }

                    // If response is truncated, set the marker to get the next 
                    // set of keys.
                    if (response.IsTruncated)
                    {
                        request.KeyMarker = response.NextKeyMarker;
                        request.VersionIdMarker = response.NextVersionIdMarker;
                    }
                    else
                    {
                        request = null;
                    }
                } while (request != null);
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
            }
        }
    }
}
```
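
------
#### [ Python ]

The AWS SDK for Python (Boto3) can handle this pagination for you through a paginator over the `ListObjectVersions` operation. This example isn't part of the original set, so treat it as a minimal sketch under the same two-results-per-page limit; the bucket name is a placeholder.

```
import boto3

s3_client = boto3.client("s3")
paginator = s3_client.get_paginator("list_object_versions")

num_versions = num_pages = 0
# PageSize caps how many results Amazon S3 returns per request. The paginator
# keeps requesting pages until the listing is no longer truncated.
for page in paginator.paginate(
    Bucket="amzn-s3-demo-bucket1", PaginationConfig={"PageSize": 2}
):
    num_pages += 1
    for version in page.get("Versions", []):
        print(f"Retrieved object {version['Key']}, version {version['VersionId']}")
        num_versions += 1

print(f"{num_versions} object versions retrieved in {num_pages} pages")
```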

------

## Using the REST API
<a name="ListingtheObjectsinaVersioningEnabledBucket"></a>

**Example — Listing all object versions in a bucket**  
To list all the versions of all the objects in a bucket, you use the `versions` subresource in a `GET Bucket` request. Amazon S3 returns a maximum of 1,000 results per request, and each object version counts as one result. Therefore, if a bucket contains two keys (for example, `photo.gif` and `picture.jpg`), and the first key has 990 versions and the second key has 400 versions, a single request would retrieve all 990 versions of `photo.gif` and only the most recent 10 versions of `picture.jpg`.  
Amazon S3 returns object versions in the order in which they were stored, with the most recently stored returned first.  
In a `GET Bucket` request, include the `versions` subresource.  

```
GET /?versions HTTP/1.1
Host: bucketName.s3.amazonaws.com
Date: Wed, 28 Oct 2009 22:32:00 +0000
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4/cRonhpaBX5sCYVf1bNRuU=
```

**Example — Retrieving all versions of a key**  
 To retrieve a subset of object versions, you use the request parameters for `GET Bucket`. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html).   

1. Set the `prefix` parameter to the key of the object that you want to retrieve.

1. Send a `GET Bucket` request using the `versions` subresource and `prefix`.

   `GET /?versions&prefix=objectName HTTP/1.1`

**Example — Retrieving objects using a prefix**  
The following example retrieves objects whose key is or begins with `myObject`.  

```
1. GET /?versions&prefix=myObject HTTP/1.1
2. Host: bucket.s3.amazonaws.com
3. Date: Wed, 28 Oct 2009 22:32:00 GMT
4. Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4/cRonhpaBX5sCYVf1bNRuU=
```
You can use the other request parameters to retrieve a subset of all versions of the object. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html) in the *Amazon Simple Storage Service API Reference*.

**Example — Retrieving a listing of additional objects if the response is truncated**  
If the number of objects that could be returned in a `GET` request exceeds the value of `max-keys`, the response contains `<isTruncated>true</isTruncated>`, and includes the first key (in `NextKeyMarker`) and the first version ID (in `NextVersionIdMarker`) that satisfy the request, but were not returned. You use those returned values as the starting position in a subsequent request to retrieve the additional objects that satisfy the `GET` request.   
Use the following process to retrieve additional objects that satisfy the original `GET Bucket versions` request from a bucket. For more information about `key-marker`, `version-id-marker`, `NextKeyMarker`, and `NextVersionIdMarker`, see [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html) in the *Amazon Simple Storage Service API Reference*.  
To retrieve the next page of results that satisfy the original `GET` request, do the following:  
+ Set the value of `key-marker` to the key returned in `NextKeyMarker` in the previous response.
+ Set the value of `version-id-marker` to the version ID returned in `NextVersionIdMarker` in the previous response.
+ Send a `GET Bucket versions` request using `key-marker` and `version-id-marker`.

**Example — Retrieving objects starting with a specified key and version ID**  

```
1. GET /?versions&key-marker=myObject&version-id-marker=298459348571 HTTP/1.1
2. Host: bucket.s3.amazonaws.com
3. Date: Wed, 28 Oct 2009 22:32:00 GMT
4. Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4/cRonhpaBX5sCYVf1bNRuU=
```

## Using the AWS CLI
<a name="list-obj-version-enabled-bucket-cli"></a>

The following command returns metadata about all versions of the objects in a bucket. 

```
aws s3api list-object-versions --bucket amzn-s3-demo-bucket1
```

For more information about `list-object-versions`, see [https://docs.aws.amazon.com/cli/latest/reference/s3api/list-object-versions.html](https://docs.aws.amazon.com/cli/latest/reference/s3api/list-object-versions.html) in the *AWS CLI Command Reference*.

# Retrieving object versions from a versioning-enabled bucket
<a name="RetrievingObjectVersions"></a>

Versioning in Amazon S3 is a way of keeping multiple variants of an object in the same bucket. A simple `GET` request retrieves the current version of an object. The following figure shows how `GET` returns the current version of the object, `photo.gif`.

![\[Illustration that shows how GET returns the current version of the object.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_GET_NoVersionID.png)


To retrieve a specific version, you have to specify its version ID. The following figure shows that a `GET versionId` request retrieves the specified version of the object (not necessarily the current one).

![\[Illustration that shows how a GET versionId request retrieves the specified version of the object.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_GET_Versioned.png)


You can retrieve object versions in Amazon S3 using the console, AWS SDKs, or REST API.

**Note**  
 To access object versions older than 300 versions, you must use the AWS CLI or the object's URL.

## Using the S3 console
<a name="retrieving-object-versions"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the **Buckets** list, choose the name of the bucket that contains the object.

1. In the **Objects** list, choose the name of the object.

1. Choose **Versions**.

   Amazon S3 shows all the versions for the object.

1. Select the check box next to the **Version ID** for the versions that you want to retrieve.

1. Choose **Actions**, choose **Download**, and save the object.

You also can view, download, and delete object versions in the object overview panel. For more information, see [Viewing object properties in the Amazon S3 console](view-object-properties.md).

**Important**  
You can undelete an object only if it was deleted as the latest (current) version. You can't undelete a previous version of an object that was deleted. For more information, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).

## Using the AWS SDKs
<a name="retrieve-obj-version-sdks"></a>

The examples for downloading objects from nonversioned and versioning-enabled buckets are the same. However, for versioning-enabled buckets, Amazon S3 assigns a version ID to each object version. For nonversioned buckets, the version ID is null.

For examples of downloading objects using AWS SDKs for Java, .NET, and PHP, see [Downloading objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/download-objects.html).

For examples of listing the versions of objects using the AWS SDKs for .NET and Rust, see [List the version of objects in an Amazon S3 bucket](https://docs.aws.amazon.com/code-library/latest/ug/s3_example_s3_ListObjectVersions_section.html).
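
As a quick illustration, with the AWS SDK for Python (Boto3) you retrieve a specific version by passing `VersionId` to `get_object`. The following is a minimal sketch; the bucket name, key, and version ID are placeholders.

```
import boto3

s3_client = boto3.client("s3")

# Omitting VersionId would return the current version of the object instead.
response = s3_client.get_object(
    Bucket="amzn-s3-demo-bucket1",
    Key="my-image.jpg",
    VersionId="L4kqtJlcpXroDTDmpUMLUo",
)
data = response["Body"].read()
```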

## Using the REST API
<a name="retrieve-obj-version-rest"></a>

**To retrieve a specific object version**

1. Set `versionId` to the ID of the version of the object that you want to retrieve.

1. Send a `GET Object versionId` request.

**Example — Retrieving a versioned object**  
The following request retrieves version `L4kqtJlcpXroDTDmpUMLUo` of `my-image.jpg`.  

```
GET /my-image.jpg?versionId=L4kqtJlcpXroDTDmpUMLUo HTTP/1.1
Host: bucket.s3.amazonaws.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4/cRonhpaBX5sCYVf1bNRuU=
```

You can retrieve just the metadata of an object (not the content). For information, see [Retrieving the metadata of an object version](RetMetaOfObjVersion.md).

For information about restoring a previous object version, see [Restoring previous versions](RestoringPreviousVersions.md).

# Retrieving the metadata of an object version
<a name="RetMetaOfObjVersion"></a>

If you only want to retrieve the metadata of an object (and not its content), you use the `HEAD` operation. By default, you get the metadata of the most recent version. To retrieve the metadata of a specific object version, you specify its version ID.

**To retrieve the metadata of an object version**

1. Set `versionId` to the ID of the version of the object whose metadata you want to retrieve.

1. Send a `HEAD Object versionId` request.

**Example — Retrieving the metadata of a versioned object**  
The following request retrieves the metadata of version `3HL4kqCxf3vjVBH40Nrjfkd` of `my-image.jpg`.  

```
HEAD /my-image.jpg?versionId=3HL4kqCxf3vjVBH40Nrjfkd HTTP/1.1
Host: bucket.s3.amazonaws.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4/cRonhpaBX5sCYVf1bNRuU=
```

The following shows a sample response.

```
HTTP/1.1 200 OK
x-amz-id-2: ef8yU9AS1ed4OpIszj7UDNEHGran
x-amz-request-id: 318BC8BC143432E5
x-amz-version-id: 3HL4kqtJlcpXroDTDmjVBH40Nrjfkd
Date: Wed, 28 Oct 2009 22:32:00 GMT
Last-Modified: Sun, 1 Jan 2006 12:00:00 GMT
ETag: "fba9dede5f27731c9771645a39863328"
Content-Length: 434234
Content-Type: text/plain
Connection: close
Server: AmazonS3
```
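
With the AWS SDK for Python (Boto3), the equivalent of a `HEAD Object versionId` request is `head_object` with a `VersionId` argument. The following is a minimal sketch; the bucket name, key, and version ID are placeholders.

```
import boto3

s3_client = boto3.client("s3")

# head_object returns only metadata; there is no response body to read.
response = s3_client.head_object(
    Bucket="amzn-s3-demo-bucket1",
    Key="my-image.jpg",
    VersionId="3HL4kqCxf3vjVBH40Nrjfkd",
)
print(response["ContentLength"], response["LastModified"], response["ETag"])
```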

# Restoring previous versions
<a name="RestoringPreviousVersions"></a>

You can use versioning to retrieve previous versions of an object. There are two approaches to doing so:
+ Copy a previous version of the object into the same bucket.

  The copied object becomes the current version of that object and all object versions are preserved.
+ Permanently delete the current version of the object.

  When you delete the current object version, you effectively turn the previous version into the current version of that object.

Because all object versions are preserved, you can make any earlier version the current version by copying a specific version of the object into the same bucket. In the following figure, the source object (ID = 111111) is copied into the same bucket. Amazon S3 supplies a new ID (88778877) and it becomes the current version of the object. So, the bucket has both the original object version (111111) and its copy (88778877). For more information about getting a previous version and then uploading it to make it the current version, see [Retrieving object versions from a versioning-enabled bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/RetrievingObjectVersions.html) and [Uploading objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/upload-objects.html).
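
To sketch the copy approach with the AWS SDK for Python (Boto3): passing a `VersionId` inside `CopySource` copies that specific version onto the same key, and Amazon S3 assigns the copy a new version ID, which becomes the current version. The bucket name, key, and version ID below are placeholders.

```
import boto3

s3_client = boto3.client("s3")

# Copy a specific noncurrent version of photo.gif onto the same key.
response = s3_client.copy_object(
    Bucket="amzn-s3-demo-bucket1",
    Key="photo.gif",
    CopySource={
        "Bucket": "amzn-s3-demo-bucket1",
        "Key": "photo.gif",
        "VersionId": "111111",
    },
)
# The copy receives a new version ID and is now the current version;
# the original version 111111 is preserved.
print(response["VersionId"])
```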

![\[Illustration that shows copying a specific version of an object into the same bucket to make it the current version.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_COPY2.png)


A subsequent `GET` retrieves version 88778877.

The following figure shows how deleting the current version (121212) of an object leaves the previous version (111111) as the current object. For more information about deleting an object, see [Deleting a single object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-objects.html).

![\[Illustration that shows deleting the current version of an object leaves the previous version as the current object.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_COPY_delete2.png)


A subsequent `GET` retrieves version 111111.

**Note**  
To restore object versions in batches, you can [use the `CopyObject` operation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-copy-object.html). The `CopyObject` operation copies each object that is specified in the manifest. However, be aware that objects aren't necessarily copied in the same order as they appear in the manifest. For versioned buckets, if preserving current/non-current version order is important, you should copy all non-current versions first. Then, after the first job is complete, copy the current versions in a subsequent job.

## To restore previous object versions
<a name="restoring-obj-version-version-enabled-bucket-examples"></a>

For more guidance on restoring deleted objects, see [How can I retrieve an Amazon S3 object that was deleted in a versioning-enabled bucket?](https://repost.aws/knowledge-center/s3-undelete-configuration) in the AWS re:Post Knowledge Center.

### Using the S3 console
<a name="retrieving-object-versions"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the **Buckets** list, choose the name of the bucket that contains the object.

1. In the **Objects** list, choose the name of the object.

1. Choose **Versions**.

   Amazon S3 shows all the versions for the object.

1. Select the check box next to the **Version ID** for the versions that you want to retrieve.

1. Choose **Actions**, choose **Download**, and save the object.

You also can view, download, and delete object versions in the object overview panel. For more information, see [Viewing object properties in the Amazon S3 console](view-object-properties.md).

**Important**  
You can undelete an object only if it was deleted as the latest (current) version. You can't undelete a previous version of an object that was deleted. For more information, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).

### Using the AWS SDKs
<a name="restoring-obj-version-version-enabled-bucket-sdks"></a>

For information about using other AWS SDKs, see the [AWS Developer Center](https://aws.amazon.com/code/). 

------
#### [ Python ]

The following Python code example restores a versioned object's previous version by deleting all versions that occurred after the specified rollback version.

```
def rollback_object(bucket, object_key, version_id):
    """
    Rolls back an object to an earlier version by deleting all versions that
    occurred after the specified rollback version.

    Usage is shown in the usage_demo_single_object function at the end of this module.

    :param bucket: The bucket that holds the object to roll back.
    :param object_key: The object to roll back.
    :param version_id: The version ID to roll back to.
    """
    # Versions must be sorted by last_modified date because delete markers are
    # at the end of the list even when they are interspersed in time.
    versions = sorted(
        bucket.object_versions.filter(Prefix=object_key),
        key=attrgetter("last_modified"),
        reverse=True,
    )

    logger.debug(
        "Got versions:\n%s",
        "\n".join(
            [
                f"\t{version.version_id}, last modified {version.last_modified}"
                for version in versions
            ]
        ),
    )

    if version_id in [ver.version_id for ver in versions]:
        print(f"Rolling back to version {version_id}")
        for version in versions:
            if version.version_id != version_id:
                version.delete()
                print(f"Deleted version {version.version_id}")
            else:
                break

        print(f"Active version is now {bucket.Object(object_key).version_id}")
    else:
        raise KeyError(
            f"{version_id} was not found in the list of versions for " f"{object_key}."
        )
```

------

# Deleting object versions from a versioning-enabled bucket
<a name="DeletingObjectVersions"></a>

You can delete object versions from Amazon S3 buckets whenever you want. You can also define lifecycle configuration rules for objects that have a well-defined lifecycle to request Amazon S3 to expire current object versions or permanently remove noncurrent object versions. When your bucket has versioning enabled or the versioning is suspended, the lifecycle configuration actions work as follows:
+ The `Expiration` action applies to the current object version. Instead of deleting the current object version, Amazon S3 retains the current version as a noncurrent version by adding a *delete marker*, which then becomes the current version.
+ The `NoncurrentVersionExpiration` action applies to noncurrent object versions, and Amazon S3 permanently removes these object versions. You cannot recover permanently removed objects.

For more information about S3 Lifecycle, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md) and [Examples of S3 Lifecycle configurations](lifecycle-configuration-examples.md).

To see how many current and noncurrent object versions your buckets have, you can use Amazon S3 Storage Lens metrics. S3 Storage Lens is a cloud-storage analytics feature that you can use to gain organization-wide visibility into object-storage usage and activity. For more information, see [Using S3 Storage Lens to optimize your storage costs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-lens-optimize-storage.html?icmpid=docs_s3_user_guide_DeletingObjectVersions.html). For a complete list of metrics, see [S3 Storage Lens metrics glossary](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens_metrics_glossary.html?icmpid=docs_s3_user_guide_replication.html).

**Note**  
 Normal Amazon S3 rates apply for every version of an object that is stored and transferred, including noncurrent object versions. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). 

## Delete request use cases
<a name="delete-request-use-cases"></a>

A `DELETE` request has the following use cases:
+ When versioning is enabled, a simple `DELETE` cannot permanently delete an object. (A simple `DELETE` request is a request that doesn't specify a version ID.) Instead, Amazon S3 inserts a delete marker in the bucket, and that marker becomes the current version of the object with a new ID. 

  When you try to `GET` an object whose current version is a delete marker, Amazon S3 behaves as though the object has been deleted (even though it has not been erased) and returns a 404 error. For more information, see [Working with delete markers](DeleteMarker.md).

  The following figure shows that a simple `DELETE` does not actually remove the specified object. Instead, Amazon S3 inserts a delete marker.  
![\[Illustration that shows a delete marker insertion.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_DELETE_versioningEnabled.png)
+ To delete versioned objects permanently, you must use `DELETE Object versionId`. (A code sketch after this list shows both request types.)

  The following figure shows that deleting a specified object version permanently removes that object.  
![\[Diagram that shows how DELETE Object versionId permanently deletes a specific object version.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_DELETE_versioningEnabled2.png)
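
The following AWS SDK for Python (Boto3) sketch illustrates both request types against a versioning-enabled bucket; the bucket name, key, and version ID are placeholders.

```
import boto3

s3_client = boto3.client("s3")

# A simple DELETE (no VersionId) creates a delete marker.
response = s3_client.delete_object(Bucket="amzn-s3-demo-bucket1", Key="photo.gif")
print(response["DeleteMarker"], response["VersionId"])  # True, plus the marker's version ID

# A DELETE that specifies a VersionId permanently removes that version.
s3_client.delete_object(
    Bucket="amzn-s3-demo-bucket1",
    Key="photo.gif",
    VersionId="UIORUnfnd89493jJFJ",
)
```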

## To delete object versions
<a name="delete-object-version"></a>

You can delete object versions in Amazon S3 using the console, AWS SDKs, the REST API, or the AWS Command Line Interface.

### Using the S3 console
<a name="deleting-object-versions"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the **Buckets** list, choose the name of the bucket that contains the object.

1. In the **Objects** list, choose the name of the object.

1. Choose **Versions**.

   Amazon S3 shows all the versions for the object.

1. Select the check box next to the **Version ID** for the versions that you want to permanently delete.

1. Choose **Delete**.

1. In **Permanently delete objects?**, enter **permanently delete**.
**Warning**  
When you permanently delete an object version, the action cannot be undone.

1. Choose **Delete objects**.

   Amazon S3 deletes the object version.

### Using the AWS SDKs
<a name="delete-obj-version-version-enabled-bucket-sdks"></a>

For examples of deleting objects using the AWS SDKs for Java, .NET, and PHP, see [Deleting Amazon S3 objects](DeletingObjects.md). The examples for deleting objects in nonversioned and versioning-enabled buckets are the same. However, for versioning-enabled buckets, Amazon S3 assigns a version ID to each object version. For nonversioned buckets, the version ID is null. 

For information about using other AWS SDKs, see the [AWS Developer Center](https://aws.amazon.com/code/). 

------
#### [ Python ]

The following Python code example permanently deletes a versioned object by deleting all of its versions.

```
def permanently_delete_object(bucket, object_key):
    """
    Permanently deletes a versioned object by deleting all of its versions.

    Usage is shown in the usage_demo_single_object function at the end of this module.

    :param bucket: The bucket that contains the object.
    :param object_key: The object to delete.
    """
    try:
        bucket.object_versions.filter(Prefix=object_key).delete()
        logger.info("Permanently deleted all versions of object %s.", object_key)
    except ClientError:
        logger.exception("Couldn't delete all versions of %s.", object_key)
        raise
```

------

### Using the REST API
<a name="delete-obj-version-enabled-bucket-rest"></a>

**To delete a specific version of an object**
+ In a `DELETE`, specify a version ID.

**Example — Deleting a specific version**  
The following example deletes version `UIORUnfnd89493jJFJ` of `photo.gif`.  

```
DELETE /photo.gif?versionId=UIORUnfnd89493jJFJ HTTP/1.1
Host: bucket.s3.amazonaws.com
Date: Wed, 12 Oct 2009 17:50:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:xQE0diMbLRepdf3YB+FIEXAMPLE=
Content-Type: text/plain
Content-Length: 0
```

### Using the AWS CLI
<a name="delete-obj-version-enabled-bucket-cli"></a>

The following command deletes an object named `test.txt` from a bucket named `amzn-s3-demo-bucket1`. To remove a specific version of an object, you must be the bucket owner, and you must specify the object's version ID by using the `--version-id` parameter.

```
aws s3api delete-object --bucket amzn-s3-demo-bucket1 --key test.txt --version-id versionID
```

For more information about `delete-object`, see [https://docs.aws.amazon.com/cli/latest/reference/s3api/delete-object.html](https://docs.aws.amazon.com/cli/latest/reference/s3api/delete-object.html) in the *AWS CLI Command Reference*.

For more information about deleting object versions, see the following topics:
+ [Working with delete markers](DeleteMarker.md)
+ [Removing delete markers to make an older version current](ManagingDelMarkers.md#RemDelMarker)
+ [Deleting an object from an MFA delete-enabled bucket](UsingMFADelete.md)

# Working with delete markers
<a name="DeleteMarker"></a>

A *delete marker* in Amazon S3 is a placeholder (or marker) for a versioned object that was specified in a simple `DELETE` request. A simple `DELETE` request is a request that doesn't specify a version ID. Because the object is in a versioning-enabled bucket, the object is not deleted. But the delete marker makes Amazon S3 behave as if the object is deleted. You can use an Amazon S3 API `DELETE` call on a delete marker. To do this, you must make the `DELETE` request by using an AWS Identity and Access Management (IAM) user or role with the appropriate permissions.

A delete marker has a *key name* (or *key*) and version ID like any other object. However, a delete marker differs from other objects in the following ways:
+ A delete marker doesn't have data associated with it.
+ A delete marker isn't associated with an access control list (ACL) value.
+ If you issue a `GET` request for a delete marker, the `GET` request doesn't retrieve anything because a delete marker has no data. Specifically, when your `GET` request doesn't specify a `versionId`, you get a 404 (Not Found) error.

Delete markers accrue a minimal charge for storage in Amazon S3. The storage size of a delete marker is equal to the size of the key name of the delete marker. A key name is a sequence of Unicode characters. The UTF-8 encoding for the key name adds 1-4 bytes of storage to your bucket for each character in the name. Delete markers are stored in the S3 Standard storage class. 

If you want to find out how many delete markers you have and what storage class they're stored in, you can use Amazon S3 Storage Lens. For more information, see [Monitoring your storage activity and usage with Amazon S3 Storage Lens](storage_lens.md) and [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).

For more information about key names, see [Naming Amazon S3 objects](object-keys.md). For information about deleting a delete marker, see [Managing delete markers](ManagingDelMarkers.md). 

Only Amazon S3 can create a delete marker, and it does so whenever you send a `DeleteObject` request on an object in a versioning-enabled or suspended bucket. The object specified in the `DELETE` request is not actually deleted. Instead, the delete marker becomes the current version of the object. The object's key name (or key) becomes the key of the delete marker. 

When you get an object without specifying a `versionId` in your request, if its current version is a delete marker, Amazon S3 responds with the following:
+ A 404 (Not Found) error
+ A response header, `x-amz-delete-marker: true`

When you get an object by specifying a `versionId` in your request, if the specified version is a delete marker, Amazon S3 responds with the following:
+ A 405 (Method Not Allowed) error
+ A response header, `x-amz-delete-marker: true`
+ A response header, `Last-Modified: timestamp` (only when using the [HeadObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html) or [GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html) API operations)

The `x-amz-delete-marker: true` response header tells you that the object accessed was a delete marker. This response header never returns `false`, because when the value is `false`, the current or specified version of the object is not a delete marker.

The `Last-Modified` response header provides the creation time of the delete marker.

The following figure shows how a `GetObject` API call on an object whose current version is a delete marker responds with a 404 (Not Found) error and the response header includes `x-amz-delete-marker: true`.

![\[Illustration that shows a GetObject call for a delete marker returning a 404 (Not Found) error.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_DELETE_NoObjectFound.png)


If you make a `GetObject` call on an object by specifying a `versionId` in your request, and if the specified version is a delete marker, Amazon S3 responds with a 405 (Method Not Allowed) error and the response headers include `x-amz-delete-marker: true` and `Last-Modified: timestamp`.

![\[Illustration that shows a GetObject call for a delete marker returning a 405 (Method Not Allowed) error.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_DELETE_NoObjectFound_405.png)


Delete markers remain among your object versions even if the object is overwritten. The only way to list delete markers (and other versions of an object) is by using a [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html) request. You can make this request in the AWS Management Console by listing your objects in a general purpose bucket and selecting **Show versions**. For more information, see [Listing objects in a versioning-enabled bucket](list-obj-version-enabled-bucket.md).

The following figure shows that a [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html) or [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html) request doesn't return objects whose current version is a delete marker.

![\[Illustration that shows how a ListObjectsV2 or ListObjects call doesn't return any delete markers.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_GETBucketwithDeleteMarkers.png)


# Managing delete markers
<a name="ManagingDelMarkers"></a>

## Configuring lifecycle to clean up expired delete markers automatically
<a name="LifecycleDelMarker"></a>

An expired object delete marker is one where all object versions are deleted and only a single delete marker remains. If the lifecycle configuration is set to delete current versions, or the `ExpiredObjectDeleteMarker` action is explicitly set, Amazon S3 removes the expired object’s delete marker. For an example, see [Removing expired object delete markers in a versioning-enabled bucket](lifecycle-configuration-examples.md#lifecycle-config-conceptual-ex7). 
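
As an illustration, a Boto3 lifecycle rule that cleans up expired object delete markers might look like the following sketch; the bucket name, rule ID, and retention period are placeholders. Note that `ExpiredObjectDeleteMarker` can't be combined with a `Days` or `Date` value inside the same `Expiration` element.

```
import boto3

s3_client = boto3.client("s3")

s3_client.put_bucket_lifecycle_configuration(
    Bucket="amzn-s3-demo-bucket1",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "clean-up-expired-delete-markers",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # Apply to the whole bucket.
                # Remove delete markers once no object versions remain behind them.
                "Expiration": {"ExpiredObjectDeleteMarker": True},
                # Permanently remove noncurrent versions after 30 days.
                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            }
        ]
    },
)
```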

## Removing delete markers to make an older version current
<a name="RemDelMarker"></a>

When you delete an object in a versioning-enabled bucket, all versions remain in the bucket, and Amazon S3 creates a delete marker for the object. To undelete the object, you must delete this delete marker. For more information about versioning and delete markers, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).

To delete a delete marker permanently, you must include its version ID in a `DeleteObject versionId` request. The following figure shows how a `DeleteObject versionId` request permanently removes a delete marker.

![\[Illustration that shows a delete marker deletion using its version ID.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_DELETE_deleteMarkerVersioned.png)


The effect of removing the delete marker is that a simple `GET` request will now retrieve the current version (ID 121212) of the object. 

**Note**  
If you use a `DeleteObject` request where the current version is a delete marker (without specifying the version ID of the delete marker), Amazon S3 doesn't delete the delete marker, but instead inserts another delete marker.

To delete a delete marker that has a `null` version ID, you must pass `null` as the version ID in the `DeleteObject` request. The following figure shows how a simple `DeleteObject` request made without a version ID, where the current version is a delete marker, removes nothing and instead adds another delete marker with a unique version ID (7498372).

![\[Illustration that shows a delete marker deletion using a NULL version ID.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_DELETE_deleteMarker.png)


## Using the S3 console
<a name="undelete-objects"></a>

Use the following steps to recover deleted objects (other than folders) from your S3 bucket, including objects that are within folders. 

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the **Buckets** list, choose the name of the bucket that you want.

1. To see a list of the versions of the objects in the bucket, turn on the **Show versions** switch. You'll be able to see the delete markers for deleted objects.

1. To undelete an object, you must delete the delete marker. Select the check box next to the **delete marker** of the object to recover, and then choose **Delete**.

1. Confirm the deletion on the **Delete objects** page.

   1. For **Permanently delete objects?** enter **permanently delete**.

   1. Choose **Delete objects**.

**Note**  
You can't use the Amazon S3 console to undelete folders. You must use the AWS CLI or SDK. For examples, see [How can I retrieve an Amazon S3 object that was deleted in a versioning-enabled bucket?](https://aws.amazon.com/premiumsupport/knowledge-center/s3-undelete-configuration/) in the AWS Knowledge Center.

## Using the REST API
<a name="delete-marker-rest-api"></a>

**To permanently remove a delete marker**

1. Set `versionId` to the ID of the delete marker that you want to remove.

1. Send a `DELETE Object versionId` request.

**Example — Removing a delete marker**  
The following example removes the delete marker for `photo.gif` version 4857693.  

```
DELETE /photo.gif?versionId=4857693 HTTP/1.1
Host: bucket.s3.amazonaws.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4/cRonhpaBX5sCYVf1bNRuU=
```

When you delete a delete marker, Amazon S3 includes the following in the response.

```
204 NoContent
x-amz-version-id: versionID
x-amz-delete-marker: true
```

## Using the AWS SDKs
<a name="remove-delete-marker-examples-sdk"></a>

For information about using other AWS SDKs, see the [AWS Developer Center](https://aws.amazon.com/code/).

------
#### [ Python ]

The following Python code example demonstrates how to remove a delete marker from an object, which makes the most recent noncurrent version the current version of the object.

```
def revive_object(bucket, object_key):
    """
    Revives a versioned object that was deleted by removing the object's active
    delete marker.
    A versioned object presents as deleted when its latest version is a delete marker.
    By removing the delete marker, we make the previous version the latest version
    and the object then presents as *not* deleted.

    :param bucket: The bucket that contains the object.
    :param object_key: The object to revive.
    """
    # Get the latest version for the object.
    response = s3.meta.client.list_object_versions(
        Bucket=bucket.name, Prefix=object_key, MaxKeys=1
    )

    if "DeleteMarkers" in response:
        latest_version = response["DeleteMarkers"][0]
        if latest_version["IsLatest"]:
            logger.info(
                "Object %s was indeed deleted on %s. Let's revive it.",
                object_key,
                latest_version["LastModified"],
            )
            obj = bucket.Object(object_key)
            obj.Version(latest_version["VersionId"]).delete()
            logger.info(
                "Revived %s, active version is now %s with body '%s'",
                object_key,
                obj.version_id,
                obj.get()["Body"].read(),
            )
        else:
            logger.warning(
                "Delete marker is not the latest version for %s!", object_key
            )
    elif "Versions" in response:
        logger.warning("Got an active version for %s, nothing to do.", object_key)
    else:
        logger.error("Couldn't get any version info for %s.", object_key)
```

------

# Deleting an object from an MFA delete-enabled bucket
<a name="UsingMFADelete"></a>

When you configure MFA delete, only the root user can permanently delete object versions or change the versioning configuration on your S3 bucket. You must use an MFA device to authenticate the root user to perform the delete action.

If a bucket's versioning configuration is MFA delete enabled, the bucket owner must include the `x-amz-mfa` request header in requests to permanently delete an object version or change the versioning state of the bucket. Requests that include `x-amz-mfa` must use HTTPS.

The header's value is the concatenation of your authentication device's serial number, a space, and the authentication code displayed on it. If you don't include this request header, the request fails.

When you use the AWS CLI, include the same information as the value of the `--mfa` parameter.

For more information about authentication devices, see [Multi-factor Authentication](https://aws.amazon.com/iam/details/mfa/).

For more information about enabling MFA delete, see [Configuring MFA delete](MultiFactorAuthenticationDelete.md).

**Note**  
You can't use the AWS Management Console to delete an object in a versioning-enabled bucket that has MFA delete enabled.

## Using the AWS CLI
<a name="MFADeleteCLI"></a>

To delete an object in a versioning-enabled bucket that has MFA delete enabled, use the following command, and replace the `user input placeholders` with your own information.

```
aws s3api delete-object --bucket amzn-s3-demo-bucket --key OBJECT-KEY --version-id "VERSION ID" --mfa "MFA_DEVICE_SERIAL_NUMBER MFA_DEVICE_CODE"
```
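
The AWS SDKs accept the same MFA information as a request parameter. The following Python (boto3) sketch shows roughly how this might look; the bucket name, key, version ID, device serial number, and authentication code are placeholders.

```
import boto3

s3 = boto3.client("s3")

# The MFA value concatenates the device serial number, a space, and the
# current authentication code. These values are placeholders.
s3.delete_object(
    Bucket="amzn-s3-demo-bucket",
    Key="my-image.jpg",
    VersionId="3HL4kqCxf3vjVBH40Nrjfkd",
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
)
```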

## Using the REST API
<a name="MFADeleteAPI"></a>

The following example deletes `my-image.jpg` (with the specified version), which is in a bucket configured with MFA delete enabled. 

For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectDELETE.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectDELETE.html) in the *Amazon Simple Storage Service API Reference*.

```
DELETE /my-image.jpg?versionId=3HL4kqCxf3vjVBH40Nrjfkd HTTP/1.1
Host: bucketName.s3.amazonaws.com
x-amz-mfa: 20899872 301749
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4/cRonhpaBX5sCYVf1bNRuU=
```

# Configuring versioned object permissions
<a name="VersionedObjectPermissionsandACLs"></a>

Permissions for objects in Amazon S3 are set at the version level. Each version has its own object owner. The AWS account that creates the object version is the owner. So, you can set different permissions for different versions of the same object. To do so, you must specify the version ID of the object whose permissions you want to set in a `PUT Object versionId acl` request. For a detailed description and instructions on using ACLs, see [Identity and Access Management for Amazon S3](security-iam.md).

**Example — Setting permissions for an object version**  
The following request sets the permission of the grantee with canonical user ID *a9a7b886d6fd24a52fe8ca5bef65f89a64e0193f23000e241bf9b1c61be666e9* to `FULL_CONTROL` on the key `my-image.jpg`, version ID `3HL4kqtJvjVBH40Nrjfkd`.  

```
PUT /my-image.jpg?acl&versionId=3HL4kqtJvjVBH40Nrjfkd HTTP/1.1
Host: bucket.s3.amazonaws.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4/cRonhpaBX5sCYVf1bNRuU=
Content-Length: 124

<AccessControlPolicy>
  <Owner>
    <ID>75cc57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>a9a7b886d6fd24a52fe8ca5bef65f89a64e0193f23000e241bf9b1c61be666e9</ID>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>
```

Likewise, to get the permissions of a specific object version, you must specify its version ID in a `GET Object versionId acl` request. You need to include the version ID because, by default, `GET Object acl` returns the permissions of the current version of the object. 

**Example — Retrieving the permissions for a specified object version**  
In the following example, Amazon S3 returns the permissions for the key `my-image.jpg`, version ID `DVBH40Nr8X8gUMLUo`.  

```
GET /my-image.jpg?versionId=DVBH40Nr8X8gUMLUo&acl HTTP/1.1
Host: bucket.s3.amazonaws.com
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4/cRonhpaBX5sCYVf1bNRuU=
```

For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGETacl.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGETacl.html) in the *Amazon Simple Storage Service API Reference*.
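
The AWS SDKs expose the same version-aware ACL operations. The following Python (boto3) sketch retrieves the permissions for a specific object version; the bucket name, key, and version ID are placeholders.

```
import boto3

s3 = boto3.client("s3")

# Without VersionId, get_object_acl returns the current version's ACL.
acl = s3.get_object_acl(
    Bucket="amzn-s3-demo-bucket",
    Key="my-image.jpg",
    VersionId="DVBH40Nr8X8gUMLUo",
)
for grant in acl["Grants"]:
    print(grant["Grantee"], grant["Permission"])
```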

# Working with objects in a versioning-suspended bucket
<a name="VersionSuspendedBehavior"></a>

In Amazon S3, you can suspend versioning to stop accruing new versions of the same object in a bucket. You might do this because you only want a single version of an object in a bucket. Or, you might not want to accrue charges for multiple versions. 

When you suspend versioning, existing objects in your bucket do not change. What changes is how Amazon S3 handles objects in future requests. The topics in this section explain various object operations in a versioning-suspended bucket, including adding, retrieving, and deleting objects.

For more information about S3 Versioning, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md). For more information about retrieving object versions, see [Retrieving object versions from a versioning-enabled bucket](RetrievingObjectVersions.md).

**Topics**
+ [Adding objects to versioning-suspended buckets](AddingObjectstoVersionSuspendedBuckets.md)
+ [Retrieving objects from versioning-suspended buckets](RetrievingObjectsfromVersioningSuspendedBuckets.md)
+ [Deleting objects from versioning-suspended buckets](DeletingObjectsfromVersioningSuspendedBuckets.md)

# Adding objects to versioning-suspended buckets
<a name="AddingObjectstoVersionSuspendedBuckets"></a>

When you add an object to a versioning-suspended bucket, Amazon S3 creates the object with a `null` version ID and overwrites any existing object version that also has a `null` version ID.

After you suspend versioning on a bucket, Amazon S3 automatically adds a `null` version ID to every object that is subsequently stored in the bucket (by using `PUT`, `POST`, or `CopyObject`).

The following figure shows how Amazon S3 adds the version ID of `null` to an object when it is added to a version-suspended bucket.

![\[Amazon S3 adding the version ID of null to an object graphic.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_PUT_versionSuspended.png)


If a null version is already in the bucket and you add another object with the same key, the added object overwrites the original null version. 

If there are versioned objects in the bucket, the version you `PUT` becomes the current version of the object. The following figure shows how adding an object to a bucket that contains versioned objects does not overwrite the object already in the bucket. 

In this case, version 111111 was already in the bucket. Amazon S3 attaches a version ID of null to the object being added and stores it in the bucket. Version 111111 is not overwritten.

![\[Amazon S3 adding the version ID of null to an object without overwriting version 111111 graphic.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_PUT_versionSuspended3.png)


If a null version already exists in a bucket, the null version is overwritten, as shown in the following figure.

![\[Amazon S3 adding the version ID of null to an object while overwriting the original contents graphic.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_PUT_versionSuspended4.png)


Although the key and version ID (`null`) of the null version are the same before and after the `PUT`, the contents of the null version originally stored in the bucket are replaced by the contents of the object `PUT` into the bucket.
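
The following Python (boto3) sketch illustrates this behavior; the bucket name, key, and object contents are placeholders. After versioning is suspended, a `PUT` creates or overwrites the null version.

```
import boto3

s3 = boto3.client("s3")

# Suspend versioning; later PUTs create or overwrite the null version.
s3.put_bucket_versioning(
    Bucket="amzn-s3-demo-bucket",
    VersioningConfiguration={"Status": "Suspended"},
)

response = s3.put_object(
    Bucket="amzn-s3-demo-bucket", Key="photo.gif", Body=b"new contents"
)
# In a versioning-suspended bucket, the stored version ID is null, so the
# response typically reports "null" (or no version ID at all).
print(response.get("VersionId"))
```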

# Retrieving objects from versioning-suspended buckets
<a name="RetrievingObjectsfromVersioningSuspendedBuckets"></a>

A `GET Object` request returns the current version of an object whether you've enabled versioning on a bucket or not. The following figure shows how a simple `GET` returns the current version of an object.

![\[Illustration that shows how a simple GET returns the current version of an object.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_GET_suspended.png)


# Deleting objects from versioning-suspended buckets
<a name="DeletingObjectsfromVersioningSuspendedBuckets"></a>

Deleting an object from a versioning-suspended bucket removes only the object version that has a `null` version ID, if one exists.

If versioning is suspended for a bucket, a `DELETE` request:
+ Can only remove an object whose version ID is `null`.
+ Doesn't remove anything if there isn't a null version of the object in the bucket.
+ Inserts a delete marker into the bucket.

If bucket versioning is suspended, a simple `DELETE` request removes the object version that has a null `versionId`, and Amazon S3 inserts a delete marker with a `null` version ID in its place. The delete marker becomes the current version of the object. The following figure shows this behavior.

![\[Illustration that shows a simple delete to remove an object with a NULL version ID.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_DELETE_versioningSuspended.png)


Because a delete marker doesn't contain any content, you lose the content of the `null` version when a delete marker replaces it. To permanently delete an object version that has a `versionId`, you must include the object's `versionId` in the request.

The following figure shows a bucket that doesn't have a null version. In this case, the `DELETE` removes nothing. Instead, Amazon S3 just inserts a delete marker.

![\[Illustration that shows a delete marker insertion.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_DELETE_versioningSuspendedNoNull.png)


Even in a versioning-suspended bucket, the bucket owner can permanently delete a specified version by including the version ID in the `DELETE` request, unless permissions for the `DELETE` request have been explicitly denied. For example, to deny deletion of any objects that have a `null` version ID, you must explicitly deny the `s3:DeleteObject` and `s3:DeleteObjectVersion` permissions.

The following figure shows that deleting a specified object version permanently removes that version of the object. Only the bucket owner can delete a specified object version.

![\[Illustration that shows a permanent object deletion using a specified version ID.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/versioning_DELETE_versioningEnabled2.png)


# Troubleshooting versioning
<a name="troubleshooting-versioning"></a>

The following topics can help you troubleshoot some common Amazon S3 versioning issues.

**Topics**
+ [I want to recover objects that were accidentally deleted in a versioning-enabled bucket](#recover-objects)
+ [I want to permanently delete versioned objects](#delete-objects-permanent)
+ [I'm experiencing performance degradation after enabling bucket versioning](#performance-degradation)

## I want to recover objects that were accidentally deleted in a versioning-enabled bucket
<a name="recover-objects"></a>

In general, when object versions are deleted from S3 buckets, there is no way for Amazon S3 to recover them. However, if you have enabled S3 Versioning on your S3 bucket, a `DELETE` request that doesn't specify a version ID cannot permanently delete an object. Instead, a delete marker is added as a placeholder. This delete marker becomes the current version of the object. 

To verify whether your deleted objects are permanently deleted or temporarily deleted (with a delete marker in their place), do the following: 

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Buckets** list, choose the name of the bucket that contains the object.

1. In the **Objects** list, turn on the **Show versions** toggle to the right of the search bar, and then use the search bar to search for the deleted object. This toggle is available only if versioning was previously enabled on the bucket.

   You can also use [S3 Inventory to search for deleted objects](storage-inventory.md#storage-inventory-contents).

1. If you can't find the object after toggling **Show versions** or creating an inventory report, and you also cannot find a [delete marker](DeleteMarker.md) of the object, the deletion is permanent and the object cannot be recovered.



You can also verify a deleted object's status by using the `HeadObject` API operation from the AWS Command Line Interface (AWS CLI). To do so, use the following `head-object` command and replace the `user input placeholders` with your own information: 

`aws s3api head-object --bucket amzn-s3-demo-bucket --key index.html`

If you run the `head-object` command on a versioned object whose current version is a delete marker, you will receive a 404 Not Found error. For example: 

```
An error occurred (404) when calling the HeadObject operation: Not Found
```

If you run the `head-object` command on a versioned object and provide the object's version ID, Amazon S3 retrieves the object's metadata, confirming that the object still exists and is not permanently deleted.

`aws s3api head-object --bucket amzn-s3-demo-bucket --key index.html --version-id versionID`

```
{
    "AcceptRanges": "bytes",
    "ContentType": "text/html",
    "LastModified": "Thu, 16 Apr 2015 18:19:14 GMT",
    "ContentLength": 77,
    "VersionId": "Zg5HyL7m.eZU9iM7AVlJkrqAiE.0UG4q",
    "ETag": "\"30a6ec7e1a9ad79c203d05a589c8b400\"",
    "Metadata": {}
}
```

If the object is found and the newest version is a delete marker, the previous version of the object still exists. Because the delete marker is the current version of the object, you can recover the object by deleting the delete marker. 

After you permanently remove the delete marker, the second newest version of the object becomes the current version of the object, making your object available once again. For a visual depiction of how objects are recovered, see [Removing delete markers](ManagingDelMarkers.md#RemDelMarker).

To remove a specific version of an object, you must be the bucket owner. To permanently delete a delete marker, include its version ID in a `DeleteObject` request. Use the following command, and replace the `user input placeholders` with your own information: 

`aws s3api delete-object --bucket amzn-s3-demo-bucket --key index.html --version-id versionID`

For more information about the `delete-object` command, see [https://docs.aws.amazon.com//cli/latest/reference/s3api/delete-object.html](https://docs.aws.amazon.com//cli/latest/reference/s3api/delete-object.html) in the *AWS CLI Command Reference*. For more information about permanently deleting delete markers, see [Managing delete markers](ManagingDelMarkers.md).

## I want to permanently delete versioned objects
<a name="delete-objects-permanent"></a>

In a versioning-enabled bucket, a `DELETE` request without a version ID cannot permanently delete an object. Instead, such a request inserts a delete marker.

To permanently delete versioned objects, you can choose from the following methods:
+ Create an S3 Lifecycle rule to permanently delete noncurrent versions. To permanently delete noncurrent versions, select **Permanently delete noncurrent versions of objects**, and then enter a number under **Days after objects become noncurrent**. You can optionally specify the number of newer versions to retain by entering a value under **Number of newer versions to retain**. For more information about creating this rule, see [Setting an S3 Lifecycle configuration](how-to-set-lifecycle-configuration-intro.md).
+ Delete a specified version by including the version ID in the `DELETE` request. For more information, see [How to delete versioned objects permanently](DeletingObjectVersions.md#delete-request-use-cases).
+ Create a lifecycle rule to expire current versions. To expire current versions of objects, select **Expire current versions of objects**, and then add a number under **Days after object creation**. For more information about creating this lifecycle rule, see [Setting an S3 Lifecycle configuration](how-to-set-lifecycle-configuration-intro.md).
+ To permanently delete all versioned objects and delete markers, create two lifecycle rules: one to expire current versions and permanently delete noncurrent versions of objects, and the other to delete expired object delete markers. (A sketch of such a configuration follows this list.)
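
The following Python (boto3) sketch shows one way to express the noncurrent-version cleanup described above. It combines noncurrent-version expiration and expired object delete marker removal in a single rule; the rule ID, day count, and retained-version count are placeholder choices.

```
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="amzn-s3-demo-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "clean-up-versions",  # placeholder rule name
                "Filter": {"Prefix": ""},
                "Status": "Enabled",
                # Permanently delete noncurrent versions after 30 days,
                # keeping the two newest noncurrent versions.
                "NoncurrentVersionExpiration": {
                    "NoncurrentDays": 30,
                    "NewerNoncurrentVersions": 2,
                },
                # Remove delete markers that no longer shield any versions.
                "Expiration": {"ExpiredObjectDeleteMarker": True},
            }
        ]
    },
)
```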

In a versioning-enabled bucket, a `DELETE` request that doesn't specify a version ID can remove only objects that have a `NULL` version ID. If the object was uploaded when versioning was enabled, a `DELETE` request that doesn't specify a version ID creates a delete marker for that object.

**Note**  
For S3 Object Lock-enabled buckets, a `DELETE` object request with a protected object version ID causes a 403 Access Denied error. A `DELETE` object request without a version ID adds a delete marker as the newest version of the object with a 200 OK response. Objects protected by Object Lock cannot be permanently deleted until their retention periods and legal holds are removed. For more information, see [How S3 Object Lock works](object-lock.md#object-lock-overview).

## I'm experiencing performance degradation after enabling bucket versioning
<a name="performance-degradation"></a>

Performance degradation can occur on versioning-enabled buckets if there are too many delete markers or versioned objects, and if best practices aren't followed.

**Too many delete markers**  
After you enable versioning on a bucket, a `DELETE` request made to an object without a version ID creates a delete marker with a unique version ID. Lifecycle configurations with an **Expire current versions of objects** rule add a delete marker with a unique version ID to every object that the rule expires. Excessive delete markers can reduce performance in the bucket.

When versioning is suspended on a bucket, Amazon S3 marks the version ID as `NULL` on newly created objects. An expiration action in a versioning-suspended bucket causes Amazon S3 to create a delete marker with `NULL` as the version ID. In a versioning-suspended bucket, a `NULL` delete marker is created for any delete request. These `NULL` delete markers are also called expired object delete markers when all object versions are deleted and only a single delete marker remains. If too many `NULL` delete markers accumulate, performance degradation in the bucket occurs.

**Too many versioned objects**  
If a versioning-enabled bucket contains objects with millions of versions, an increase in 503 Service Unavailable errors can occur. If you notice a significant increase in the number of HTTP 503 Service Unavailable responses received for `PUT` or `DELETE` object requests to a versioning-enabled bucket, you might have one or more objects in the bucket with millions of versions. When you have objects with millions of versions, Amazon S3 automatically throttles requests to the bucket. Throttling requests protects your bucket from an excessive amount of request traffic, which could potentially impede other requests made to the same bucket. 

To determine which objects have millions of versions, use S3 Inventory. S3 Inventory generates a report that provides a flat file list of the objects in a bucket. For more information, see [Cataloging and analyzing your data with S3 Inventory](storage-inventory.md).

To verify whether there is a high number of versioned objects in the bucket, use S3 Storage Lens metrics to view the **Current version object count**, **Noncurrent version object count**, and **Delete marker object count**. For more information about Storage Lens metrics, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).

The Amazon S3 team encourages customers to investigate applications that repeatedly overwrite the same object, potentially creating millions of versions for that object, to determine whether the application is working as intended. For instance, an application overwriting the same object every minute for a week can create over ten thousand versions. We recommend storing less than one hundred thousand versions for each object. If you have a use case that requires millions of versions for one or more objects, contact the AWS Support team for assistance with determining a better solution.

**Best practices**  
To prevent versioning-related performance degradation issues, we recommend that you employ the following best practices:
+ Enable a lifecycle rule to expire the previous versions of objects. For example, you can create a lifecycle rule to expire noncurrent versions after 30 days of the object being noncurrent. You can also retain multiple noncurrent versions if you don't want to delete all of them. For more information, see [Setting an S3 Lifecycle configuration](how-to-set-lifecycle-configuration-intro.md).
+ Enable a lifecycle rule to delete expired object delete markers that don't have associated data objects in the bucket. For more information, see [Removing expired object delete markers](lifecycle-configuration-examples.md#lifecycle-config-conceptual-ex7).

For additional Amazon S3 performance-optimization best practices, see [Best practices design patterns](optimizing-performance.md).

# Locking objects with Object Lock
<a name="object-lock"></a>

S3 Object Lock can help prevent Amazon S3 objects from being deleted or overwritten for a fixed amount of time or indefinitely. Object Lock uses a *write-once-read-many* (WORM) model to store objects. You can use Object Lock to help meet regulatory requirements that require WORM storage, or to add another layer of protection against object changes or deletion.

**Note**  
S3 Object Lock has been assessed by Cohasset Associates for use in environments that are subject to SEC 17a-4, CFTC, and FINRA regulations. For more information about how Object Lock relates to these regulations, see the [Cohasset Associates Compliance Assessment](https://d1.awsstatic.com/r2018/b/S3-Object-Lock/Amazon-S3-Compliance-Assessment.pdf).

Object Lock provides two ways to manage object retention: *retention periods* and *legal holds*. An object version can have a retention period, a legal hold, or both.
+ **Retention period** – A retention period specifies a fixed period of time during which an object version remains locked. You can set a unique retention period for individual objects. Additionally, you can set a default retention period on an S3 bucket. You can also restrict the minimum and maximum allowable retention periods by using the `s3:object-lock-remaining-retention-days` condition key in the bucket policy. For more information, see [Setting limits on retention periods with a bucket policy](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-managing.html#object-lock-managing-retention-limits).
+ **Legal hold** – A legal hold provides the same protection as a retention period, but it has no expiration date. Instead, a legal hold remains in place until you explicitly remove it. Legal holds are independent from retention periods and are placed on individual object versions.

Object Lock works only in buckets that have S3 Versioning enabled. When you lock an object version, Amazon S3 stores the lock information in the metadata for that object version. Placing a retention period or a legal hold on an object protects only the version that's specified in the request. Retention periods and legal holds don't prevent new versions of the object from being created or delete markers from being added on top of the object. For information about S3 Versioning, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).

If you put an object into a bucket that already contains an existing protected object with the same object key name, Amazon S3 creates a new version of that object. The existing protected version of the object remains locked according to its retention configuration.

## How S3 Object Lock works
<a name="object-lock-overview"></a>

**Topics**
+ [Retention periods](#object-lock-retention-periods)
+ [Retention modes](#object-lock-retention-modes)
+ [Legal holds](#object-lock-legal-holds)
+ [How deletes work with S3 Object Lock](#object-lock-how-deletes-work)
+ [Best practices for using S3 Object Lock](#object-lock-best-practices)
+ [Required permissions](#object-lock-permissions)

### Retention periods
<a name="object-lock-retention-periods"></a>

A *retention period* protects an object version for a fixed amount of time. When you place a retention period on an object version, Amazon S3 stores a timestamp in the object version's metadata to indicate when the retention period expires. After the retention period expires, the object version can be overwritten or deleted.

You can place a retention period explicitly on an individual object version, or you can set a default retention period in a bucket's properties so that it applies automatically to all new objects placed in the bucket. When you apply a retention period to an object version explicitly, you specify a *Retain Until Date* for the object version. Amazon S3 stores this date in the object version's metadata.

You can also set a retention period in a bucket's properties. When you set a retention period on a bucket, you specify a duration, in either days or years, for how long to protect every object version placed in the bucket. When you place an object in the bucket, Amazon S3 calculates a *Retain Until Date* for the object version by adding the specified duration to the object version's creation timestamp. The object version is then protected exactly as though you explicitly placed an individual lock with that retention period on the object version.

**Note**  
When you `PUT` an object version that has an explicit individual retention mode and period in a bucket, the object version's individual Object Lock settings override any bucket property retention settings.

Like all other Object Lock settings, retention periods apply to individual object versions. Different versions of a single object can have different retention modes and periods.

For example, suppose that you have an object that is 15 days into a 30-day retention period, and you `PUT` an object into Amazon S3 with the same name and a 60-day retention period. In this case, your `PUT` request succeeds, and Amazon S3 creates a new version of the object with a 60-day retention period. The older version maintains its original retention period and becomes deletable in 15 days.

After you've applied a retention setting to an object version, you can extend the retention period. To do this, submit a new Object Lock request for the object version with a *Retain Until Date* that is later than the one currently configured for the object version. Amazon S3 replaces the existing retention period with the new, longer period. Any user with permissions to place an object retention period can extend a retention period for an object version. To set a retention period, you must have the `s3:PutObjectRetention` permission.
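
The following Python (boto3) sketch shows roughly how such an extension request might look; the bucket name, key, version ID, mode, and date are placeholders.

```
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

# Submit a later Retain Until Date to extend the existing retention period.
s3.put_object_retention(
    Bucket="amzn-s3-demo-bucket",
    Key="my-image.jpg",
    VersionId="3HL4kqtJvjVBH40Nrjfkd",
    Retention={
        "Mode": "GOVERNANCE",
        "RetainUntilDate": datetime(2026, 1, 1, tzinfo=timezone.utc),
    },
)
```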

When you set a retention period on an object or S3 bucket, you must select one of two retention modes: *compliance* or *governance*.

### Retention modes
<a name="object-lock-retention-modes"></a>

S3 Object Lock provides two retention modes that apply different levels of protection to your objects:
+ Compliance mode
+ Governance mode

In *compliance* mode, a protected object version can't be overwritten or deleted by any user, including the root user in your AWS account. When an object is locked in compliance mode, its retention mode can't be changed, and its retention period can't be shortened. Compliance mode helps ensure that an object version can't be overwritten or deleted for the duration of the retention period.

**Note**  
The only way to delete an object that's locked in compliance mode before its retention date expires is to delete the associated AWS account.

In *governance* mode, users can't overwrite or delete an object version or alter its lock settings unless they have special permissions. With governance mode, you protect objects against being deleted by most users, but you can still grant some users permission to alter the retention settings or delete the objects if necessary. You can also use governance mode to test retention-period settings before creating a compliance-mode retention period. 

To override or remove governance-mode retention settings, you must have the `s3:BypassGovernanceRetention` permission and must explicitly include `x-amz-bypass-governance-retention:true` as a request header with any request that requires overriding governance mode. 

**Note**  
By default, the Amazon S3 console includes the `x-amz-bypass-governance-retention:true` header. If you try to delete objects protected by *governance* mode and have the `s3:BypassGovernanceRetention` permission, the operation will succeed. 

### Legal holds
<a name="object-lock-legal-holds"></a>

With Object Lock, you can also place a *legal hold* on an object version. Like a retention period, a legal hold prevents an object version from being overwritten or deleted. However, a legal hold doesn't have an associated fixed amount of time and remains in effect until removed. Legal holds can be freely placed and removed by any user who has the `s3:PutObjectLegalHold` permission. 

Legal holds are independent from retention periods. Placing a legal hold on an object version doesn't affect the retention mode or retention period for that object version. 

For example, suppose that you place a legal hold on an object version and that object version is also protected by a retention period. If the retention period expires, the object doesn't lose its WORM protection. Rather, the legal hold continues to protect the object until an authorized user explicitly removes the legal hold. Similarly, if you remove a legal hold while an object version has a retention period in effect, the object version remains protected until the retention period expires.

### How deletes work with S3 Object Lock
<a name="object-lock-how-deletes-work"></a>

If your bucket has S3 Object Lock enabled and you try to delete an object that's protected by a retention period or legal hold, Amazon S3 returns one of the following responses, depending on how you tried to delete the object:
+ **Permanent `DELETE` request** – If you issued a permanent `DELETE` request (a request that specifies a version ID), Amazon S3 returns an Access Denied (`403 Forbidden`) error when you try to delete the object. For more information about troubleshooting Access Denied errors with Object Lock, see [S3 Object Lock settings](troubleshoot-403-errors.md#troubleshoot-403-object-lock).
+ **Simple `DELETE` request** – If you issued a simple `DELETE` request (a request that doesn't specify a version ID), Amazon S3 returns a `200 OK` response and inserts a [delete marker](DeleteMarker.md) in the bucket, and that marker becomes the current version of the object with a new ID. For more information about managing delete markers with Object Lock, see [Managing delete markers with Object Lock](object-lock-managing.md#object-lock-managing-delete-markers).

### Best practices for using S3 Object Lock
<a name="object-lock-best-practices"></a>

Consider using *Governance mode* if you want to protect objects from being deleted by most users during a pre-defined retention period, but at the same time want some users with special permissions to have the flexibility to alter the retention settings or delete the objects. 

Consider using *Compliance mode* if you never want any user, including the root user in your AWS account, to be able to delete the objects during a pre-defined retention period. You can use this mode in case you have a requirement to store compliant data. 

You can use a *legal hold* when you're not sure how long you want your objects to stay immutable. For example, you might have an upcoming external audit of your data and want to keep objects immutable until the audit is complete. Alternatively, you might have an ongoing project that uses a dataset that you want to keep immutable until the project is complete. 

### Required permissions
<a name="object-lock-permissions"></a>

Object Lock operations require specific permissions. Depending on the exact operation that you're attempting, you might need any of the following permissions:
+ `s3:BypassGovernanceRetention`
+ `s3:GetBucketObjectLockConfiguration`
+ `s3:GetObjectLegalHold`
+ `s3:GetObjectRetention`
+ `s3:PutBucketObjectLockConfiguration`
+ `s3:PutObjectLegalHold`
+ `s3:PutObjectRetention`

For a complete list of Amazon S3 permissions with descriptions, see [ Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*.

For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

For information about using conditions with permissions, see [Bucket policy examples using condition keys](amazon-s3-policy-keys.md).

# Object Lock considerations
<a name="object-lock-managing"></a>

Amazon S3 Object Lock can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.

You can use the Amazon S3 console, AWS Command Line Interface (AWS CLI), AWS SDKs, or Amazon S3 REST API to view or set Object Lock information. For general information about S3 Object Lock capabilities, see [Locking objects with Object Lock](object-lock.md).

**Important**  
After you enable Object Lock on a bucket, you can't disable Object Lock or suspend versioning for that bucket. 
S3 buckets with Object Lock can't be used as destination buckets for server access logs. For more information, see [Logging requests with server access logging](ServerLogs.md).

**Topics**
+ [Permissions for viewing lock information](#object-lock-managing-view)
+ [Bypassing governance mode](#object-lock-managing-bypass)
+ [Using Object Lock with S3 Replication](#object-lock-managing-replication)
+ [Using Object Lock with encryption](#object-lock-managing-encryption)
+ [Using Object Lock with Amazon S3 Inventory](#object-lock-inv-report)
+ [Managing S3 Lifecycle policies with Object Lock](#object-lock-managing-lifecycle)
+ [Managing delete markers with Object Lock](#object-lock-managing-delete-markers)
+ [Using S3 Storage Lens with Object Lock](#object-lock-storage-lens)
+ [Uploading objects to an Object Lock enabled bucket](#object-lock-put-object)
+ [Configuring events and notifications](#object-lock-managing-events)
+ [Setting limits on retention periods with a bucket policy](#object-lock-managing-retention-limits)

## Permissions for viewing lock information
<a name="object-lock-managing-view"></a>

You can programmatically view the Object Lock status of an Amazon S3 object version by using the [https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html) or [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html) operations. Both operations return the retention mode, retain until date, and legal hold status for the specified object version. Additionally, you can view the Object Lock status for multiple objects in your S3 bucket using S3 Inventory. 

To view an object version's retention mode and retention period, you must have the `s3:GetObjectRetention` permission. To view an object version's legal hold status, you must have the `s3:GetObjectLegalHold` permission. To view a bucket's default retention configuration, you must have the `s3:GetBucketObjectLockConfiguration` permission. If you make a request for an Object Lock configuration on a bucket that doesn't have S3 Object Lock enabled, Amazon S3 returns an error. 
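
For example, the following Python (boto3) sketch reads an object version's lock status, assuming that you have the permissions described earlier; the bucket name and key are placeholders.

```
import boto3

s3 = boto3.client("s3")

# HeadObject returns lock details alongside the other object metadata.
head = s3.head_object(Bucket="amzn-s3-demo-bucket", Key="my-image.jpg")
print(head.get("ObjectLockMode"), head.get("ObjectLockRetainUntilDate"))

# Dedicated operations return the retention and legal hold details directly.
# These calls return an error if the object has no retention or legal hold.
retention = s3.get_object_retention(Bucket="amzn-s3-demo-bucket", Key="my-image.jpg")
legal_hold = s3.get_object_legal_hold(Bucket="amzn-s3-demo-bucket", Key="my-image.jpg")
print(retention["Retention"], legal_hold["LegalHold"])
```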

## Bypassing governance mode
<a name="object-lock-managing-bypass"></a>

If you have the `s3:BypassGovernanceRetention` permission, you can perform operations on object versions that are locked in governance mode as if they were unprotected. These operations include deleting an object version, shortening the retention period, or removing the Object Lock retention period by placing a new `PutObjectRetention` request with empty parameters. 

To bypass governance mode, you must explicitly indicate in your request that you want to bypass this mode. To do this, include the `x-amz-bypass-governance-retention:true` header with your `PutObjectRetention` API operation request, or use the equivalent parameter with requests made through the AWS CLI or AWS SDKs. The S3 console automatically applies this header for requests made through the S3 console if you have the `s3:BypassGovernanceRetention` permission.

**Note**  
Bypassing governance mode doesn't affect an object version's legal hold status. If an object version has a legal hold enabled, the legal hold remains and prevents requests to overwrite or delete the object version.
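
In the AWS CLI and AWS SDKs, the bypass is expressed as a request parameter rather than a raw header. The following Python (boto3) sketch permanently deletes a governance-locked object version; the bucket name, key, and version ID are placeholders.

```
import boto3

s3 = boto3.client("s3")

# Requires the s3:BypassGovernanceRetention permission; the SDK sends the
# x-amz-bypass-governance-retention:true header for you.
s3.delete_object(
    Bucket="amzn-s3-demo-bucket",
    Key="my-image.jpg",
    VersionId="3HL4kqtJvjVBH40Nrjfkd",
    BypassGovernanceRetention=True,
)
```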

## Using Object Lock with S3 Replication
<a name="object-lock-managing-replication"></a>

You can use Object Lock with S3 Replication to enable automatic, asynchronous copying of locked objects and their retention metadata across S3 buckets. When you use replication, objects in a *source bucket* are replicated to one or more *destination buckets*. Replicated objects keep the Object Lock configuration of the source bucket, which means that if the source bucket has Object Lock enabled, the destination buckets must also have Object Lock enabled. If an object is uploaded directly to a destination bucket (outside of S3 Replication), it takes the Object Lock configuration that's set on the destination bucket. 

To set up replication on a bucket with Object Lock enabled, you can use the S3 console, AWS CLI, Amazon S3 REST API, or AWS SDKs.

**Note**  
To use Object Lock with replication, you must grant two additional permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two additional permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission statement, that statement satisfies the requirement. For more information, see [Setting up permissions for live replication](setting-repl-config-perm-overview.md).  
For general information about S3 Replication, see [Replicating objects within and across Regions](replication.md).  
For examples of setting up S3 Replication, see [Examples for configuring live replication](replication-example-walkthroughs.md).

## Using Object Lock with encryption
<a name="object-lock-managing-encryption"></a>

Amazon S3 encrypts all new objects by default. You can use Object Lock with your encrypted objects. For more information, see [Protecting data with encryption](UsingEncryption.md).

While Object Lock can help prevent Amazon S3 objects from being deleted or overwritten, it doesn't protect against losing access to your encryption keys or against those keys being deleted. For example, if you encrypt your objects with AWS KMS server-side encryption and your AWS KMS key is deleted, your objects might become unreadable.

## Using Object Lock with Amazon S3 Inventory
<a name="object-lock-inv-report"></a>

You can configure Amazon S3 Inventory to create lists of the objects in an S3 bucket on a defined schedule. You can configure Amazon S3 Inventory to include the following Object Lock metadata for your objects:
+ The retain until date
+ The retention mode
+ The legal hold status

For more information, see [Cataloging and analyzing your data with S3 Inventory](storage-inventory.md).

## Managing S3 Lifecycle policies with Object Lock
<a name="object-lock-managing-lifecycle"></a>

Object lifecycle management configurations continue to function normally on protected objects, including placing delete markers. However, a locked object version can't be deleted by an S3 Lifecycle expiration policy. Object Lock is maintained regardless of which storage class the object resides in and throughout S3 Lifecycle transitions between storage classes.

For more information about managing object lifecycles, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md).

## Managing delete markers with Object Lock
<a name="object-lock-managing-delete-markers"></a>

Although you can't delete a protected object version, you can still create a delete marker for that object. Placing a delete marker on an object doesn't delete the object or its object versions. However, it makes Amazon S3 behave in most ways as though the object has been deleted. For more information, see [Working with delete markers](DeleteMarker.md).

**Note**  
Delete markers are not WORM-protected, regardless of any retention period or legal hold in place on the underlying object.

## Using S3 Storage Lens with Object Lock
<a name="object-lock-storage-lens"></a>

To see metrics for Object Lock-enabled storage bytes and object count, you can use Amazon S3 Storage Lens. S3 Storage Lens is a cloud-storage analytics feature that you can use to gain organization-wide visibility into object-storage usage and activity.

For more information, see [Using S3 Storage Lens to protect your data](storage-lens-data-protection.md).

For a complete list of metrics, see [Amazon S3 Storage Lens metrics glossary](storage_lens_metrics_glossary.md).

## Uploading objects to an Object Lock enabled bucket
<a name="object-lock-put-object"></a>

The `Content-MD5` or `x-amz-sdk-checksum-algorithm` header is required for any request to upload an object with a retention period configured by using Object Lock. These headers verify the integrity of your object during upload.

When you upload an object with the Amazon S3 console, Amazon S3 automatically adds the `Content-MD5` header. You can optionally specify an additional checksum function and checksum value through the console as the `x-amz-sdk-checksum-algorithm` header. If you use the [PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html) API, you must specify the `Content-MD5` header, the `x-amz-sdk-checksum-algorithm` header, or both to configure the Object Lock retention period.

For more information, see [Checking object integrity in Amazon S3](checking-object-integrity.md).
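
With the AWS SDKs, specifying a checksum algorithm on the upload satisfies this requirement because the SDK computes and sends the corresponding checksum header for you. The following Python (boto3) sketch is one example; the bucket name, key, body, mode, and date are placeholders.

```
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

# ChecksumAlgorithm makes the SDK send x-amz-sdk-checksum-algorithm, which
# is required when setting a retention period at upload time.
s3.put_object(
    Bucket="amzn-s3-demo-bucket",
    Key="report.pdf",
    Body=b"...",
    ChecksumAlgorithm="SHA256",
    ObjectLockMode="GOVERNANCE",
    ObjectLockRetainUntilDate=datetime(2026, 1, 1, tzinfo=timezone.utc),
)
```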

## Configuring events and notifications
<a name="object-lock-managing-events"></a>

You can use Amazon S3 Event Notifications to track access and changes to your Object Lock configurations and data by using AWS CloudTrail. For information about CloudTrail, see [What is AWS CloudTrail?](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) in the *AWS CloudTrail User Guide*.

You can also use Amazon CloudWatch to generate alerts based on this data. For information about CloudWatch, see [What is Amazon CloudWatch?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) in the *Amazon CloudWatch User Guide*.

## Setting limits on retention periods with a bucket policy
<a name="object-lock-managing-retention-limits"></a>

You can set minimum and maximum allowable retention periods for a bucket by using a bucket policy. The maximum retention period is 100 years.

The following example shows a bucket policy that uses the `s3:object-lock-remaining-retention-days` condition key to set a maximum retention period of 10 days.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Id": "SetRetentionLimits",
    "Statement": [
        {
            "Sid": "SetRetentionPeriod",
            "Effect": "Deny",
            "Principal": "*",
            "Action": [
                "s3:PutObjectRetention"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket1/*",
            "Condition": {
                "NumericGreaterThan": {
                    "s3:object-lock-remaining-retention-days": "10"
                }
            }
        }
    ]
}
```

------

**Note**  
If your bucket is the destination bucket for a replication configuration, you can set up minimum and maximum allowable retention periods for object replicas that are created by using replication. To do so, you must allow the `s3:ReplicateObject` action in your bucket policy. For more information about replication permissions, see [Setting up permissions for live replication](setting-repl-config-perm-overview.md). 

For more information about bucket policies, see the following topics:
+ [ Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*

  For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).
+ [Object operations](security_iam_service-with-iam.md#using-with-s3-actions-related-to-objects)
+ [Bucket policy examples using condition keys](amazon-s3-policy-keys.md)

# Configuring S3 Object Lock
<a name="object-lock-configure"></a>

With Amazon S3 Object Lock, you can store objects in Amazon S3 general purpose buckets by using a *write-once-read-many* (WORM) model. You can use S3 Object Lock to prevent an object from being deleted or overwritten for a fixed amount of time or indefinitely. For general information about Object Lock capabilities, see [Locking objects with Object Lock](object-lock.md).

Before you lock any objects, you must enable S3 Versioning and Object Lock on a general purpose bucket. Afterward, you can set a retention period, a legal hold, or both. 

To work with Object Lock, you must have certain permissions. For a list of the permissions related to various Object Lock operations, see [Required permissions](object-lock.md#object-lock-permissions).

**Important**  
After you enable Object Lock on a bucket, you can't disable Object Lock or suspend versioning for that bucket. 
S3 buckets with Object Lock can't be used as destination buckets for server access logs. For more information, see [Logging requests with server access logging](ServerLogs.md).

**Topics**
+ [Enable Object Lock when creating a new S3 general purpose bucket](#object-lock-configure-new-bucket)
+ [Enable Object Lock on an existing S3 bucket](#object-lock-configure-existing-bucket)
+ [Set or modify a legal hold on an S3 object](#object-lock-configure-set-legal-hold)
+ [Set or modify a retention period on an S3 object](#object-lock-configure-set-retention-period-object)
+ [Set or modify a default retention period on an S3 bucket](#object-lock-configure-set-retention-period-bucket)

## Enable Object Lock when creating a new S3 general purpose bucket
<a name="object-lock-configure-new-bucket"></a>

You can enable Object Lock when creating a new S3 general purpose bucket by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), AWS SDKs, or Amazon S3 REST API.

### Using the S3 console
<a name="object-lock-new-bucket-console"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. Choose **Create bucket**.

   The **Create bucket** page opens.

1. For **Bucket name**, enter a name for your bucket.
**Note**  
After you create a bucket, you can't change its name. For more information about naming buckets, see [General purpose bucket naming rules](bucketnamingrules.md).

1. For **Region**, choose the AWS Region where you want the bucket to reside. 

1. Under **Object Ownership**, choose to disable or enable access control lists (ACLs) and control ownership of objects uploaded in your bucket.

1. Under **Block Public Access settings for this bucket**, choose the Block Public Access settings that you want to apply to the bucket. 

1. Under **Bucket Versioning**, choose **Enabled**.

   Object Lock works only with versioned buckets.

1. (Optional) Under **Tags**, you can choose to add tags to your bucket. Tags are key-value pairs that are used to categorize storage and allocate costs.

1. Under **Advanced settings**, find **Object Lock** and choose **Enable**.

   You must acknowledge that enabling Object Lock will permanently allow objects in this bucket to be locked.

1. Choose **Create bucket**.

### Using the AWS CLI
<a name="object-lock-new-bucket-cli"></a>

The following `create-bucket` example creates a new S3 bucket named `amzn-s3-demo-bucket1` with Object Lock enabled:

```
aws s3api create-bucket --bucket amzn-s3-demo-bucket1 --object-lock-enabled-for-bucket
```

For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/create-bucket.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/create-bucket.html) in the *AWS CLI Command Reference*.

**Note**  
You can run AWS CLI commands from the console by using AWS CloudShell. AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console. For more information, see [What is CloudShell?](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html) in the *AWS CloudShell User Guide*.

### Using the REST API
<a name="object-lock-new-bucket-rest"></a>

You can use the REST API to create a new S3 bucket with Object Lock enabled. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html) in the *Amazon Simple Storage Service API Reference*.

### Using the AWS SDKs
<a name="object-lock-new-bucket-sdk"></a>

For examples of how to enable Object Lock when creating a new S3 bucket with the AWS SDKs, see [Code examples](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_LCreateBucket_section.html) in the *Amazon S3 API Reference*.

For examples of how to get the current Object Lock configuration with the AWS SDKs, see [Code examples](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_GetObjectLockConfiguration_section.html) in the *Amazon S3 API Reference*.

For an interactive scenario demonstrating different Object Lock features using the AWS SDKs, see [Code examples](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_Scenario_ObjectLock_section.html) in the *Amazon S3 API Reference*.

For general information about using different AWS SDKs, see [Developing with Amazon S3 using the AWS SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/sdk-general-information-section.html) in the *Amazon S3 API Reference*.

## Enable Object Lock on an existing S3 bucket
<a name="object-lock-configure-existing-bucket"></a>

You can enable Object Lock for an existing S3 bucket by using the Amazon S3 console, the AWS CLI, AWS SDKs, or Amazon S3 REST API.

### Using the S3 console
<a name="object-lock-existing-bucket-console"></a>

**Note**  
Object Lock works only with versioned buckets.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Buckets** list, choose the name of the bucket that you want to enable Object Lock on.

1. Choose the **Properties** tab.

1. Under **Properties**, scroll down to the **Object Lock** section, and choose **Edit**.

1. Under **Object Lock**, choose **Enable**.

   You must acknowledge that enabling Object Lock will permanently allow objects in this bucket to be locked.

1. Choose **Save changes**.



### Using the AWS CLI
<a name="object-lock-existing-bucket-cli"></a>

The following `put-object-lock-configuration` example command sets a 50-day Object Lock retention period on a bucket named `amzn-s3-demo-bucket1`:

```
aws s3api put-object-lock-configuration --bucket amzn-s3-demo-bucket1 --object-lock-configuration='{ "ObjectLockEnabled": "Enabled", "Rule": { "DefaultRetention": { "Mode": "COMPLIANCE", "Days": 50 }}}'
```

For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-object-lock-configuration.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-object-lock-configuration.html) in the *AWS CLI Command Reference*.

**Note**  
You can run AWS CLI commands from the console by using AWS CloudShell. AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console. For more information, see [What is CloudShell?](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html) in the *AWS CloudShell User Guide*.

### Using the REST API
<a name="object-lock-existing-bucket-rest"></a>

You can use the Amazon S3 REST API to enable Object Lock on an existing S3 bucket. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLockConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLockConfiguration.html) in the *Amazon Simple Storage Service API Reference*.

### Using the AWS SDKs
<a name="object-lock-existing-bucket-sdk"></a>

For examples of how to enable Object Lock for an existing S3 bucket with the AWS SDKs, see [Code examples](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_PutObjectLockConfiguration_section.html) in the *Amazon S3 API Reference*.

For examples of how to get the current Object Lock configuration with the AWS SDKs, see [Code examples](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_GetObjectLockConfiguration_section.html) in the *Amazon S3 API Reference*.

For an interactive scenario demonstrating different Object Lock features using the AWS SDKs, see [Code examples](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_Scenario_ObjectLock_section.html) in the *Amazon S3 API Reference*.

For general information about using different AWS SDKs, see [Developing with Amazon S3 using the AWS SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/sdk-general-information-section.html) in the *Amazon S3 API Reference*.

## Set or modify a legal hold on an S3 object
<a name="object-lock-configure-set-legal-hold"></a>

You can set or remove a legal hold on an S3 object by using the Amazon S3 console, AWS CLI, AWS SDKs, or Amazon S3 REST API.

**Important**  
If you want to set a legal hold on an object, the object's bucket must already have Object Lock enabled.
When you `PUT` an object version with an explicit retention mode and period, those object-level Object Lock settings override any default retention settings configured on the bucket.

For more information, see [Legal holds](object-lock.md#object-lock-legal-holds).

### Using the S3 console
<a name="object-lock-set-legal-hold-console"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Buckets** list, choose the name of the bucket that contains the object that you want to set or modify a legal hold on.

1. In the **Objects** list, select the object that you want to set or modify a legal hold on.

1. On the **Object properties** page, find the **Object Lock legal hold** section, and choose **Edit**.

1. Choose **Enable** to set a legal hold or **Disable** to remove a legal hold.

1. Choose **Save changes**.

### Using the AWS CLI
<a name="object-lock-set-legal-hold-cli"></a>

The following `put-object-legal-hold` example sets a legal hold on the object *`my-image.fs`* in the bucket named `amzn-s3-demo-bucket1`:

```
aws s3api put-object-legal-hold --bucket amzn-s3-demo-bucket1 --key my-image.fs --legal-hold="Status=ON"
```

The following `put-object-legal-hold` example removes the legal hold from the object *`my-image.fs`* in the bucket named `amzn-s3-demo-bucket1`:

```
aws s3api put-object-legal-hold --bucket amzn-s3-demo-bucket1 --key my-image.fs --legal-hold="Status=OFF"
```
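
To check the current legal hold status of an object, you can call `get-object-legal-hold`. The following is a minimal sketch that reuses the example bucket and key from the preceding commands:

```
aws s3api get-object-legal-hold --bucket amzn-s3-demo-bucket1 --key my-image.fs
```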

For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-object-legal-hold.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-object-legal-hold.html) in the *AWS CLI Command Reference*.

**Note**  
You can run AWS CLI commands from the console by using AWS CloudShell. AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console. For more information, see [What is CloudShell?](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html) in the *AWS CloudShell User Guide*.

### Using the REST API
<a name="object-lock-set-legal-hold-rest"></a>

You can use the REST API to set or modify a legal hold on an object. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLegalHold.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLegalHold.html) in the *Amazon Simple Storage Service API Reference*.

### Using the AWS SDKs
<a name="object-lock-set-legal-hold-sdk"></a>

For examples of how to set a legal hold on an object with the AWS SDKs, see [Code examples](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_PutObjectLegalHold_section.html) in the *Amazon S3 API Reference*.

For examples of how to get the current legal hold status with the AWS SDKs, see [Code examples](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_GetObjectLegalHoldConfiguration_section.html) in the *Amazon S3 API Reference*.

For an interactive scenario demonstrating different Object Lock features using the AWS SDKs, see [Code examples](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_Scenario_ObjectLock_section.html) in the *Amazon S3 API Reference*.

For general information about using different AWS SDKs, see [Developing with Amazon S3 using the AWS SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/sdk-general-information-section.html) in the *Amazon S3 API Reference*.

## Set or modify a retention period on an S3 object
<a name="object-lock-configure-set-retention-period-object"></a>

You can set or modify a retention period on an S3 object by using the Amazon S3 console, AWS CLI, AWS SDKs, or Amazon S3 REST API.

**Important**  
If you want to set a retention period on an object, the object's bucket must already have Object Lock enabled.
When you `PUT` an object version with an explicit retention mode and period, those object-level Object Lock settings override any default retention settings configured on the bucket.
In compliance mode, the only way to delete an object before its retention period expires is to delete the associated AWS account.

For more information, see [Retention periods](object-lock.md#object-lock-retention-periods).

### Using the S3 console
<a name="object-lock-set-retention-period-console"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Buckets** list, choose the name of the bucket that contains the object that you want to set or modify a retention period on.

1. In the **Objects** list, select the object that you want to set or modify a retention period on.

1. On the **Object properties** page, find the **Object Lock retention** section, and choose **Edit**.

1. Under **Retention**, choose **Enable** to set a retention period or **Disable** to remove a retention period.

1. If you chose **Enable**, under **Retention mode**, choose either **Governance mode** or **Compliance mode**. For more information, see [Retention modes](object-lock.md#object-lock-retention-modes).

1. Under **Retain until date**, choose the date on which you want the retention period to end. Until that date, your object is WORM-protected and can't be overwritten or deleted. For more information, see [Retention periods](object-lock.md#object-lock-retention-periods).

1. Choose **Save changes**.

### Using the AWS CLI
<a name="object-lock-set-retention-period-cli"></a>

The following `put-object-retention` example sets a governance-mode retention period on the object *`my-image.fs`* in the bucket named `amzn-s3-demo-bucket1` that lasts until January 1, 2025:

```
aws s3api put-object-retention --bucket amzn-s3-demo-bucket1 --key my-image.fs --retention='{ "Mode": "GOVERNANCE", "RetainUntilDate": "2025-01-01T00:00:00" }'
```
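
To read back an object's retention settings, you can call `get-object-retention`. If you later need to shorten or remove a governance-mode retention period, you can pass the `--bypass-governance-retention` flag, provided that you have the `s3:BypassGovernanceRetention` permission. The following commands are a sketch that reuses the example bucket and key; the new retain-until date is an illustrative value:

```
# Retrieve the object's current retention mode and retain-until date
aws s3api get-object-retention --bucket amzn-s3-demo-bucket1 --key my-image.fs

# Shorten a governance-mode retention period (requires s3:BypassGovernanceRetention)
aws s3api put-object-retention --bucket amzn-s3-demo-bucket1 --key my-image.fs --retention='{ "Mode": "GOVERNANCE", "RetainUntilDate": "2024-06-01T00:00:00" }' --bypass-governance-retention
```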

For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-object-retention.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-object-retention.html) in the *AWS CLI Command Reference*.

**Note**  
You can run AWS CLI commands from the console by using AWS CloudShell. AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console. For more information, see [What is CloudShell?](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html) in the *AWS CloudShell User Guide*.

### Using the REST API
<a name="object-lock-set-retention-period-rest"></a>

You can use the REST API to set a retention period on an object. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectRetention.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectRetention.html) in the *Amazon Simple Storage Service API Reference*.

### Using the AWS SDKs
<a name="object-lock-set-retention-period-sdk"></a>

For examples of how to set a retention period on an object with the AWS SDKs, see [Code examples](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_PutObjectRetention_section.html) in the *Amazon S3 API Reference*.

For examples of how to get the retention period on an object with the AWS SDKs, see [Code examples](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_GetObjectLockConfiguration_section.html) in the *Amazon S3 API Reference*.

For an interactive scenario demonstrating different Object Lock features using the AWS SDKs, see [Code examples](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_Scenario_ObjectLock_section.html) in the *Amazon S3 API Reference*.

For general information about using different AWS SDKs, see [Developing with Amazon S3 using the AWS SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/sdk-general-information-section.html) in the *Amazon S3 API Reference*.

## Set or modify a default retention period on an S3 bucket
<a name="object-lock-configure-set-retention-period-bucket"></a>

You can set or modify a default retention period on an S3 bucket by using the Amazon S3 console, AWS CLI, AWS SDKs, or Amazon S3 REST API. You specify a duration, in either days or years, for how long to protect every object version placed in the bucket.

**Important**  
If you want to set a default retention period on a bucket, the bucket must already have Object Lock enabled.
When you `PUT` an object version with an explicit retention mode and period, those object-level Object Lock settings override any default retention settings configured on the bucket.
In compliance mode, the only way to delete an object before its retention period expires is to delete the associated AWS account.

For more information, see [Retention periods](object-lock.md#object-lock-retention-periods).

### Using the S3 console
<a name="object-lock-set-retention-period-bucket-console"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Buckets** list, choose the name of the bucket that you want to set or modify a default retention period on.

1. Choose the **Properties** tab.

1. Under **Properties**, scroll down to the **Object Lock** section, and choose **Edit**.

1. Under **Default retention**, choose **Enable** to set a default retention or **Disable** to remove a default retention.

1. If you chose **Enable**, under **Retention mode**, choose either **Governance mode** or **Compliance mode**. For more information, see [Retention modes](object-lock.md#object-lock-retention-modes).

1. Under **Default retention period**, choose the number of days or years that you want the default retention period to last. New objects placed in this bucket are locked for that duration. For more information, see [Retention periods](object-lock.md#object-lock-retention-periods).

1. Choose **Save changes**.

### Using the AWS CLI
<a name="object-lock-configure-set-retention-period-bucket-cli"></a>

The following `put-object-lock-configuration` example command sets a 50-day default retention period, in compliance mode, on the bucket named `amzn-s3-demo-bucket1`:

```
aws s3api put-object-lock-configuration --bucket amzn-s3-demo-bucket1 --object-lock-configuration='{ "ObjectLockEnabled": "Enabled", "Rule": { "DefaultRetention": { "Mode": "COMPLIANCE", "Days": 50 }}}'
```

The following `put-object-lock-configuration` example removes the default retention configuration from the bucket while keeping Object Lock enabled:

```
aws s3api put-object-lock-configuration --bucket amzn-s3-demo-bucket1 --object-lock-configuration='{ "ObjectLockEnabled": "Enabled"}'
```
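
You can also specify the default retention period in years instead of days (the `DefaultRetention` rule accepts either `Days` or `Years`, but not both). The following sketch sets a one-year, governance-mode default retention period on the same example bucket:

```
aws s3api put-object-lock-configuration --bucket amzn-s3-demo-bucket1 --object-lock-configuration='{ "ObjectLockEnabled": "Enabled", "Rule": { "DefaultRetention": { "Mode": "GOVERNANCE", "Years": 1 }}}'
```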

For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-object-lock-configuration.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-object-lock-configuration.html) in the *AWS CLI Command Reference*.

**Note**  
You can run AWS CLI commands from the console by using AWS CloudShell. AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console. For more information, see [What is CloudShell?](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html) in the *AWS CloudShell User Guide*.

### Using the REST API
<a name="object-lock-configure-set-retention-period-bucket-rest"></a>

You can use the REST API to set a default retention period on an existing S3 bucket. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLockConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLockConfiguration.html) in the *Amazon Simple Storage Service API Reference*.

### Using the AWS SDKs
<a name="object-lock-configure-set-retention-period-bucket-sdk"></a>

For examples of how to set a default retention period on an existing S3 bucket with the AWS SDKs, see [Code examples](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_PutObjectLockConfiguration_section.html) in the *Amazon S3 API Reference*.

For an interactive scenario demonstrating different Object Lock features using the AWS SDKs, see [Code examples](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_Scenario_ObjectLock_section.html) in the *Amazon S3 API Reference*.

For general information about using different AWS SDKs, see [Developing with Amazon S3 using the AWS SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/sdk-general-information-section.html) in the *Amazon S3 API Reference*.

# Backing up your Amazon S3 data
<a name="backup-for-s3"></a>

Amazon S3 is natively integrated with AWS Backup, a fully managed, policy-based service that you can use to centrally define backup policies to protect your data in Amazon S3. After you define your backup policies and assign Amazon S3 resources to the policies, AWS Backup automates the creation of Amazon S3 backups and securely stores the backups in an encrypted backup vault that you designate in your backup plan. 

When using AWS Backup for Amazon S3, you can perform the following actions:
+ Create continuous backups and periodic backups. Continuous backups are useful for point-in-time restores, and periodic backups are useful for meeting long-term data-retention needs.
+ Automate backup scheduling and retention by centrally configuring backup policies.
+ Restore backups of Amazon S3 data to a point in time that you specify.

Along with AWS Backup, you can use S3 Versioning and S3 Replication to help recover from accidental deletions and perform your own self-recovery operations. 

**Prerequisites**  
You must activate [S3 Versioning](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html) on your bucket before AWS Backup can back it up. 
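
If versioning isn't enabled yet, you can activate it from the AWS CLI. The following is a minimal sketch that assumes the same example bucket name used elsewhere in this guide:

```
aws s3api put-bucket-versioning --bucket amzn-s3-demo-bucket1 --versioning-configuration Status=Enabled
```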

**Note**  
We recommend that you [set a lifecycle expiration rule for versioning-enabled buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-configuration-examples.html#lifecycle-config-conceptual-ex6) that are being backed up. If you don't set a lifecycle expiration rule, your Amazon S3 storage costs might increase because AWS Backup retains all versions of your Amazon S3 data.
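
For example, the following `put-bucket-lifecycle-configuration` sketch expires noncurrent object versions 30 days after they become noncurrent. The bucket name and the 30-day value are illustrative, so adjust them to your own retention needs:

```
aws s3api put-bucket-lifecycle-configuration --bucket amzn-s3-demo-bucket1 --lifecycle-configuration '{
  "Rules": [
    {
      "ID": "ExpireNoncurrentVersions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionExpiration": { "NoncurrentDays": 30 }
    }
  ]
}'
```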

**Getting started**  
To get started with AWS Backup for Amazon S3, see [Creating Amazon S3 backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html) in the *AWS Backup Developer Guide*.

**Restrictions and limitations**  
To learn about the limitations, see [Creating Amazon S3 backups](https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html) in the *AWS Backup Developer Guide*.