

# Setting up live replication overview
<a name="replication-how-setup"></a>

**Note**  
Objects that existed before you set up replication aren't replicated automatically. In other words, Amazon S3 doesn't replicate objects retroactively. To replicate objects that were created before your replication configuration, use S3 Batch Replication. For more information about configuring Batch Replication, see [Replicating existing objects](s3-batch-replication-batch.md).

To enable live replication—Same-Region Replication (SRR) or Cross-Region Replication (CRR)—add a replication configuration to your source bucket. This configuration tells Amazon S3 to replicate objects as specified. In the replication configuration, you must provide the following:
+ **The destination buckets** – The bucket or buckets where you want Amazon S3 to replicate the objects.
+ **The objects that you want to replicate** – You can replicate all objects in the source bucket or a subset of objects. You identify a subset by providing a [key name prefix](https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#keyprefix), one or more object tags, or both in the configuration.

  For example, if you configure a replication rule to replicate only objects with the key name prefix `Tax/`, Amazon S3 replicates objects with keys such as `Tax/doc1` or `Tax/doc2`. But it doesn't replicate objects with the key `Legal/doc3`. If you specify both a prefix and one or more tags, Amazon S3 replicates only objects that have the specific key prefix and tags.
+ **An AWS Identity and Access Management (IAM) role** – Amazon S3 assumes this IAM role to replicate objects on your behalf. For more information about creating this IAM role and managing permissions, see [Setting up permissions for live replication](setting-repl-config-perm-overview.md).
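The prefix-and-tag selection semantics described above can be sketched as a small helper. This is a Python illustration of the matching logic only, not an AWS API; the function name and parameters are hypothetical:

```python
# Illustrative sketch of replication-rule filter semantics: a rule that
# specifies both a key prefix and tags applies only to objects that match
# both (logical AND). This is the selection logic only, not an AWS API.

def rule_applies(key, object_tags, prefix=None, required_tags=None):
    """Return True if an object (key plus tags) matches the rule's filter."""
    if prefix is not None and not key.startswith(prefix):
        return False
    for tag_key, tag_value in (required_tags or {}).items():
        if object_tags.get(tag_key) != tag_value:
            return False
    return True

# Objects under Tax/ match a prefix-only rule; Legal/ objects don't.
print(rule_applies("Tax/doc1", {}, prefix="Tax/"))    # True
print(rule_applies("Legal/doc3", {}, prefix="Tax/"))  # False
```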

In addition to these minimum requirements, you can choose the following options: 
+ **Replica storage class** – By default, Amazon S3 stores object replicas using the same storage class as the source object. You can specify a different storage class for the replicas.
+ **Replica ownership** – Amazon S3 assumes that an object replica continues to be owned by the owner of the source object. So when it replicates objects, it also replicates the corresponding object access control list (ACL) or S3 Object Ownership setting. If the source and destination buckets are owned by different AWS accounts, you can configure replication to change the owner of a replica to the AWS account that owns the destination bucket. For more information, see [Changing the replica owner](replication-change-owner.md).

You can configure replication by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), AWS SDKs, or the Amazon S3 REST API. For detailed walkthroughs of how to set up replication, see [Examples for configuring live replication](replication-example-walkthroughs.md).

Amazon S3 provides REST API operations to support setting up replication rules. For more information, see the following topics in the *Amazon Simple Storage Service API Reference*:
+ [PutBucketReplication](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketReplication.html)
+ [GetBucketReplication](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketReplication.html)
+ [DeleteBucketReplication](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketReplication.html)
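These operations are also exposed through the AWS SDKs. The following Python (boto3) fragment is a rough sketch: it builds a minimal JSON-format replication configuration and shows where the API calls would go. The bucket names and role ARN are placeholders, and the client calls are left commented out because they require AWS credentials:

```python
# Sketch: building the JSON-format replication configuration that the
# AWS CLI and SDKs accept. The role ARN and bucket ARNs below are
# hypothetical placeholders.
import json

def build_replication_config(role_arn, destination_bucket_arn):
    """Build a minimal one-rule replication configuration."""
    return {
        "Role": role_arn,
        "Rules": [
            {
                "Status": "Enabled",
                "Priority": 1,
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Filter": {"Prefix": ""},  # empty prefix applies to all objects
                "Destination": {"Bucket": destination_bucket_arn},
            }
        ],
    }

config = build_replication_config(
    "arn:aws:iam::123456789012:role/replication-role",   # hypothetical role
    "arn:aws:s3:::amzn-s3-demo-destination-bucket",
)
print(json.dumps(config, indent=2))

# With credentials configured, you would apply, read back, and remove it:
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_replication(Bucket="amzn-s3-demo-source-bucket",
#                           ReplicationConfiguration=config)
# s3.get_bucket_replication(Bucket="amzn-s3-demo-source-bucket")
# s3.delete_bucket_replication(Bucket="amzn-s3-demo-source-bucket")
```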

**Topics**
+ [Replication configuration file elements](replication-add-config.md)
+ [Setting up permissions for live replication](setting-repl-config-perm-overview.md)
+ [Examples for configuring live replication](replication-example-walkthroughs.md)

# Replication configuration file elements
<a name="replication-add-config"></a>

Amazon S3 stores a replication configuration as XML. If you're configuring replication programmatically through the Amazon S3 REST API, you specify the various elements of your replication configuration in this XML file. If you're configuring replication through the AWS Command Line Interface (AWS CLI), you specify your replication configuration using JSON format. For JSON examples, see the walkthroughs in [Examples for configuring live replication](replication-example-walkthroughs.md).

**Note**  
The latest version of the replication configuration XML format is V2. XML V2 replication configurations are those that contain the `<Filter>` element for rules, and rules that specify S3 Replication Time Control (S3 RTC).  
To see your replication configuration version, you can use the `GetBucketReplication` API operation. For more information, see [GetBucketReplication](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketReplication.html) in the *Amazon Simple Storage Service API Reference*.   
For backward compatibility, Amazon S3 continues to support the XML V1 replication configuration format. If you've used the XML V1 format, see [Backward compatibility considerations](#replication-backward-compat-considerations).

In the replication configuration XML file, you must specify an AWS Identity and Access Management (IAM) role and one or more rules, as shown in the following example:

```
<ReplicationConfiguration>
    <Role>IAM-role-ARN</Role>
    <Rule>
        ...
    </Rule>
    <Rule>
         ... 
    </Rule>
     ...
</ReplicationConfiguration>
```

Amazon S3 can't replicate objects without your permission. You grant permissions to Amazon S3 with the IAM role that you specify in the replication configuration. Amazon S3 assumes this IAM role to replicate objects on your behalf. You must grant the required permissions to the IAM role first. For more information about managing permissions, see [Setting up permissions for live replication](setting-repl-config-perm-overview.md).

You add only one rule to a replication configuration in the following scenarios:
+ You want to replicate all objects.
+ You want to replicate only a subset of objects. You identify the subset by adding a filter to the rule. In the filter, you specify an object key prefix, tags, or a combination of both to identify the subset of objects that the rule applies to. The filter matches objects against the exact values that you specify.

If you want to replicate different subsets of objects, you add multiple rules in a replication configuration. In each rule, you specify a filter that selects a different subset. For example, you might choose to replicate objects that have either `tax/` or `document/` key prefixes. To do this, you add two rules, one that specifies the `tax/` key prefix filter and another that specifies the `document/` key prefix. For more information about object key prefixes, see [Organizing objects using prefixes](using-prefixes.md).

The following sections provide additional information.

**Topics**
+ [Basic rule configuration](#replication-config-min-rule-config)
+ [Optional: Specifying a filter](#replication-config-optional-filter)
+ [Additional destination configurations](#replication-config-optional-dest-config)
+ [Example replication configurations](#replication-config-example-configs)
+ [Backward compatibility considerations](#replication-backward-compat-considerations)

## Basic rule configuration
<a name="replication-config-min-rule-config"></a>

Each rule must include the rule's status and priority. The rule must also indicate whether to replicate delete markers. 
+ The `<Status>` element indicates whether the rule is enabled or disabled by using the values `Enabled` or `Disabled`. If a rule is disabled, Amazon S3 doesn't perform the actions specified in the rule. 
+ The `<Priority>` element indicates which rule has precedence whenever two or more replication rules conflict. Amazon S3 attempts to replicate objects according to all replication rules. However, if there are two or more rules with the same destination bucket, then objects are replicated according to the rule with the highest priority. The higher the number, the higher the priority.
+ The `<DeleteMarkerReplication>` element indicates whether to replicate delete markers by using the values `Enabled` or `Disabled`.

In the `<Destination>` element configuration, you must provide the name of the destination bucket or buckets where you want Amazon S3 to replicate objects. 

The following example shows the minimum requirements for a V2 rule. For backward compatibility, Amazon S3 continues to support the XML V1 format. For more information, see [Backward compatibility considerations](#replication-backward-compat-considerations).

```
...
    <Rule>
        <ID>Rule-1</ID>
        <Status>Enabled-or-Disabled</Status>
        <Filter>
            <Prefix></Prefix>   
        </Filter>
        <Priority>integer</Priority>
        <DeleteMarkerReplication>
           <Status>Enabled-or-Disabled</Status>
        </DeleteMarkerReplication>
        <Destination>        
           <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket> 
        </Destination>    
    </Rule>
    <Rule>
         ...
    </Rule>
     ...
...
```

You can also specify other configuration options. For example, you might choose to use a storage class for object replicas that differs from the class for the source object. 

## Optional: Specifying a filter
<a name="replication-config-optional-filter"></a>

To choose a subset of objects that the rule applies to, add an optional filter. You can filter by object key prefix, object tags, or a combination of both. If you filter on both a key prefix and object tags, Amazon S3 combines the filters by using a logical `AND` operator. In other words, the rule applies to a subset of objects with both a specific key prefix and specific tags. 

**Filter based on object key prefix**  
To specify a rule with a filter based on an object key prefix, use the following XML. You can specify only one prefix per rule.

```
<Rule>
    ...
    <Filter>
        <Prefix>key-prefix</Prefix>   
    </Filter>
    ...
</Rule>
...
```

**Filter based on object tags**  
To specify a rule with a filter based on object tags, use the following XML. You can specify one or more object tags.

```
<Rule>
    ...
    <Filter>
        <And>
            <Tag>
                <Key>key1</Key>
                <Value>value1</Value>
            </Tag>
            <Tag>
                <Key>key2</Key>
                <Value>value2</Value>
            </Tag>
             ...
        </And>
    </Filter>
    ...
</Rule>
...
```

**Filter with a key prefix and object tags**  
To specify a rule filter with a combination of a key prefix and object tags, use the following XML. You wrap these filters in an `<And>` parent element. Amazon S3 performs a logical `AND` operation to combine these filters. In other words, the rule applies to a subset of objects with both a specific key prefix and specific tags. 

```
<Rule>
    ...
    <Filter>
        <And>
            <Prefix>key-prefix</Prefix>
            <Tag>
                <Key>key1</Key>
                <Value>value1</Value>
            </Tag>
            <Tag>
                <Key>key2</Key>
                <Value>value2</Value>
            </Tag>
             ...
        </And>
    </Filter>
    ...
</Rule>
...
```

**Note**  
If you specify a rule with an empty `<Filter>` element, your rule applies to all objects in your bucket.
When you're using tag-based replication rules with live replication, new objects must be tagged with the matching replication rule tag in the `PutObject` operation. Otherwise, the objects won't be replicated. If objects are tagged after the `PutObject` operation, those objects also won't be replicated.   
To replicate objects that have been tagged after the `PutObject` operation, you must use S3 Batch Replication. For more information about Batch Replication, see [Replicating existing objects](s3-batch-replication-batch.md).

## Additional destination configurations
<a name="replication-config-optional-dest-config"></a>

In the destination configuration, you specify the bucket or buckets where you want Amazon S3 to replicate objects. You can set configurations to replicate objects from one source bucket to one or more destination buckets. 

```
...
<Destination>        
    <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
</Destination>
...
```

You can add the following options in the `<Destination>` element.

**Topics**
+ [Specify storage class](#storage-class-configuration)
+ [Add multiple destination buckets](#multiple-destination-buckets-configuration)
+ [Specify different parameters for each replication rule with multiple destination buckets](#replication-rule-configuration)
+ [Change replica ownership](#replica-ownership-configuration)
+ [Enable S3 Replication Time Control](#rtc-configuration)
+ [Replicate objects created with server-side encryption by using AWS KMS](#sse-kms-configuration)

### Specify storage class
<a name="storage-class-configuration"></a>

You can specify the storage class for object replicas. By default, Amazon S3 uses the storage class of the source object to create object replicas. The following example specifies a storage class to override that default.

```
...
<Destination>
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
       <StorageClass>storage-class</StorageClass>
</Destination>
...
```

### Add multiple destination buckets
<a name="multiple-destination-buckets-configuration"></a>

You can add multiple destination buckets in a single replication configuration, as follows.

```
...
<Rule>
    <ID>Rule-1</ID>
    <Status>Enabled-or-Disabled</Status>
    <Priority>integer</Priority>
    <DeleteMarkerReplication>
       <Status>Enabled-or-Disabled</Status>
    </DeleteMarkerReplication>
    <Destination>        
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket1</Bucket> 
    </Destination>    
</Rule>
<Rule>
    <ID>Rule-2</ID>
    <Status>Enabled-or-Disabled</Status>
    <Priority>integer</Priority>
    <DeleteMarkerReplication>
       <Status>Enabled-or-Disabled</Status>
    </DeleteMarkerReplication>
    <Destination>        
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket2</Bucket> 
    </Destination>    
</Rule>
...
```

### Specify different parameters for each replication rule with multiple destination buckets
<a name="replication-rule-configuration"></a>

When adding multiple destination buckets in a single replication configuration, you can specify different parameters for each replication rule, as follows.

```
...
<Rule>
    <ID>Rule-1</ID>
    <Status>Enabled-or-Disabled</Status>
    <Priority>integer</Priority>
    <DeleteMarkerReplication>
       <Status>Disabled</Status>
    </DeleteMarkerReplication>
    <Metrics>
       <Status>Enabled</Status>
       <EventThreshold>
          <Minutes>15</Minutes>
       </EventThreshold>
    </Metrics>
    <Destination>        
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket1</Bucket> 
    </Destination>    
</Rule>
<Rule>
    <ID>Rule-2</ID>
    <Status>Enabled-or-Disabled</Status>
    <Priority>integer</Priority>
    <DeleteMarkerReplication>
       <Status>Enabled</Status>
    </DeleteMarkerReplication>
    <Metrics>
       <Status>Enabled</Status>
       <EventThreshold>
          <Minutes>15</Minutes>
       </EventThreshold>
    </Metrics>
    <ReplicationTime>
       <Status>Enabled</Status>
       <Time>
          <Minutes>15</Minutes>
       </Time>
    </ReplicationTime>
    <Destination>        
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket2</Bucket> 
    </Destination>    
</Rule>
...
```

### Change replica ownership
<a name="replica-ownership-configuration"></a>

When the source and destination buckets aren't owned by the same AWS account, you can change the ownership of the replica to the AWS account that owns the destination bucket. To do so, add the `<AccessControlTranslation>` element, whose child `<Owner>` element takes the value `Destination`.

```
...
<Destination>
   <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
   <Account>destination-bucket-owner-account-id</Account>
   <AccessControlTranslation>
       <Owner>Destination</Owner>
   </AccessControlTranslation>
</Destination>
...
```

If you don't add the `<AccessControlTranslation>` element to the replication configuration, the replicas are owned by the same AWS account that owns the source object. For more information, see [Changing the replica owner](replication-change-owner.md).

### Enable S3 Replication Time Control
<a name="rtc-configuration"></a>

You can enable S3 Replication Time Control (S3 RTC) in your replication configuration. S3 RTC replicates most objects in seconds and 99.99 percent of objects within 15 minutes (backed by a service-level agreement). 

**Note**  
Only a value of `<Minutes>15</Minutes>` is accepted for the `<EventThreshold>` and `<Time>` elements.

```
...
<Destination>
  <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
  <Metrics>
    <Status>Enabled</Status>
    <EventThreshold>
      <Minutes>15</Minutes> 
    </EventThreshold>
  </Metrics>
  <ReplicationTime>
    <Status>Enabled</Status>
    <Time>
      <Minutes>15</Minutes>
    </Time>
  </ReplicationTime>
</Destination>
...
```

For more information, see [Meeting compliance requirements with S3 Replication Time Control](replication-time-control.md). For API examples, see [PutBucketReplication](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketReplication.html) in the *Amazon Simple Storage Service API Reference*.
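In the JSON format that the CLI and SDKs accept, the destination settings above can be built by a small helper that also enforces the 15-minute constraint. This is a hedged sketch; the helper name is ours, not an AWS API:

```python
# Sketch: building the Metrics/ReplicationTime destination settings for
# S3 Replication Time Control (S3 RTC). S3 accepts only a 15-minute value
# for both thresholds, so the helper rejects anything else.

def rtc_destination(bucket_arn, minutes=15):
    """Return a Destination block with S3 RTC and replication metrics enabled."""
    if minutes != 15:
        raise ValueError("S3 RTC accepts only a 15-minute threshold")
    return {
        "Bucket": bucket_arn,
        "Metrics": {
            "Status": "Enabled",
            "EventThreshold": {"Minutes": minutes},
        },
        "ReplicationTime": {
            "Status": "Enabled",
            "Time": {"Minutes": minutes},
        },
    }

dest = rtc_destination("arn:aws:s3:::amzn-s3-demo-destination-bucket")
```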

### Replicate objects created with server-side encryption by using AWS KMS
<a name="sse-kms-configuration"></a>

Your source bucket might contain objects that were created with server-side encryption by using AWS Key Management Service (AWS KMS) keys (SSE-KMS). By default, Amazon S3 doesn't replicate these objects. You can optionally direct Amazon S3 to replicate these objects. To do so, first explicitly opt into this feature by adding the `<SourceSelectionCriteria>` element. Then provide the AWS KMS key (for the AWS Region of the destination bucket) to use for encrypting object replicas. The following example shows how to specify these elements.

```
...
<SourceSelectionCriteria>
  <SseKmsEncryptedObjects>
    <Status>Enabled</Status>
  </SseKmsEncryptedObjects>
</SourceSelectionCriteria>
<Destination>
  <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
  <EncryptionConfiguration>
    <ReplicaKmsKeyID>AWS KMS key ID to use for encrypting object replicas</ReplicaKmsKeyID>
  </EncryptionConfiguration>
</Destination>
...
```

For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
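In the JSON format used by the CLI and SDKs, the same opt-in looks roughly like the following sketch. The helper name and the KMS key ARN are hypothetical placeholders:

```python
# Sketch: the rule-level pieces that opt in to replicating SSE-KMS objects.
# SourceSelectionCriteria is the explicit opt-in; EncryptionConfiguration
# names the KMS key (in the destination bucket's Region) for the replicas.

def sse_kms_rule_fragment(destination_bucket_arn, replica_kms_key_arn):
    """Return the rule fragments that enable SSE-KMS object replication."""
    return {
        "SourceSelectionCriteria": {
            "SseKmsEncryptedObjects": {"Status": "Enabled"}  # explicit opt-in
        },
        "Destination": {
            "Bucket": destination_bucket_arn,
            "EncryptionConfiguration": {"ReplicaKmsKeyID": replica_kms_key_arn},
        },
    }

fragment = sse_kms_rule_fragment(
    "arn:aws:s3:::amzn-s3-demo-destination-bucket",
    "arn:aws:kms:us-west-2:123456789012:key/EXAMPLE",  # hypothetical key ARN
)
```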

## Example replication configurations
<a name="replication-config-example-configs"></a>

To get started, you can add the following example replication configurations to your bucket, as appropriate.

**Important**  
To add a replication configuration to a bucket, you must have the `iam:PassRole` permission. This permission allows you to pass the IAM role that grants Amazon S3 replication permissions. You specify the IAM role by providing the Amazon Resource Name (ARN) that is used in the `<Role>` element in the replication configuration XML. For more information, see [Granting a User Permissions to Pass a Role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*.

**Example 1: Replication configuration with one rule**  
The following basic replication configuration specifies one rule. The rule specifies an IAM role that Amazon S3 can assume and a single destination bucket for object replicas. The `<Status>` element value of `Enabled` indicates that the rule is in effect.  

```
<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Status>Enabled</Status>

    <Destination>
      <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
    </Destination>
  </Rule>
</ReplicationConfiguration>
```
To choose a subset of objects to replicate, you can add a filter. In the following configuration, the filter specifies an object key prefix. This rule applies to objects that have the prefix `Tax/` in their key names.   

```
<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Status>Enabled</Status>
    <Priority>1</Priority>
    <DeleteMarkerReplication>
       <Status>string</Status>
    </DeleteMarkerReplication>

    <Filter>
       <Prefix>Tax/</Prefix>
    </Filter>

    <Destination>
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
    </Destination>

  </Rule>
</ReplicationConfiguration>
```
If you specify the `<Filter>` element, you must also include the `<Priority>` and `<DeleteMarkerReplication>` elements. In this example, the value that you set for the `<Priority>` element is irrelevant because there is only one rule.  
In the following configuration, the filter specifies one prefix and two tags. The rule applies to the subset of objects that have the specified key prefix and tags. Specifically, it applies to objects that have the `Tax/` prefix in their key names and the two specified object tags.  

```
<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Status>Enabled</Status>
    <Priority>1</Priority>
    <DeleteMarkerReplication>
       <Status>string</Status>
    </DeleteMarkerReplication>

    <Filter>
        <And>
           <Prefix>Tax/</Prefix>
           <Tag>
              <Key>tagA</Key>
              <Value>valueA</Value>
           </Tag>
           <Tag>
              <Key>tagB</Key>
              <Value>valueB</Value>
           </Tag>
        </And>
    </Filter>

    <Destination>
        <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
    </Destination>

  </Rule>
</ReplicationConfiguration>
```
You can specify a storage class for the object replicas as follows:  

```
<?xml version="1.0" encoding="UTF-8"?>

<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Status>Enabled</Status>
    <Destination>
       <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
       <StorageClass>storage-class</StorageClass>
    </Destination>
  </Rule>
</ReplicationConfiguration>
```
You can specify any storage class that Amazon S3 supports.

**Example 2: Replication configuration with two rules**  

**Example**  
In the following replication configuration, the rules specify the following:  
+ Each rule filters on a different key prefix so that each rule applies to a distinct subset of objects. In this example, Amazon S3 replicates objects with the key names *`Tax/doc1.pdf`* and *`Project/project1.txt`*, but it doesn't replicate objects with the key name *`PersonalDoc/documentA`*. 
+ Although both rules specify a value for the `<Priority>` element, the rule priority is irrelevant because the rules apply to two distinct sets of objects. The next example shows what happens when rule priority is applied. 
+ The second rule specifies the S3 Standard-IA storage class for object replicas. Amazon S3 uses the specified storage class for those object replicas.
   

```
<?xml version="1.0" encoding="UTF-8"?>

<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Role>arn:aws:iam::account-id:role/role-name</Role>
  <Rule>
    <Status>Enabled</Status>
    <Priority>1</Priority>
    <DeleteMarkerReplication>
       <Status>string</Status>
    </DeleteMarkerReplication>
    <Filter>
        <Prefix>Tax</Prefix>
    </Filter>
    <Destination>
      <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
    </Destination>
     ...
  </Rule>
 <Rule>
    <Status>Enabled</Status>
    <Priority>2</Priority>
    <DeleteMarkerReplication>
       <Status>string</Status>
    </DeleteMarkerReplication>
    <Filter>
        <Prefix>Project</Prefix>
    </Filter>
    <Destination>
      <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
     <StorageClass>STANDARD_IA</StorageClass>
    </Destination>
     ...
  </Rule>


</ReplicationConfiguration>
```

**Example 3: Replication configuration with two rules with overlapping prefixes**  <a name="overlap-rule-example"></a>
In this configuration, the two rules specify filters with overlapping key prefixes, *`star`* and *`starship`*. Both rules apply to objects with the key name *`starship-x`*. In this case, Amazon S3 uses the rule priority to determine which rule to apply. The higher the number, the higher the priority.  

```
<ReplicationConfiguration>

  <Role>arn:aws:iam::account-id:role/role-name</Role>

  <Rule>
    <Status>Enabled</Status>
    <Priority>1</Priority>
    <DeleteMarkerReplication>
       <Status>string</Status>
    </DeleteMarkerReplication>
    <Filter>
        <Prefix>star</Prefix>
    </Filter>
    <Destination>
      <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
    </Destination>
  </Rule>
  <Rule>
    <Status>Enabled</Status>
    <Priority>2</Priority>
    <DeleteMarkerReplication>
       <Status>string</Status>
    </DeleteMarkerReplication>
    <Filter>
        <Prefix>starship</Prefix>
    </Filter>    
    <Destination>
      <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
    </Destination>
  </Rule>
</ReplicationConfiguration>
```
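The precedence behavior in this example can be sketched as follows. This is a Python illustration of the selection logic only, not an AWS API:

```python
# Sketch: when two or more rules' prefixes match the same key and target the
# same destination, S3 applies the rule with the highest Priority value.

def applicable_rule(key, rules):
    """Among rules whose prefix matches the key, pick the highest priority."""
    matches = [r for r in rules if key.startswith(r["Prefix"])]
    return max(matches, key=lambda r: r["Priority"]) if matches else None

rules = [
    {"ID": "star-rule", "Prefix": "star", "Priority": 1},
    {"ID": "starship-rule", "Prefix": "starship", "Priority": 2},
]

# "starship-x" matches both prefixes; the higher-priority rule wins.
print(applicable_rule("starship-x", rules)["ID"])  # starship-rule
```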

**Example 4: Example walkthroughs**  
For example walkthroughs, see [Examples for configuring live replication](replication-example-walkthroughs.md).

For more information about the XML structure of replication configuration, see [PutBucketReplication](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTreplication.html) in the *Amazon Simple Storage Service API Reference*. 

## Backward compatibility considerations
<a name="replication-backward-compat-considerations"></a>

The latest version of the replication configuration XML format is V2. XML V2 replication configurations are those that contain the `<Filter>` element for rules, and rules that specify S3 Replication Time Control (S3 RTC).

To see your replication configuration version, you can use the `GetBucketReplication` API operation. For more information, see [GetBucketReplication](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketReplication.html) in the *Amazon Simple Storage Service API Reference*. 

For backward compatibility, Amazon S3 continues to support the XML V1 replication configuration format. If you've used the XML V1 replication configuration format, consider the following issues that affect backward compatibility:
+ The replication configuration XML V2 format includes the `<Filter>` element for rules. With the `<Filter>` element, you can specify object filters based on the object key prefix, tags, or both to scope the objects that the rule applies to. The replication configuration XML V1 format supports filtering based only on the key prefix. In that case, you add the `<Prefix>` element directly as a child element of the `<Rule>` element, as in the following example:

  ```
  <?xml version="1.0" encoding="UTF-8"?>
  <ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Role>arn:aws:iam::account-id:role/role-name</Role>
    <Rule>
      <Status>Enabled</Status>
      <Prefix>key-prefix</Prefix>
      <Destination>
         <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
      </Destination>
  
    </Rule>
  </ReplicationConfiguration>
  ```
+ When you delete an object from your source bucket without specifying an object version ID, Amazon S3 adds a delete marker. If you use the replication configuration XML V1 format, Amazon S3 replicates only delete markers that result from user actions. In other words, Amazon S3 replicates the delete marker only if a user deletes an object. If an expired object is removed by Amazon S3 (as part of a lifecycle action), Amazon S3 doesn't replicate the delete marker. 

  In the replication configuration XML V2 format, you can enable delete marker replication for non-tag-based rules. For more information, see [Replicating delete markers between buckets](delete-marker-replication.md). 

 

# Setting up permissions for live replication
<a name="setting-repl-config-perm-overview"></a>

When you set up live replication in Amazon S3, you must configure the necessary permissions as follows:
+ You must grant the AWS Identity and Access Management (IAM) principal (user or role) that creates the replication rules a certain set of permissions.
+ Amazon S3 needs permissions to replicate objects on your behalf. You grant these permissions by creating an IAM role and then specifying that role in your replication configuration.
+ When the source and destination buckets aren't owned by the same AWS account, the owner of the destination bucket must also grant the source bucket owner permissions to store the replicas.

**Note**  
If you're using S3 Batch Operations to replicate objects on demand instead of setting up live replication, a different IAM role and policies are required for S3 Batch Replication. For a Batch Replication IAM role and policy examples, see [Configuring an IAM role for S3 Batch Replication](s3-batch-replication-policies.md).

**Topics**
+ [Step 1: Granting permissions to the IAM principal who's creating replication rules](#setting-repl-config-role)
+ [Step 2: Creating an IAM role for Amazon S3 to assume](#setting-repl-config-same-acctowner)
+ [(Optional) Step 3: Granting permissions when the source and destination buckets are owned by different AWS accounts](#setting-repl-config-crossacct)
+ [(Optional) Step 4: Granting permissions to change replica ownership](#change-replica-ownership)

## Step 1: Granting permissions to the IAM principal who's creating replication rules
<a name="setting-repl-config-role"></a>

The IAM user or role that you use to create replication rules must have permissions to create replication rules for one-way or two-way replication. Without these permissions, you can't create replication rules. For more information, see [IAM Identities](https://docs.aws.amazon.com/IAM/latest/UserGuide/id.html) in the *IAM User Guide*.

The user or role needs permissions for the following actions:
+ `iam:AttachRolePolicy`
+ `iam:CreatePolicy`
+ `iam:CreateServiceLinkedRole`
+ `iam:PassRole`
+ `iam:PutRolePolicy`
+ `s3:GetBucketVersioning`
+ `s3:GetObjectVersionAcl`
+ `s3:GetObjectVersionForReplication`
+ `s3:GetReplicationConfiguration`
+ `s3:PutReplicationConfiguration`

Following is a sample IAM policy that includes these actions.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetAccessPoint",
                "s3:GetAccountPublicAccessBlock",
                "s3:GetBucketAcl",
                "s3:GetBucketLocation",
                "s3:GetBucketPolicyStatus",
                "s3:GetBucketPublicAccessBlock",
                "s3:ListAccessPoints",
                "s3:ListAllMyBuckets",
                "s3:PutReplicationConfiguration",
                "s3:GetReplicationConfiguration",
                "s3:GetBucketVersioning",
                "s3:GetObjectVersionForReplication",
                "s3:GetObjectVersionAcl",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:GetObjectVersion",
                "s3:GetBucketOwnershipControls",
                "s3:PutBucketOwnershipControls",
                "s3:GetObjectLegalHold",
                "s3:GetObjectRetention",
                "s3:GetBucketObjectLockConfiguration"
            ],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket1-*",
                "arn:aws:s3:::amzn-s3-demo-bucket2-*/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:List*AccessPoint*",
                "s3:GetMultiRegion*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:Get*",
                "iam:CreateServiceLinkedRole",
                "iam:CreateRole",
                "iam:PassRole"
            ],
            "Resource": "arn:aws:iam::*:role/service-role/s3*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:List*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:AttachRolePolicy",
                "iam:PutRolePolicy",
                "iam:CreatePolicy"
              ],
            "Resource": [
                "arn:aws:iam::*:policy/service-role/s3*",
                "arn:aws:iam::*:role/service-role/s3*"
            ]
        }
    ]
}
```

------
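If you prefer to script this setup, the following Python sketch assembles the same minimum actions into a policy document for your own bucket names. The function name and bucket names are illustrative, not part of any AWS API; you could pass the resulting JSON to the AWS CLI (for example, `aws iam put-user-policy --policy-document file://policy.json`) or an SDK.

```python
import json

def build_rule_creator_policy(source_bucket: str, destination_bucket: str) -> dict:
    """Assemble the minimum permissions for a principal that creates
    replication rules: S3 actions scoped to the replication buckets, and
    IAM actions scoped to the service-role path that the console uses."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetBucketVersioning",
                    "s3:GetObjectVersionAcl",
                    "s3:GetObjectVersionForReplication",
                    "s3:GetReplicationConfiguration",
                    "s3:PutReplicationConfiguration",
                ],
                "Resource": [
                    f"arn:aws:s3:::{source_bucket}",
                    f"arn:aws:s3:::{source_bucket}/*",
                    f"arn:aws:s3:::{destination_bucket}",
                ],
            },
            {
                "Effect": "Allow",
                "Action": [
                    "iam:AttachRolePolicy",
                    "iam:CreatePolicy",
                    "iam:CreateServiceLinkedRole",
                    "iam:PassRole",
                    "iam:PutRolePolicy",
                ],
                "Resource": [
                    "arn:aws:iam::*:policy/service-role/s3*",
                    "arn:aws:iam::*:role/service-role/s3*",
                ],
            },
        ],
    }

policy = build_rule_creator_policy("amzn-s3-demo-source-bucket",
                                   "amzn-s3-demo-destination-bucket")
print(json.dumps(policy, indent=4))
```

This keeps the S3 and IAM permissions in separate statements, as in the sample policy, so that each action applies only to the resource types it's meant for.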

## Step 2: Creating an IAM role for Amazon S3 to assume
<a name="setting-repl-config-same-acctowner"></a>



By default, all Amazon S3 resources—buckets, objects, and related subresources—are private, and only the resource owner can access the resource. Amazon S3 needs permissions to read and replicate objects from the source bucket. You grant these permissions by creating an IAM role and specifying that role in your replication configuration. 

This section explains the trust policy and the minimum required permissions policy that are attached to this IAM role. The example walkthroughs provide step-by-step instructions to create an IAM role. For more information, see [Examples for configuring live replication](replication-example-walkthroughs.md).

**Note**  
If you're using the console to create your replication configuration, we recommend that you skip this section and instead have the console create this IAM role and the necessary trust and permission policies for you.

The *trust policy* identifies which principal identities can assume the IAM role. The *permissions policy* specifies which actions the IAM role can perform, on which resources, and under what conditions. 
+ The following example shows a *trust policy* where you identify Amazon S3 as the AWS service principal that can assume the role:

------
#### [ JSON ]


  ```
  {
     "Version":"2012-10-17",		 	 	 
     "Statement":[
        {
           "Effect":"Allow",
           "Principal":{
              "Service":"s3.amazonaws.com"
           },
           "Action":"sts:AssumeRole"
        }
     ]
  }
  ```

------
+ The following example shows a *trust policy* where you identify Amazon S3 and S3 Batch Operations as service principals that can assume the role. Use this approach if you're creating a Batch Replication job. For more information, see [Create a Batch Replication job for new replication rules or destinations](s3-batch-replication-new-config.md).

------
#### [ JSON ]


  ```
  {
     "Version":"2012-10-17",		 	 	 
     "Statement":[ 
        {
           "Effect":"Allow",
           "Principal":{
              "Service": [
                "s3.amazonaws.com",
                "batchoperations.s3.amazonaws.com"
             ]
           },
           "Action":"sts:AssumeRole"
        }
     ]
  }
  ```

------

  For more information about IAM roles, see [IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) in the *IAM User Guide*.
+ The following example shows the *permissions policy*, where you grant the IAM role permissions to perform replication tasks on your behalf. When Amazon S3 assumes the role, it has the permissions that you specify in this policy. In this policy, `amzn-s3-demo-source-bucket` is the source bucket, and `amzn-s3-demo-destination-bucket` is the destination bucket.

------
#### [ JSON ]


  ```
  {
     "Version":"2012-10-17",		 	 	 
     "Statement": [
        {
           "Effect": "Allow",
           "Action": [
              "s3:GetReplicationConfiguration",
              "s3:ListBucket"
           ],
           "Resource": [
              "arn:aws:s3:::amzn-s3-demo-source-bucket"
           ]
        },
        {
           "Effect": "Allow",
           "Action": [
              "s3:GetObjectVersionForReplication",
              "s3:GetObjectVersionAcl",
              "s3:GetObjectVersionTagging"
           ],
           "Resource": [
              "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
           ]
        },
        {
           "Effect": "Allow",
           "Action": [
              "s3:ReplicateObject",
              "s3:ReplicateDelete",
              "s3:ReplicateTags"
           ],
           "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
        }
     ]
  }
  ```

------

  The permissions policy grants permissions for the following actions:
  +  `s3:GetReplicationConfiguration` and `s3:ListBucket` – Permissions for these actions on the `amzn-s3-demo-source-bucket` bucket allow Amazon S3 to retrieve the replication configuration and list the bucket content. (The current permissions model requires the `s3:ListBucket` permission for accessing delete markers.)
  + `s3:GetObjectVersionForReplication` and `s3:GetObjectVersionAcl` – Permissions for these actions are granted on all objects to allow Amazon S3 to get a specific object version and the access control list (ACL) associated with the objects.
  + `s3:ReplicateObject` and `s3:ReplicateDelete` – Permissions for these actions on all objects in the `amzn-s3-demo-destination-bucket` bucket allow Amazon S3 to replicate objects or delete markers to the destination bucket. For information about delete markers, see [How delete operations affect replication](replication-what-is-isnot-replicated.md#replication-delete-op). 
**Note**  
Permissions for the `s3:ReplicateObject` action on the `amzn-s3-demo-destination-bucket` bucket also allow replication of metadata such as object tags and ACLs. Therefore, you don't need to explicitly grant permission for the `s3:ReplicateTags` action.
  + `s3:GetObjectVersionTagging` – Permissions for this action on objects in the `amzn-s3-demo-source-bucket` bucket allow Amazon S3 to read object tags for replication. For more information about object tags, see [Categorizing your objects using tags](object-tagging.md). If Amazon S3 doesn't have the `s3:GetObjectVersionTagging` permission, it replicates the objects, but not the object tags.

  For a list of Amazon S3 actions, see [ Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html#list_amazons3-actions-as-permissions) in the *Service Authorization Reference*.

  For more information about the permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).
**Important**  
The AWS account that owns the IAM role must have permissions for the actions that it grants to the IAM role.   
For example, suppose that the source bucket contains objects owned by another AWS account. The owner of the objects must explicitly grant the AWS account that owns the IAM role the required permissions through the objects' access control lists (ACLs). Otherwise, Amazon S3 can't access the objects, and replication of the objects fails. For information about ACL permissions, see [Access control list (ACL) overview](acl-overview.md).  
  
The permissions described here are related to the minimum replication configuration. If you choose to add optional replication configurations, you must grant additional permissions to Amazon S3:   
To replicate encrypted objects, you also need to grant the necessary AWS Key Management Service (AWS KMS) key permissions. For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
To use Object Lock with replication, you must grant two additional permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two additional permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission statement, that statement satisfies the requirement. For more information, see [Using Object Lock with S3 Replication](object-lock-managing.md#object-lock-managing-replication).
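With the role in place, you reference its ARN in the replication configuration that you attach to the source bucket. The following Python sketch builds a minimal configuration document in the shape that the `PutBucketReplication` API expects; the role ARN, rule ID, and bucket names are placeholders. When a rule uses `Filter`, the newer (V2) configuration schema also requires an explicit `DeleteMarkerReplication` status.

```python
import json

# Placeholder names -- substitute your own role ARN and bucket names.
replication_config = {
    "Role": "arn:aws:iam::111122223333:role/service-role/replication-role",
    "Rules": [
        {
            "ID": "replicate-tax-documents",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {"Prefix": "Tax/"},
            # Required when the rule uses Filter (V2 schema).
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket"
            },
        }
    ],
}

print(json.dumps(replication_config, indent=4))
```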

## (Optional) Step 3: Granting permissions when the source and destination buckets are owned by different AWS accounts
<a name="setting-repl-config-crossacct"></a>

When the source and destination buckets aren't owned by the same AWS account, the owner of the destination bucket must also add a bucket policy to grant the owner of the source bucket permissions to perform replication actions, as shown in the following example. In this example policy, `amzn-s3-demo-destination-bucket` is the destination bucket.

You can also use the Amazon S3 console to automatically generate this bucket policy for you. For more information, see [Enable receiving replicated objects from a source bucket](#receiving-replicated-objects).

**Note**  
The ARN format of the role might appear different. If the role was created by using the console, the ARN format is `arn:aws:iam::account-ID:role/service-role/role-name`. If the role was created by using the AWS CLI, the ARN format is `arn:aws:iam::account-ID:role/role-name`. For more information, see [IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html) in the *IAM User Guide*. 

------
#### [ JSON ]


```
{
    "Version":"2012-10-17",		 	 	 
    "Id": "PolicyForDestinationBucket",
    "Statement": [
        {
            "Sid": "Permissions on objects",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/service-role/source-account-IAM-role"
            },
            "Action": [
                "s3:ReplicateDelete",
                "s3:ReplicateObject"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
        },
        {
            "Sid": "Permissions on bucket",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/service-role/source-account-IAM-role"
            },
            "Action": [
                "s3:List*",
                "s3:GetBucketVersioning",
                "s3:PutBucketVersioning"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket"
        }
    ]
}
```

------

For an example, see [Configuring replication for buckets in different accounts](replication-walkthrough-2.md).
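If you manage several replication pairs, you might generate this destination bucket policy rather than hand-editing it for each pair. The following Python sketch parameterizes the preceding example policy by role ARN and destination bucket name; the function name is illustrative.

```python
import json

def build_destination_bucket_policy(replication_role_arn: str,
                                    destination_bucket: str) -> dict:
    """Build a cross-account destination bucket policy that grants the
    source account's replication role the replication actions."""
    bucket_arn = f"arn:aws:s3:::{destination_bucket}"
    return {
        "Version": "2012-10-17",
        "Id": "PolicyForDestinationBucket",
        "Statement": [
            {
                "Sid": "Permissions on objects",
                "Effect": "Allow",
                "Principal": {"AWS": replication_role_arn},
                "Action": ["s3:ReplicateDelete", "s3:ReplicateObject"],
                "Resource": f"{bucket_arn}/*",
            },
            {
                "Sid": "Permissions on bucket",
                "Effect": "Allow",
                "Principal": {"AWS": replication_role_arn},
                "Action": [
                    "s3:List*",
                    "s3:GetBucketVersioning",
                    "s3:PutBucketVersioning",
                ],
                "Resource": bucket_arn,
            },
        ],
    }

bucket_policy = build_destination_bucket_policy(
    "arn:aws:iam::111122223333:role/service-role/source-account-IAM-role",
    "amzn-s3-demo-destination-bucket",
)
print(json.dumps(bucket_policy, indent=4))
```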

If objects in the source bucket are tagged, note the following:
+ If the source bucket owner grants Amazon S3 permission for the `s3:GetObjectVersionTagging` and `s3:ReplicateTags` actions to replicate object tags (through the IAM role), Amazon S3 replicates the tags along with the objects. For information about the IAM role, see [Step 2: Creating an IAM role for Amazon S3 to assume](#setting-repl-config-same-acctowner).
+ If the owner of the destination bucket doesn't want to replicate the tags, they can add the following statement to the destination bucket policy to explicitly deny permission for the `s3:ReplicateTags` action. In this policy, `amzn-s3-demo-destination-bucket` is the destination bucket.

  ```
  ...
     "Statement":[
        {
           "Effect":"Deny",
           "Principal":{
              "AWS":"arn:aws:iam::source-bucket-account-id:role/service-role/source-account-IAM-role"
           },
           "Action":"s3:ReplicateTags",
           "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
        }
     ]
  ...
  ```

**Note**  
If you want to replicate encrypted objects, you also must grant the necessary AWS Key Management Service (AWS KMS) key permissions. For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
To use Object Lock with replication, you must grant two additional permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two additional permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission statement, that statement satisfies the requirement. For more information, see [Using Object Lock with S3 Replication](object-lock-managing.md#object-lock-managing-replication). 

**Enable receiving replicated objects from a source bucket**  
Instead of manually adding the preceding policy to your destination bucket, you can quickly generate the policies needed to enable receiving replicated objects from a source bucket through the Amazon S3 console. 

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Buckets** list, choose the bucket that you want to use as a destination bucket.

1. Choose the **Management** tab, and scroll down to **Replication rules**.

1. For **Actions**, choose **Receive replicated objects**. 

   Follow the prompts and enter the AWS account ID of the source bucket account, and then choose **Generate policies**. The console generates an Amazon S3 bucket policy and a KMS key policy.

1. To add this policy to your existing bucket policy, either choose **Apply settings** or choose **Copy** to manually copy the changes. 

1. (Optional) Copy the AWS KMS policy to your desired KMS key policy in the AWS Key Management Service console. 

## (Optional) Step 4: Granting permissions to change replica ownership
<a name="change-replica-ownership"></a>

When different AWS accounts own the source and destination buckets, you can tell Amazon S3 to change the ownership of the replica to the AWS account that owns the destination bucket. To override the ownership of replicas, you must either grant some additional permissions or adjust the S3 Object Ownership settings for the destination bucket. For more information about owner override, see [Changing the replica owner](replication-change-owner.md).
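In the replication configuration itself, the owner override appears on the rule's `Destination` element. The following sketch shows the `Destination` fields that tell Amazon S3 to make the destination account the owner of the replicas; the account ID and bucket name are placeholders.

```python
# Destination element of a replication rule with the replica owner override.
# The account ID and bucket name are placeholders.
destination_with_owner_override = {
    "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
    # The AWS account that owns the destination bucket.
    "Account": "444455556666",
    # Transfer replica ownership to the destination bucket owner.
    "AccessControlTranslation": {"Owner": "Destination"},
}
```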

# Examples for configuring live replication
<a name="replication-example-walkthroughs"></a>

The following examples provide step-by-step walkthroughs that show how to configure live replication for common use cases. 

**Note**  
Live replication refers to Same-Region Replication (SRR) and Cross-Region Replication (CRR). Live replication doesn't replicate any objects that existed in the bucket before you set up replication. To replicate objects that existed before you set up replication, use on-demand replication. To sync buckets and replicate existing objects on demand, see [Replicating existing objects](s3-batch-replication-batch.md).

These examples demonstrate how to create a replication configuration by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), and AWS SDKs (AWS SDK for Java and AWS SDK for .NET examples are shown). 

For information about installing and configuring the AWS CLI, see the following topics in the *AWS Command Line Interface User Guide*:
+  [Get started with the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) 
+  [Configure the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) – You must set up at least one profile. If you are exploring cross-account scenarios, set up two profiles.

For information about the AWS SDKs, see [AWS SDK for Java](https://aws.amazon.com/sdk-for-java/) and [AWS SDK for .NET](https://aws.amazon.com/sdk-for-net/).

**Tip**  
For a step-by-step tutorial that demonstrates how to use live replication to replicate data, see [Tutorial: Replicating data within and between AWS Regions using S3 Replication](https://aws.amazon.com/getting-started/hands-on/replicate-data-using-amazon-s3-replication/?ref=docs_gateway/amazons3/replication-example-walkthroughs.html).

**Topics**
+ [Configuring for buckets in the same account](replication-walkthrough1.md)
+ [Configuring for buckets in different accounts](replication-walkthrough-2.md)
+ [Using S3 Replication Time Control](replication-time-control.md)
+ [Replicating encrypted objects](replication-config-for-kms-objects.md)
+ [Replicating metadata changes](replication-for-metadata-changes.md)
+ [Replicating delete markers](delete-marker-replication.md)

# Configuring replication for buckets in the same account
<a name="replication-walkthrough1"></a>

Live replication is the automatic, asynchronous copying of objects across general purpose buckets in the same or different AWS Regions. Live replication copies newly created objects and object updates from a source bucket to a destination bucket or buckets. For more information, see [Replicating objects within and across Regions](replication.md).

When you configure replication, you add replication rules to the source bucket. Replication rules define which source bucket objects to replicate and the destination bucket or buckets where the replicated objects are stored. You can create a rule to replicate all the objects in a bucket or a subset of objects with a specific key name prefix, one or more object tags, or both. A destination bucket can be in the same AWS account as the source bucket, or it can be in a different account.

If you specify an object version ID to delete, Amazon S3 deletes that object version in the source bucket. But it doesn't replicate the deletion in the destination bucket. In other words, it doesn't delete the same object version from the destination bucket. This protects data from malicious deletions.

When you add a replication rule to a bucket, the rule is enabled by default, so it starts working as soon as you save it. 

In this example, you set up live replication for source and destination buckets that are owned by the same AWS account. Examples are provided for using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), and the AWS SDK for Java and AWS SDK for .NET.

## Prerequisites
<a name="replication-prerequisites"></a>

Before you use the following procedures, make sure that you've set up the necessary permissions for replication, depending on whether the source and destination buckets are owned by the same or different accounts. For more information, see [Setting up permissions for live replication](setting-repl-config-perm-overview.md).

**Note**  
If you want to replicate encrypted objects, you also must grant the necessary AWS Key Management Service (AWS KMS) key permissions. For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
To use Object Lock with replication, you must grant two additional permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two additional permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission statement, that statement satisfies the requirement. For more information, see [Using Object Lock with S3 Replication](object-lock-managing.md#object-lock-managing-replication). 

## Using the S3 console
<a name="enable-replication"></a>

To configure a replication rule when the destination bucket is in the same AWS account as the source bucket, follow these steps.

If the destination bucket is in a different account from the source bucket, you must add a bucket policy to the destination bucket to grant the owner of the source bucket account permission to replicate objects in the destination bucket. For more information, see [(Optional) Step 3: Granting permissions when the source and destination buckets are owned by different AWS accounts](setting-repl-config-perm-overview.md#setting-repl-config-crossacct).

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want.

1. Choose the **Management** tab, scroll down to **Replication rules**, and then choose **Create replication rule**.

1. In the **Replication rule configuration** section, under **Replication rule name**, enter a name for your rule to help identify the rule later. The name is required and must be unique within the bucket.

1. Under **Status**, **Enabled** is selected by default. An enabled rule starts to work as soon as you save it. If you want to enable the rule later, choose **Disabled**.

1. If the bucket has existing replication rules, you are instructed to set a priority for the rule. You must set a priority for the rule to avoid conflicts caused by objects that are included in the scope of more than one rule. In the case of overlapping rules, Amazon S3 uses the rule priority to determine which rule to apply. The higher the number, the higher the priority. For more information about rule priority, see [Replication configuration file elements](replication-add-config.md).

1. Under **Source bucket**, you have the following options for setting the replication source:
   + To replicate the whole bucket, choose **Apply to all objects in the bucket**. 
   + To replicate all objects that have the same prefix, choose **Limit the scope of this rule using one or more filters**. This limits replication to all objects that have names that begin with the prefix that you specify (for example `pictures`). Enter a prefix in the **Prefix** box. 
**Note**  
If you enter a prefix that is the name of a folder, you must use **/** (forward slash) as the last character (for example, `pictures/`).
   + To replicate all objects with one or more object tags, choose **Add tag** and enter the key-value pair in the boxes. Repeat the procedure to add another tag. You can combine a prefix and tags. For more information about object tags, see [Categorizing your objects using tags](object-tagging.md).

   The new replication configuration XML schema supports prefix and tag filtering and the prioritization of rules. For more information about the new schema, see [Backward compatibility considerations](replication-add-config.md#replication-backward-compat-considerations). For more information about the XML used with the Amazon S3 API that works behind the user interface, see [Replication configuration file elements](replication-add-config.md). The new schema is described as *replication configuration XML V2*.

1. Under **Destination**, choose the bucket where you want Amazon S3 to replicate objects.
**Note**  
The number of destination buckets is limited to the number of AWS Regions in a given partition. A partition is a grouping of Regions. AWS currently has three partitions: `aws` (Standard Regions), `aws-cn` (China Regions), and `aws-us-gov` (AWS GovCloud (US) Regions). To request an increase in your destination bucket quota, you can use [service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html).
   + To replicate to a bucket or buckets in your account, choose **Choose a bucket in this account**, and enter or browse for the destination bucket names. 
   + To replicate to a bucket or buckets in a different AWS account, choose **Specify a bucket in another account**, and enter the destination bucket account ID and bucket name. 

     If the destination is in a different account from the source bucket, you must add a bucket policy to the destination buckets to grant the owner of the source bucket account permission to replicate objects. For more information, see [(Optional) Step 3: Granting permissions when the source and destination buckets are owned by different AWS accounts](setting-repl-config-perm-overview.md#setting-repl-config-crossacct).

     Optionally, if you want to help standardize ownership of new objects in the destination bucket, choose **Change object ownership to the destination bucket owner**. For more information about this option, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).
**Note**  
If versioning is not enabled on the destination bucket, you get a warning that contains an **Enable versioning** button. Choose this button to enable versioning on the bucket.

1. Set up an AWS Identity and Access Management (IAM) role that Amazon S3 can assume to replicate objects on your behalf.

   To set up an IAM role, in the **IAM role** section, select one of the following from the **IAM role** dropdown list:
   + We highly recommend that you choose **Create new role** to have Amazon S3 create a new IAM role for you. When you save the rule, a new policy is generated for the IAM role that matches the source and destination buckets that you choose.
   + You can choose to use an existing IAM role. If you do, you must choose a role that grants Amazon S3 the necessary permissions for replication. Replication fails if this role does not grant Amazon S3 sufficient permissions to follow your replication rule.
**Important**  
When you add a replication rule to a bucket, you must have the `iam:PassRole` permission to be able to pass the IAM role that grants Amazon S3 replication permissions. For more information, see [Granting a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*.

1. To replicate objects in the source bucket that are encrypted with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), under **Encryption**, select **Replicate objects encrypted with AWS KMS**. Under **AWS KMS keys for encrypting destination objects**, choose the source KMS keys that replication is allowed to use. All source KMS keys are selected by default, but you can narrow the selection by choosing an alias or key ID.

   Objects encrypted by AWS KMS keys that you don't select are not replicated. For information about using AWS KMS with replication, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
**Important**  
When you replicate objects that are encrypted with AWS KMS, the AWS KMS request rate doubles in the source Region and increases in the destination Region by the same amount. These increased call rates to AWS KMS are due to the way that data is re-encrypted by using the KMS key that you define for the replication destination Region. AWS KMS has a request rate quota that is per calling account per Region. For information about the quota defaults, see [AWS KMS Quotas - Requests per Second: Varies](https://docs.aws.amazon.com/kms/latest/developerguide/limits.html#requests-per-second) in the *AWS Key Management Service Developer Guide*.   
If your current Amazon S3 `PUT` object request rate during replication is more than half the default AWS KMS rate limit for your account, we recommend that you request an increase to your AWS KMS request rate quota. To request an increase, create a case in the Support Center at [Contact Us](https://aws.amazon.com/contact-us/). For example, suppose that your current `PUT` object request rate is 1,000 requests per second and you use AWS KMS to encrypt your objects. In this case, we recommend that you ask Support to increase your AWS KMS rate limit to 2,500 requests per second, in both your source and destination Regions (if different), to ensure that there is no throttling by AWS KMS.   
To see your `PUT` object request rate in the source bucket, view `PutRequests` in the Amazon CloudWatch request metrics for Amazon S3. For information about viewing CloudWatch metrics, see [Using the S3 console](configure-request-metrics-bucket.md#configure-metrics).

   If you chose to replicate objects encrypted with AWS KMS, do the following: 

   1. Under **AWS KMS key for encrypting destination objects**, specify your KMS key in one of the following ways:
     + To choose from a list of available KMS keys, choose **Choose from your AWS KMS keys**, and choose your **KMS key** from the list of available keys.

       Both the AWS managed key (`aws/s3`) and your customer managed keys appear in this list. For more information about customer managed keys, see [Customer keys and AWS keys](https://docs.aws.amazon.com//kms/latest/developerguide/concepts.html#key-mgmt) in the *AWS Key Management Service Developer Guide*.
     + To enter the KMS key Amazon Resource Name (ARN), choose **Enter AWS KMS key ARN**, and enter your KMS key ARN in the field that appears. This encrypts the replicas in the destination bucket. You can find the ARN for your KMS key in the [IAM Console](https://console.aws.amazon.com/iam/), under **Encryption keys**. 
     + To create a new customer managed key in the AWS KMS console, choose **Create a KMS key**.

       For more information about creating an AWS KMS key, see [Creating keys](https://docs.aws.amazon.com//kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service Developer Guide*.
**Important**  
You can only use KMS keys that are enabled in the same AWS Region as the bucket. When you choose **Choose from your AWS KMS keys**, the S3 console lists only 100 KMS keys per Region. If you have more than 100 KMS keys in the same Region, you can see only the first 100 KMS keys in the S3 console. To use a KMS key that is not listed in the console, choose **Enter AWS KMS key ARN**, and enter your KMS key ARN.  
When you use an AWS KMS key for server-side encryption in Amazon S3, you must choose a symmetric encryption KMS key. Amazon S3 supports only symmetric encryption KMS keys and not asymmetric KMS keys. For more information, see [Identifying symmetric and asymmetric KMS keys](https://docs.aws.amazon.com//kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide*.

     For more information about creating an AWS KMS key, see [Creating keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service Developer Guide*. For more information about using AWS KMS with Amazon S3, see [Using server-side encryption with AWS KMS keys (SSE-KMS)](UsingKMSEncryption.md).

1. Under **Destination storage class**, if you want to replicate your data into a specific storage class in the destination, choose **Change the storage class for the replicated objects**. Then choose the storage class that you want to use for the replicated objects in the destination. If you don't choose this option, the storage class for replicated objects is the same class as the original objects.

1. Under **Additional replication options**, select any of the following options that you want to enable:
   + If you want to enable S3 Replication Time Control (S3 RTC) in your replication configuration, select **Replication Time Control (RTC)**. For more information about this option, see [Meeting compliance requirements with S3 Replication Time Control](replication-time-control.md).
   + If you want to enable S3 Replication metrics in your replication configuration, select **Replication metrics and events**. For more information, see [Monitoring replication with metrics, event notifications, and statuses](replication-metrics.md).
   + If you want to enable delete marker replication in your replication configuration, select **Delete marker replication**. For more information, see [Replicating delete markers between buckets](delete-marker-replication.md).
   + If you want to enable Amazon S3 replica modification sync in your replication configuration, select **Replica modification sync**. For more information, see [Replicating metadata changes with replica modification sync](replication-for-metadata-changes.md).
**Note**  
When you use S3 RTC or S3 Replication metrics, additional fees apply.

1. To finish, choose **Save**.

1. After you save your rule, you can edit, enable, disable, or delete it by selecting the rule and choosing **Edit rule**. 
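The console options in this procedure correspond to elements in the bucket's replication configuration. For example, choosing **Change the storage class for the replicated objects** adds a `StorageClass` element to the rule's `Destination` element, roughly as follows (a sketch; the bucket ARN and storage class shown are placeholders):

```
"Destination": {
    "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
    "StorageClass": "STANDARD_IA"
}
```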

## Using the AWS CLI
<a name="replication-ex1-cli"></a>

To use the AWS CLI to set up replication when the source and destination buckets are owned by the same AWS account, you do the following:
+ Create source and destination buckets.
+ Enable versioning on the buckets.
+ Create an AWS Identity and Access Management (IAM) role that gives Amazon S3 permission to replicate objects.
+ Add the replication configuration to the source bucket.

After you finish the setup, you test it to verify that objects replicate as expected.

**To set up replication when the source and destination buckets are owned by the same AWS account**

1. Set a credentials profile for the AWS CLI. This example uses the profile name `acctA`. For information about setting credential profiles and using named profiles, see [Configuration and credential file settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) in the *AWS Command Line Interface User Guide*. 
**Important**  
The profile that you use for this example must have the necessary permissions. For example, in the replication configuration, you specify the IAM role that Amazon S3 can assume. You can do this only if the profile that you use has the `iam:PassRole` permission. For more information, see [Grant a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*. If you use administrator credentials to create a named profile, you can perform all the tasks. 
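For reference, named profiles such as `acctA` are defined in your AWS CLI credentials file. A minimal sketch (the key values are placeholders):

```
# ~/.aws/credentials
[acctA]
aws_access_key_id = EXAMPLE-ACCESS-KEY-ID
aws_secret_access_key = EXAMPLE-SECRET-ACCESS-KEY
```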

1. Create a source bucket and enable versioning on it by using the following AWS CLI commands. To use these commands, replace the *`user input placeholders`* with your own information. 

   The following `create-bucket` command creates a source bucket named `amzn-s3-demo-source-bucket` in the US East (N. Virginia) (`us-east-1`) Region:

   

   ```
   aws s3api create-bucket \
   --bucket amzn-s3-demo-source-bucket \
   --region us-east-1 \
   --profile acctA
   ```

   The following `put-bucket-versioning` command enables S3 Versioning on the `amzn-s3-demo-source-bucket` bucket: 

   ```
   aws s3api put-bucket-versioning \
   --bucket amzn-s3-demo-source-bucket \
   --versioning-configuration Status=Enabled \
   --profile acctA
   ```

1. Create a destination bucket and enable versioning on it by using the following AWS CLI commands. To use these commands, replace the *`user input placeholders`* with your own information. 
**Note**  
To set up a replication configuration when both source and destination buckets are in the same AWS account, you use the same profile for the source and destination buckets. This example uses `acctA`.   
To test a replication configuration when the buckets are owned by different AWS accounts, specify different profiles for each account. For example, use an `acctB` profile for the destination bucket.

   

   The following `create-bucket` command creates a destination bucket named `amzn-s3-demo-destination-bucket` in the US West (Oregon) (`us-west-2`) Region:

   ```
   aws s3api create-bucket \
   --bucket amzn-s3-demo-destination-bucket \
   --region us-west-2 \
   --create-bucket-configuration LocationConstraint=us-west-2 \
   --profile acctA
   ```

   The following `put-bucket-versioning` command enables S3 Versioning on the `amzn-s3-demo-destination-bucket` bucket: 

   ```
   aws s3api put-bucket-versioning \
   --bucket amzn-s3-demo-destination-bucket \
   --versioning-configuration Status=Enabled \
   --profile acctA
   ```

1. Create an IAM role. You specify this role in the replication configuration that you add to the source bucket later. Amazon S3 assumes this role to replicate objects on your behalf. You create an IAM role in two steps:
   + Create a role.
   + Attach a permissions policy to the role.

   1. Create the IAM role.

      1. Copy the following trust policy and save it to a file named `s3-role-trust-policy.json` in the current directory on your local computer. This policy grants the Amazon S3 service principal permissions to assume the role.

------
#### [ JSON ]


         ```
         {
            "Version":"2012-10-17",		 	 	 
            "Statement":[
               {
                  "Effect":"Allow",
                  "Principal":{
                     "Service":"s3.amazonaws.com"
                  },
                  "Action":"sts:AssumeRole"
               }
            ]
         }
         ```

------

      1. Run the following command to create a role.

         ```
         $ aws iam create-role \
         --role-name replicationRole \
         --assume-role-policy-document file://s3-role-trust-policy.json  \
         --profile acctA
         ```
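         If the command succeeds, it returns the new role's metadata. Note the role ARN in the output; you specify it later in the replication configuration. The output is similar to the following (abbreviated sketch):

         ```
         {
             "Role": {
                 "RoleName": "replicationRole",
                 "Arn": "arn:aws:iam::account-id:role/replicationRole",
                 ...
             }
         }
         ```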

   1. Attach a permissions policy to the role.

      1. Copy the following permissions policy and save it to a file named `s3-role-permissions-policy.json` in the current directory on your local computer. This policy grants permissions for various Amazon S3 bucket and object actions. 

------
#### [ JSON ]


         ```
         {
            "Version":"2012-10-17",		 	 	 
            "Statement":[
               {
                  "Effect":"Allow",
                  "Action":[
                     "s3:GetObjectVersionForReplication",
                     "s3:GetObjectVersionAcl",
                     "s3:GetObjectVersionTagging"
                  ],
                  "Resource":[
                     "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
                  ]
               },
               {
                  "Effect":"Allow",
                  "Action":[
                     "s3:ListBucket",
                     "s3:GetReplicationConfiguration"
                  ],
                  "Resource":[
                     "arn:aws:s3:::amzn-s3-demo-source-bucket"
                  ]
               },
               {
                  "Effect":"Allow",
                  "Action":[
                     "s3:ReplicateObject",
                     "s3:ReplicateDelete",
                     "s3:ReplicateTags"
                  ],
                  "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
               }
            ]
         }
         ```

------
**Note**  
If you want to replicate encrypted objects, you also must grant the necessary AWS Key Management Service (AWS KMS) key permissions. For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
To use Object Lock with replication, you must grant two additional permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two additional permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission statement, that statement satisfies the requirement. For more information, see [Using Object Lock with S3 Replication](object-lock-managing.md#object-lock-managing-replication). 

      1. Run the following command to create a policy and attach it to the role. Replace the *`user input placeholders`* with your own information.

         ```
         $ aws iam put-role-policy \
         --role-name replicationRole \
         --policy-document file://s3-role-permissions-policy.json \
         --policy-name replicationRolePolicy \
         --profile acctA
         ```
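         To confirm that the policy is attached to the role, you can retrieve it with the following command (a sketch that uses the same role name, policy name, and profile as in the previous command):

         ```
         $ aws iam get-role-policy \
         --role-name replicationRole \
         --policy-name replicationRolePolicy \
         --profile acctA
         ```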

1. Add a replication configuration to the source bucket. 

   1. Although the Amazon S3 API requires that you specify the replication configuration as XML, the AWS CLI requires that you specify it as JSON. Save the following JSON in a file named `replication.json` in the current directory on your local computer.

      ```
      {
        "Role": "IAM-role-ARN",
        "Rules": [
          {
            "Status": "Enabled",
            "Priority": 1,
            "DeleteMarkerReplication": { "Status": "Disabled" },
            "Filter" : { "Prefix": "Tax"},
            "Destination": {
              "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket"
            }
          }
        ]
      }
      ```
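      If you want to filter on both a key prefix and object tags, the `Filter` element uses an `And` wrapper to combine them. A sketch (the tag key and value are hypothetical placeholders):

      ```
      "Filter": {
        "And": {
          "Prefix": "Tax",
          "Tags": [
            { "Key": "classification", "Value": "confidential" }
          ]
        }
      }
      ```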

   1. Update the JSON by replacing `amzn-s3-demo-destination-bucket` and `IAM-role-ARN` with your own information. Save the changes.

   1. Run the following `put-bucket-replication` command to add the replication configuration to your source bucket. Be sure to provide the source bucket name:

      ```
      $ aws s3api put-bucket-replication \
      --replication-configuration file://replication.json \
      --bucket amzn-s3-demo-source-bucket \
      --profile acctA
      ```

   To retrieve the replication configuration, use the `get-bucket-replication` command:

   ```
   $ aws s3api get-bucket-replication \
   --bucket amzn-s3-demo-source-bucket \
   --profile acctA
   ```
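   If the replication configuration is in place, the command returns it as JSON. The output is similar to the following (abbreviated sketch):

   ```
   {
       "ReplicationConfiguration": {
           "Role": "arn:aws:iam::account-id:role/replicationRole",
           "Rules": [
               {
                   "Status": "Enabled",
                   "Priority": 1,
                   ...
               }
           ]
       }
   }
   ```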

1. Test the setup in the Amazon S3 console by doing the following:

   1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   1. In the left navigation pane, choose **Buckets**. In the **General purpose buckets** list, choose the source bucket.

   1. In the source bucket, create a folder named `Tax`. 

   1. Add sample objects to the `Tax` folder in the source bucket. 
**Note**  
The amount of time that it takes for Amazon S3 to replicate an object depends on the size of the object. For information about how to see the status of replication, see [Getting replication status information](replication-status.md).

      In the destination bucket, verify the following:
      + That Amazon S3 replicated the objects.
      + That the objects are replicas. On the **Properties** tab for your objects, scroll down to the **Object management overview** section. Under **Management configurations**, see the value under **Replication status**. Make sure that this value is set to `REPLICA`.
      + That the replicas are owned by the source bucket account. You can verify the object ownership on the **Permissions** tab for your objects. 

        If the source and destination buckets are owned by different accounts, you can add an optional configuration to tell Amazon S3 to change the replica ownership to the destination account. For an example, see [How to change the replica owner](replication-change-owner.md#replication-walkthrough-3). 
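      You can also check the replication status of an individual replica from the AWS CLI. The following `head-object` sketch assumes a sample object key of `Tax/doc1`; for a replicated object, the response includes a `ReplicationStatus` field with the value `REPLICA`:

      ```
      aws s3api head-object \
      --bucket amzn-s3-demo-destination-bucket \
      --key Tax/doc1 \
      --profile acctA
      ```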

## Using the AWS SDKs
<a name="replication-ex1-sdk"></a>

Use the following code examples to add a replication configuration to a bucket with the AWS SDK for Java or the AWS SDK for .NET.

**Note**  
If you want to replicate encrypted objects, you also must grant the necessary AWS Key Management Service (AWS KMS) key permissions. For more information, see [Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)](replication-config-for-kms-objects.md).
To use Object Lock with replication, you must grant two additional permissions on the source S3 bucket in the AWS Identity and Access Management (IAM) role that you use to set up replication. The two additional permissions are `s3:GetObjectRetention` and `s3:GetObjectLegalHold`. If the role has an `s3:Get*` permission statement, that statement satisfies the requirement. For more information, see [Using Object Lock with S3 Replication](object-lock-managing.md#object-lock-managing-replication). 

------
#### [ Java ]

To add a replication configuration to a bucket and then retrieve and verify the configuration with the AWS SDK for Java, you can use the `S3Client` class to manage replication settings programmatically.

For examples of how to configure replication with the AWS SDK for Java, see [Set replication configuration on a bucket](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_PutBucketReplication_section.html) in the *Amazon S3 API Reference*.

------
#### [ .NET ]

The following AWS SDK for .NET code example adds a replication configuration to a bucket and then retrieves it. To use this code, provide the names for your buckets and the Amazon Resource Name (ARN) for your IAM role. For information about setting up and running the code examples, see [Getting Started with the AWS SDK for .NET](https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config.html) in the *AWS SDK for .NET Developer Guide*. 

```
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class CrossRegionReplicationTest
    {
        private const string sourceBucket = "*** source bucket ***";
        // Bucket ARN example - arn:aws:s3:::destinationbucket
        private const string destinationBucketArn = "*** destination bucket ARN ***";
        private const string roleArn = "*** IAM Role ARN ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint sourceBucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;
        public static void Main()
        {
            s3Client = new AmazonS3Client(sourceBucketRegion);
            EnableReplicationAsync().Wait();
        }
        static async Task EnableReplicationAsync()
        {
            try
            {
                ReplicationConfiguration replConfig = new ReplicationConfiguration
                {
                    Role = roleArn,
                    Rules =
                        {
                            new ReplicationRule
                            {
                                Prefix = "Tax",
                                Status = ReplicationRuleStatus.Enabled,
                                Destination = new ReplicationDestination
                                {
                                    BucketArn = destinationBucketArn
                                }
                            }
                        }
                };

                PutBucketReplicationRequest putRequest = new PutBucketReplicationRequest
                {
                    BucketName = sourceBucket,
                    Configuration = replConfig
                };

                PutBucketReplicationResponse putResponse = await s3Client.PutBucketReplicationAsync(putRequest);

                // Verify configuration by retrieving it.
                await RetrieveReplicationConfigurationAsync(s3Client);
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
            }
        }
        private static async Task RetrieveReplicationConfigurationAsync(IAmazonS3 client)
        {
            // Retrieve the configuration.
            GetBucketReplicationRequest getRequest = new GetBucketReplicationRequest
            {
                BucketName = sourceBucket
            };
            GetBucketReplicationResponse getResponse = await client.GetBucketReplicationAsync(getRequest);
            // Print.
            Console.WriteLine("Printing replication configuration information...");
            Console.WriteLine("Role ARN: {0}", getResponse.Configuration.Role);
            foreach (var rule in getResponse.Configuration.Rules)
            {
                Console.WriteLine("ID: {0}", rule.Id);
                Console.WriteLine("Prefix: {0}", rule.Prefix);
                Console.WriteLine("Status: {0}", rule.Status);
            }
        }
    }
}
```

------

# Configuring replication for buckets in different accounts
<a name="replication-walkthrough-2"></a>

Live replication is the automatic, asynchronous copying of objects across buckets in the same or different AWS Regions. Live replication copies newly created objects and object updates from a source bucket to a destination bucket or buckets. For more information, see [Replicating objects within and across Regions](replication.md).

When you configure replication, you add replication rules to the source bucket. Replication rules define which source bucket objects to replicate and the destination bucket or buckets where the replicated objects are stored. You can create a rule to replicate all the objects in a bucket or a subset of objects with a specific key name prefix, one or more object tags, or both. A destination bucket can be in the same AWS account as the source bucket, or it can be in a different account.

If you specify an object version ID to delete, Amazon S3 deletes that object version in the source bucket. But it doesn't replicate the deletion in the destination bucket. In other words, it doesn't delete the same object version from the destination bucket. This protects data from malicious deletions.

When you add a replication rule to a bucket, the rule is enabled by default, so it starts working as soon as you save it. 

Setting up live replication when the source and destination buckets are owned by different AWS accounts is similar to setting up replication when both buckets are owned by the same account. However, there are several differences when you're configuring replication in a cross-account scenario: 
+ The destination bucket owner must grant the source bucket owner permission to replicate objects in the destination bucket policy. 
+ If you're replicating objects that are encrypted with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) in a cross-account scenario, the owner of the KMS key must grant the source bucket owner permission to use the KMS key. For more information, see [Granting additional permissions for cross-account scenarios](replication-config-for-kms-objects.md#replication-kms-cross-acct-scenario). 
+ By default, replicated objects are owned by the source bucket owner. In a cross-account scenario, you might want to configure replication to change the ownership of the replicated objects to the owner of the destination bucket. For more information, see [Changing the replica owner](replication-change-owner.md).

**To configure replication when the source and destination buckets are owned by different AWS accounts**

1. In this example, you create source and destination buckets in two different AWS accounts. You must have two credential profiles set for the AWS CLI. This example uses `acctA` and `acctB` for those profile names. For information about setting credential profiles and using named profiles, see [Configuration and credential file settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) in the *AWS Command Line Interface User Guide*. 

1. Follow the step-by-step instructions in [Configuring replication for buckets in the same account](replication-walkthrough1.md) with the following changes:
   + For all AWS CLI commands related to source bucket activities (such as creating the source bucket, enabling versioning, and creating the IAM role), use the `acctA` profile. Use the `acctB` profile to create the destination bucket. 
   + Make sure that the permissions policy for the IAM role specifies the source and destination buckets that you created for this example.

1. In the console, add the following bucket policy on the destination bucket to allow the owner of the source bucket to replicate objects. For instructions, see [Adding a bucket policy by using the Amazon S3 console](add-bucket-policy.md). Be sure to edit the policy by providing the AWS account ID of the source bucket owner, the IAM role name, and the destination bucket name. 
**Note**  
To use the following example, replace the `user input placeholders` with your own information. Replace `amzn-s3-demo-destination-bucket` with your destination bucket name. Replace `source-bucket-account-ID:role/service-role/source-account-IAM-role` in the IAM Amazon Resource Name (ARN) with the IAM role that you're using for this replication configuration.  
If you created the IAM service role manually, set the role path in the IAM ARN as `role/service-role/`, as shown in the following policy example. For more information, see [IAM ARNs](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-arns) in the *IAM User Guide*. 

------
#### [ JSON ]


   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Id": "",
       "Statement": [
           {
               "Sid": "Set-permissions-for-objects",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:role/service-role/source-account-IAM-role"
               },
               "Action": [
                   "s3:ReplicateObject",
                   "s3:ReplicateDelete"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
           },
           {
               "Sid": "Set-permissions-on-bucket",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:role/service-role/source-account-IAM-role"
               },
               "Action": [
                   "s3:GetBucketVersioning",
                   "s3:PutBucketVersioning"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-destination-bucket"
           }
       ]
   }
   ```

------

1. (Optional) If you're replicating objects that are encrypted with SSE-KMS, the owner of the KMS key must grant the source bucket owner permission to use the KMS key. For more information, see [Granting additional permissions for cross-account scenarios](replication-config-for-kms-objects.md#replication-kms-cross-acct-scenario).

1. (Optional) In replication, the owner of the source object owns the replica by default. When the source and destination buckets are owned by different AWS accounts, you can add optional configuration settings to change replica ownership to the AWS account that owns the destination buckets. This includes granting the `ObjectOwnerOverrideToBucketOwner` permission. For more information, see [Changing the replica owner](replication-change-owner.md).

# Changing the replica owner
<a name="replication-change-owner"></a>

In replication, the owner of the source object also owns the replica by default. However, when the source and destination buckets are owned by different AWS accounts, you might want to change the replica ownership. For example, you might want to change the ownership to restrict access to object replicas. In your replication configuration, you can add optional configuration settings to change replica ownership to the AWS account that owns the destination buckets. 

To change the replica owner, you do the following:
+ Add the *owner override* option to the replication configuration to tell Amazon S3 to change replica ownership. 
+ Grant Amazon S3 the `s3:ObjectOwnerOverrideToBucketOwner` permission to change replica ownership. 
+ Add the `s3:ObjectOwnerOverrideToBucketOwner` permission in the destination bucket policy to allow changing replica ownership. The `s3:ObjectOwnerOverrideToBucketOwner` permission allows the owner of the destination buckets to accept the ownership of object replicas.

For more information, see [Considerations for the ownership override option](#repl-ownership-considerations) and [Adding the owner override option to the replication configuration](#repl-ownership-owneroverride-option). For a working example with step-by-step instructions, see [How to change the replica owner](#replication-walkthrough-3).

**Important**  
Instead of using the owner override option, you can use the bucket owner enforced setting for Object Ownership. When you use replication and the source and destination buckets are owned by different AWS accounts, the bucket owner of the destination bucket can use the bucket owner enforced setting for Object Ownership to change replica ownership to the AWS account that owns the destination bucket. This setting disables object access control lists (ACLs).   
The bucket owner enforced setting mimics the existing owner override behavior without the need of the `s3:ObjectOwnerOverrideToBucketOwner` permission. All objects that are replicated to the destination bucket with the bucket owner enforced setting are owned by the destination bucket owner. For more information about Object Ownership, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).
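For example, the destination bucket owner can apply the bucket owner enforced setting from the AWS CLI. The following command is a sketch; the bucket name and profile are placeholders:

```
aws s3api put-bucket-ownership-controls \
--bucket amzn-s3-demo-destination-bucket \
--ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerEnforced}]' \
--profile acctB
```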

## Considerations for the ownership override option
<a name="repl-ownership-considerations"></a>

When you configure the ownership override option, the following considerations apply:
+ By default, the owner of the source object also owns the replica. Amazon S3 replicates the object version and the ACL associated with it.

  If you add the owner override option to your replication configuration, Amazon S3 replicates only the object version, not the ACL. In addition, Amazon S3 doesn't replicate subsequent changes to the source object ACL. Amazon S3 sets the ACL on the replica that grants full control to the destination bucket owner. 
+  When you update a replication configuration to enable or disable the owner override, the following behavior occurs:
  + If you add the owner override option to the replication configuration:

    When Amazon S3 replicates an object version, it discards the ACL that's associated with the source object. Instead, Amazon S3 sets the ACL on the replica, giving full control to the owner of the destination bucket. Amazon S3 doesn't replicate subsequent changes to the source object ACL. However, this ACL change doesn't apply to object versions that were replicated before you set the owner override option. ACL updates on source objects that were replicated before the owner override was set continue to be replicated (because the object and its replicas continue to have the same owner).
  + If you remove the owner override option from the replication configuration:

    Amazon S3 replicates new objects that appear in the source bucket and their associated ACLs to the destination buckets. For objects that were replicated before you removed the owner override, Amazon S3 doesn't replicate ACL updates, because the ownership change that Amazon S3 made remains in effect. That is, ACLs that are put on object versions replicated while the owner override was in effect still aren't replicated.

## Adding the owner override option to the replication configuration
<a name="repl-ownership-owneroverride-option"></a>

**Warning**  
Add the owner override option only when the source and destination buckets are owned by different AWS accounts. Amazon S3 doesn't check if the buckets are owned by the same or different accounts. If you add the owner override when both buckets are owned by same AWS account, Amazon S3 applies the owner override. This option grants full permissions to the owner of the destination bucket and doesn't replicate subsequent updates to the source objects' access control lists (ACLs). The replica owner can directly change the ACL associated with a replica with a `PutObjectAcl` request, but not through replication.

To specify the owner override option, add the following to each `Destination` element: 
+ The `AccessControlTranslation` element, which tells Amazon S3 to change replica ownership
+ The `Account` element, which specifies the AWS account of the destination bucket owner 

```
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
   ...
   <Rule>
      ...
      <Destination>
         ...
         <AccessControlTranslation>
            <Owner>Destination</Owner>
         </AccessControlTranslation>
         <Account>destination-bucket-owner-account-id</Account>
      </Destination>
   </Rule>
</ReplicationConfiguration>
```

The following example replication configuration tells Amazon S3 to replicate objects that have the *`Tax`* key prefix to the `amzn-s3-demo-destination-bucket` destination bucket and change ownership of the replicas. To use this example, replace the `user input placeholders` with your own information.

```
<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
   <Role>arn:aws:iam::account-id:role/role-name</Role>
   <Rule>
      <ID>Rule-1</ID>
      <Priority>1</Priority>
      <Status>Enabled</Status>
      <DeleteMarkerReplication>
         <Status>Disabled</Status>
      </DeleteMarkerReplication>
      <Filter>
         <Prefix>Tax</Prefix>
      </Filter>
      <Destination>
         <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
         <Account>destination-bucket-owner-account-id</Account>
         <AccessControlTranslation>
            <Owner>Destination</Owner>
         </AccessControlTranslation>
      </Destination>
   </Rule>
</ReplicationConfiguration>
```

## Granting Amazon S3 permission to change replica ownership
<a name="repl-ownership-add-role-permission"></a>

Grant Amazon S3 permissions to change replica ownership by adding permission for the `s3:ObjectOwnerOverrideToBucketOwner` action in the permissions policy that's associated with the AWS Identity and Access Management (IAM) role. This role is the IAM role that you specified in the replication configuration that allows Amazon S3 to assume and replicate objects on your behalf. To use the following example, replace `amzn-s3-demo-destination-bucket` with the name of the destination bucket.

```
...
{
    "Effect":"Allow",
         "Action":[
       "s3:ObjectOwnerOverrideToBucketOwner"
    ],
    "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
}
...
```

## Adding permission in the destination bucket policy to allow changing replica ownership
<a name="repl-ownership-accept-ownership-b-policy"></a>

The owner of the destination bucket must grant the owner of the source bucket permission to change replica ownership. The owner of the destination bucket grants the owner of the source bucket permission for the `s3:ObjectOwnerOverrideToBucketOwner` action. This permission allows the destination bucket owner to accept ownership of the object replicas. The following example bucket policy statement shows how to do this. To use this example, replace the `user input placeholders` with your own information.

```
...
{
    "Sid":"1",
    "Effect":"Allow",
    "Principal":{"AWS":"source-bucket-account-id"},
    "Action":["s3:ObjectOwnerOverrideToBucketOwner"],
    "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
}
...
```

## How to change the replica owner
<a name="replication-walkthrough-3"></a>

When the source and destination buckets in a replication configuration are owned by different AWS accounts, you can tell Amazon S3 to change replica ownership to the AWS account that owns the destination bucket. The following examples show how to use the Amazon S3 console, the AWS Command Line Interface (AWS CLI), and the AWS SDKs to change replica ownership. 

### Using the S3 console
<a name="replication-ex3-console"></a>

For step-by-step instructions, see [Configuring replication for buckets in the same account](replication-walkthrough1.md). This topic provides instructions for setting up a replication configuration when the source and destination buckets are owned by the same AWS account and when they're owned by different AWS accounts.

### Using the AWS CLI
<a name="replication-ex3-cli"></a>

The following procedure shows how to change replica ownership by using the AWS CLI. In this procedure, you do the following: 
+ Create the source and destination buckets.
+ Enable versioning on the buckets.
+ Create an AWS Identity and Access Management (IAM) role that gives Amazon S3 permission to replicate objects.
+ Add a replication configuration to the source bucket that directs Amazon S3 to change the replica ownership.
+ Test the replication configuration.

**To change replica ownership when the source and destination buckets are owned by different AWS accounts (AWS CLI)**

To use the example AWS CLI commands in this procedure, replace the `user input placeholders` with your own information. 

1. In this example, you create the source and destination buckets in two different AWS accounts. To work with these two accounts, configure the AWS CLI with two named profiles. This example uses profiles named *`acctA`* and *`acctB`*, respectively. For information about setting credential profiles and using named profiles, see [Configuration and credential file settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) in the *AWS Command Line Interface User Guide*. 
**Important**  
The profiles that you use for this procedure must have the necessary permissions. For example, in the replication configuration, you specify the IAM role that Amazon S3 can assume. You can do this only if the profile that you use has the `iam:PassRole` permission. If you use administrator user credentials to create a named profile, then you can perform all of the tasks in this procedure. For more information, see [Granting a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*. 

1. Create the source bucket and enable versioning. This example creates a source bucket named `amzn-s3-demo-source-bucket` in the US East (N. Virginia) (`us-east-1`) Region. 

   ```
   aws s3api create-bucket \
   --bucket amzn-s3-demo-source-bucket \
   --region us-east-1 \
   --profile acctA
   ```

   ```
   aws s3api put-bucket-versioning \
   --bucket amzn-s3-demo-source-bucket \
   --versioning-configuration Status=Enabled \
   --profile acctA
   ```

1. Create a destination bucket and enable versioning. This example creates a destination bucket named `amzn-s3-demo-destination-bucket` in the US West (Oregon) (`us-west-2`) Region. Use an AWS account profile that's different from the one that you used for the source bucket.

   ```
   aws s3api create-bucket \
   --bucket amzn-s3-demo-destination-bucket \
   --region us-west-2 \
   --create-bucket-configuration LocationConstraint=us-west-2 \
   --profile acctB
   ```

   ```
   aws s3api put-bucket-versioning \
   --bucket amzn-s3-demo-destination-bucket \
   --versioning-configuration Status=Enabled \
   --profile acctB
   ```

1. Add permissions to the destination bucket policy to allow changing the replica ownership.

   1.  Save the following policy to a file named `destination-bucket-policy.json`. Make sure to replace the *`user input placeholders`* with your own information.

      ```
      {
          "Version":"2012-10-17",
          "Statement": [
              {
                  "Sid": "destination_bucket_policy_sid",
                  "Principal": {
                    "AWS": "source-bucket-owner-account-id"
                  },
                  "Action": [
                      "s3:ReplicateObject",
                      "s3:ReplicateDelete",
                      "s3:ObjectOwnerOverrideToBucketOwner",
                      "s3:ReplicateTags",
                      "s3:GetObjectVersionTagging"
                  ],
                  "Effect": "Allow",
                  "Resource": [
                      "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
                  ]
              }
          ]
      }
      ```


   1. Add the preceding policy to the destination bucket by using the following `put-bucket-policy` command:

      ```
      aws s3api put-bucket-policy \
      --region us-west-2 \
      --bucket amzn-s3-demo-destination-bucket \
      --policy file://destination-bucket-policy.json \
      --profile acctB
      ```

1. Create an IAM role. You specify this role in the replication configuration that you add to the source bucket later. Amazon S3 assumes this role to replicate objects on your behalf. You create an IAM role in two steps:
   + Create the role.
   + Attach a permissions policy to the role.

   1. Create the IAM role.

      1. Copy the following trust policy and save it to a file named `s3-role-trust-policy.json` in the current directory on your local computer. This policy grants Amazon S3 permissions to assume the role.

         ```
         {
            "Version":"2012-10-17",
            "Statement":[
               {
                  "Effect":"Allow",
                  "Principal":{
                     "Service":"s3.amazonaws.com"
                  },
                  "Action":"sts:AssumeRole"
               }
            ]
         }
         ```


      1. Run the following AWS CLI `create-role` command to create the IAM role:

         ```
         aws iam create-role \
         --role-name replicationRole \
         --assume-role-policy-document file://s3-role-trust-policy.json  \
         --profile acctA
         ```

         Make note of the Amazon Resource Name (ARN) of the IAM role that you created. You will need this ARN in a later step.

   1. Attach a permissions policy to the role.

      1. Copy the following permissions policy and save it to a file named `s3-role-perm-pol-changeowner.json` in the current directory on your local computer. This policy grants permissions for various Amazon S3 bucket and object actions. In the following steps, you attach this policy to the IAM role that you created earlier. 

         ```
         {
            "Version":"2012-10-17",
            "Statement":[
               {
                  "Effect":"Allow",
                  "Action":[
                     "s3:GetObjectVersionForReplication",
                     "s3:GetObjectVersionAcl"
                  ],
                  "Resource":[
                     "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
                  ]
               },
               {
                  "Effect":"Allow",
                  "Action":[
                     "s3:ListBucket",
                     "s3:GetReplicationConfiguration"
                  ],
                  "Resource":[
                     "arn:aws:s3:::amzn-s3-demo-source-bucket"
                  ]
               },
               {
                  "Effect":"Allow",
                  "Action":[
                     "s3:ReplicateObject",
                     "s3:ReplicateDelete",
                     "s3:ObjectOwnerOverrideToBucketOwner",
                     "s3:ReplicateTags",
                     "s3:GetObjectVersionTagging"
                  ],
                  "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
               }
            ]
         }
         ```


      1. To attach the preceding permissions policy to the role, run the following `put-role-policy` command:

         ```
         aws iam put-role-policy \
         --role-name replicationRole \
         --policy-document file://s3-role-perm-pol-changeowner.json \
         --policy-name replicationRolechangeownerPolicy \
         --profile acctA
         ```

1. Add a replication configuration to your source bucket.

   1. The AWS CLI requires that you specify the replication configuration as JSON. Save the following JSON in a file named `replication.json` in the current directory on your local computer. In this configuration, the `AccessControlTranslation` element specifies the change in replica ownership from the source bucket owner to the destination bucket owner. 

      ```
      {
         "Role":"IAM-role-ARN",
         "Rules":[
            {
               "Status":"Enabled",
               "Priority":1,
               "DeleteMarkerReplication":{
                  "Status":"Disabled"
               },
               "Filter":{
               },
               "Destination":{
                  "Bucket":"arn:aws:s3:::amzn-s3-demo-destination-bucket",
                  "Account":"destination-bucket-owner-account-id",
                  "AccessControlTranslation":{
                     "Owner":"Destination"
                  }
               }
            }
         ]
      }
      ```

   1. Edit the JSON by providing values for the destination bucket name, the destination bucket owner account ID, and the `IAM-role-ARN`. Replace *`IAM-role-ARN`* with the ARN of the IAM role that you created earlier. Save the changes.

   1. To add the replication configuration to the source bucket, run the following command:

      ```
      aws s3api put-bucket-replication \
      --replication-configuration file://replication.json \
      --bucket amzn-s3-demo-source-bucket \
      --profile acctA
      ```

1. Test your replication configuration by checking replica ownership in the Amazon S3 console.

   1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   1. Add objects to the source bucket. Verify that the destination bucket contains the object replicas and that the ownership of the replicas has changed to the AWS account that owns the destination bucket.

### Using the AWS SDKs
<a name="replication-ex3-sdk"></a>

 For a code example to add a replication configuration, see [Using the AWS SDKs](replication-walkthrough1.md#replication-ex1-sdk). You must modify the replication configuration appropriately. For conceptual information, see [Changing the replica owner](#replication-change-owner). 

# Meeting compliance requirements with S3 Replication Time Control
<a name="replication-time-control"></a>

S3 Replication Time Control (S3 RTC) helps you meet compliance or business requirements for data replication and provides visibility into Amazon S3 replication times. S3 RTC replicates most objects that you upload to Amazon S3 in seconds, and 99.9 percent of those objects within 15 minutes. 

By default, S3 RTC includes two ways to track the progress of replication: 
+ **S3 Replication metrics** – You can use S3 Replication metrics to monitor the total number of S3 API operations that are pending replication, the total size of objects pending replication, the maximum replication time to the destination Region, and the total number of operations that failed replication. You can then monitor each dataset that you replicate separately. You can also enable S3 Replication metrics independently of S3 RTC. For more information, see [Using S3 Replication metrics](repl-metrics.md).

  Replication rules with S3 Replication Time Control (S3 RTC) enabled publish S3 Replication metrics. Replication metrics are available within 15 minutes of enabling S3 RTC. Replication metrics are available through the Amazon S3 console, the Amazon S3 API, the AWS SDKs, the AWS Command Line Interface (AWS CLI), and Amazon CloudWatch. For more information about CloudWatch metrics, see [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md). For more information about viewing replication metrics through the Amazon S3 console, see [Viewing replication metrics](repl-metrics.md#viewing-replication-metrics).

  S3 Replication metrics are billed at the same rate as Amazon CloudWatch custom metrics. For information, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/). 
+ **Amazon S3 Event Notifications** – S3 RTC provides `OperationMissedThreshold` and `OperationReplicatedAfterThreshold` events that notify the bucket owner if object replication exceeds or occurs after the 15-minute threshold. With S3 RTC, Amazon S3 Event Notifications can notify you in the rare instance when objects don't replicate within 15 minutes and when those objects replicate after the 15-minute threshold. 

  Replication events are available within 15 minutes of enabling S3 RTC. Amazon S3 Event Notifications are available through Amazon SQS, Amazon SNS, or AWS Lambda. For more information, see [Receiving replication failure events with Amazon S3 Event Notifications](replication-metrics-events.md).
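
  For example, you might subscribe an Amazon SQS queue to these events through the bucket's notification configuration, as in the following sketch. This is an illustrative fragment, not a complete walkthrough: the queue ARN and account ID are placeholders, and the queue's access policy must allow Amazon S3 to send messages to it.

  ```
  {
      "QueueConfigurations": [
          {
              "Id": "rtc-threshold-events",
              "QueueArn": "arn:aws:sqs:us-west-2:123456789012:replication-events-queue",
              "Events": [
                  "s3:Replication:OperationMissedThreshold",
                  "s3:Replication:OperationReplicatedAfterThreshold"
              ]
          }
      ]
  }
  ```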

 

## Best practices and guidelines for S3 RTC
<a name="rtc-best-practices"></a>

When replicating data in Amazon S3 with S3 Replication Time Control (S3 RTC) enabled, follow these best practice guidelines to optimize replication performance for your workloads. 

**Topics**
+ [Amazon S3 Replication and request rate performance guidelines](#rtc-request-rate-performance)
+ [Estimating your replication request rates](#estimating-replication-request-rates)
+ [Exceeding S3 RTC data transfer rate quotas](#exceed-rtc-data-transfer-limits)
+ [AWS KMS encrypted object replication request rates](#kms-object-replication-request-rates)

### Amazon S3 Replication and request rate performance guidelines
<a name="rtc-request-rate-performance"></a>

When your applications upload objects to and retrieve objects from Amazon S3, they can achieve thousands of transactions per second in request performance. For example, an application can achieve at least 3,500 `PUT`/`COPY`/`POST`/`DELETE` or 5,500 `GET`/`HEAD` requests per second per prefix in an S3 bucket, including the requests that S3 Replication makes on your behalf. There are no limits to the number of prefixes in a bucket. You can increase your read or write performance by parallelizing requests across prefixes. For example, if you use 10 prefixes in an S3 bucket to parallelize reads, you can scale your read performance to 55,000 read requests per second. 

Amazon S3 automatically scales in response to sustained request rates above these guidelines, or sustained request rates concurrent with `LIST` requests. While Amazon S3 is internally optimizing for the new request rate, you might temporarily receive HTTP 503 responses until the optimization is complete. This behavior might occur with increases in request rates, or when you first enable S3 RTC. During these periods, your replication latency might increase. The S3 RTC service level agreement (SLA) doesn't apply to time periods when the Amazon S3 performance guidelines on requests per second are exceeded. 

The S3 RTC SLA also doesn't apply during time periods where your replication data transfer rate exceeds the default 1 gigabit per second (Gbps) quota. If you expect your replication transfer rate to exceed 1 Gbps, you can contact [AWS Support Center](https://console.aws.amazon.com/support/home#/) or use [Service Quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) to request an increase in your replication transfer rate quota. 

### Estimating your replication request rates
<a name="estimating-replication-request-rates"></a>

Your total request rate including the requests that Amazon S3 replication makes on your behalf must be within the Amazon S3 request rate guidelines for both the replication source and destination buckets. For each object replicated, Amazon S3 replication makes up to five `GET`/`HEAD` requests and one `PUT` request to the source bucket, and one `PUT` request to each destination bucket.

For example, if you expect to replicate 100 objects per second, Amazon S3 replication might perform an additional 100 `PUT` requests on your behalf, for a total of 200 `PUT` requests per second to the source S3 bucket. Amazon S3 replication also might perform up to 500 `GET`/`HEAD` requests (5 `GET`/`HEAD` requests for each object that's replicated). 
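
The request-rate estimate above can be sketched as a few lines of Python. This is an illustrative calculation only, not an AWS API; the per-object factors come from the guideline above.

```python
# Estimate the extra request load that S3 Replication makes on your behalf,
# using the per-object factors from the guideline above:
#   up to 5 GET/HEAD requests and 1 PUT request to the source bucket
#   per replicated object, plus 1 PUT request to each destination bucket.
def replication_request_load(objects_per_second, destination_buckets=1):
    return {
        "source_get_head": 5 * objects_per_second,
        "source_put": objects_per_second,
        "destination_put": objects_per_second * destination_buckets,
    }

# The 100-objects-per-second example from the text:
load = replication_request_load(100)
print(load)  # {'source_get_head': 500, 'source_put': 100, 'destination_put': 100}
```

Add these figures to your application's own request rates to check that the combined total stays within the per-prefix guidelines for both the source and destination buckets.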

**Note**  
You incur costs for only one `PUT` request per object replicated. For more information, see the pricing information in the [Amazon S3 FAQs about replication](https://aws.amazon.com/s3/faqs/#Replication). 

### Exceeding S3 RTC data transfer rate quotas
<a name="exceed-rtc-data-transfer-limits"></a>

If you expect your S3 RTC data transfer rate to exceed the default 1 Gbps quota, contact [AWS Support Center](https://console.aws.amazon.com/support/home#/) or use [Service Quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) to request an increase in your replication transfer rate quota. 

### AWS KMS encrypted object replication request rates
<a name="kms-object-replication-request-rates"></a>

When you replicate objects that are encrypted with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), AWS KMS requests per second quotas apply. AWS KMS might reject an otherwise valid request because your request rate exceeds the quota for the number of requests per second. When a request is throttled, AWS KMS returns a `ThrottlingException` error. The AWS KMS request rate quota applies to requests that you make directly and to requests made by Amazon S3 replication on your behalf. 

For example, if you expect to replicate 1,000 objects per second, you can subtract 2,000 requests from your AWS KMS request rate quota. The resulting request rate per second is available for your AWS KMS workloads excluding replication. You can use [AWS KMS request metrics in Amazon CloudWatch](https://docs.aws.amazon.com/kms/latest/developerguide/monitoring-cloudwatch.html) to monitor the total AWS KMS request rate on your AWS account.
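
That accounting can be sketched as follows. The factor of 2 KMS requests per replicated object follows the example above (1,000 objects per second consumes 2,000 KMS requests per second); the quota value is a placeholder, so check your account's actual AWS KMS request quota in Service Quotas.

```python
# Estimate the AWS KMS request-quota headroom left for non-replication
# workloads, using the 2-requests-per-replicated-object factor from the
# example above. The quota value below is a placeholder, not your real quota.
KMS_REQUESTS_PER_REPLICATED_OBJECT = 2

def kms_headroom(kms_quota_per_second, replicated_objects_per_second):
    used = KMS_REQUESTS_PER_REPLICATED_OBJECT * replicated_objects_per_second
    return kms_quota_per_second - used

print(kms_headroom(5500, 1000))  # 3500 requests/sec left for other KMS workloads
```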

To request an increase to your AWS KMS requests per second quota, contact [AWS Support Center](https://console.aws.amazon.com/support/home#/) or use [Service Quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html). 

## Enabling S3 Replication Time Control
<a name="replication-walkthrough-5"></a>

You can start using S3 Replication Time Control (S3 RTC) with a new or existing replication rule. You can choose to apply your replication rule to an entire bucket, or to objects with a specific prefix or tag. When you enable S3 RTC, S3 Replication metrics are also enabled on your replication rule. 

You can configure S3 RTC by using the Amazon S3 console, the Amazon S3 API, the AWS SDKs, and the AWS Command Line Interface (AWS CLI).

**Topics**
+ [Using the S3 console](#replication-ex5-console)
+ [Using the AWS CLI](#replication-ex5-cli)
+ [Using the AWS SDK for Java](#replication-ex5-sdk)

### Using the S3 console
<a name="replication-ex5-console"></a>

For step-by-step instructions, see [Configuring replication for buckets in the same account](replication-walkthrough1.md). This topic provides instructions for enabling S3 RTC in your replication configuration when the source and destination buckets are owned by the same AWS account and when they're owned by different AWS accounts.

### Using the AWS CLI
<a name="replication-ex5-cli"></a>

To use the AWS CLI to replicate objects with S3 RTC enabled, you create buckets, enable versioning on the buckets, create an IAM role that gives Amazon S3 permission to replicate objects, and add the replication configuration to the source bucket. The replication configuration must have S3 RTC enabled, as shown in the following example. 

For step-by-step instructions for setting up your replication configuration by using the AWS CLI, see [Configuring replication for buckets in the same account](replication-walkthrough1.md).

The following example replication configuration enables and sets the `ReplicationTime` and `EventThreshold` values for a replication rule. Enabling and setting these values enables S3 RTC on the rule.

```
{
    "Rules": [
        {
            "Status": "Enabled",
            "Filter": {
                "Prefix": "Tax"
            },
            "DeleteMarkerReplication": {
                "Status": "Disabled"
            },
            "Destination": {
                "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
                "Metrics": {
                    "Status": "Enabled",
                    "EventThreshold": {
                        "Minutes": 15
                    }
                },
                "ReplicationTime": {
                    "Status": "Enabled",
                    "Time": {
                        "Minutes": 15
                    }
                }
            },
            "Priority": 1
        }
    ],
    "Role": "IAM-Role-ARN"
}
```

**Important**  
The only valid value for `Metrics:EventThreshold:Minutes` and `ReplicationTime:Time:Minutes` is `15`. 
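
A small pre-flight check along these lines can catch an invalid threshold before you call `put-bucket-replication`. This is an illustrative sketch, not part of any AWS SDK, and it assumes the rule is a Python dict shaped like the JSON configuration above.

```python
# Illustrative check (not an AWS API): for any rule with S3 RTC enabled,
# verify that ReplicationTime and the Metrics event threshold both use
# the only valid value, 15 minutes, and that Metrics is enabled.
def validate_rtc_rule(rule):
    dest = rule.get("Destination", {})
    rt = dest.get("ReplicationTime", {})
    metrics = dest.get("Metrics", {})
    if rt.get("Status") != "Enabled":
        return True  # S3 RTC isn't enabled on this rule; nothing to check
    return (
        rt.get("Time", {}).get("Minutes") == 15
        and metrics.get("Status") == "Enabled"
        and metrics.get("EventThreshold", {}).get("Minutes") == 15
    )

rule = {
    "Status": "Enabled",
    "Destination": {
        "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket",
        "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
        "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
    },
}
print(validate_rtc_rule(rule))  # True
```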

### Using the AWS SDK for Java
<a name="replication-ex5-sdk"></a>

The following Java example adds a replication configuration with S3 Replication Time Control (S3 RTC) enabled.

```
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.DeleteMarkerReplication;
import software.amazon.awssdk.services.s3.model.Destination;
import software.amazon.awssdk.services.s3.model.Metrics;
import software.amazon.awssdk.services.s3.model.MetricsStatus;
import software.amazon.awssdk.services.s3.model.PutBucketReplicationRequest;
import software.amazon.awssdk.services.s3.model.ReplicationConfiguration;
import software.amazon.awssdk.services.s3.model.ReplicationRule;
import software.amazon.awssdk.services.s3.model.ReplicationRuleFilter;
import software.amazon.awssdk.services.s3.model.ReplicationTime;
import software.amazon.awssdk.services.s3.model.ReplicationTimeStatus;
import software.amazon.awssdk.services.s3.model.ReplicationTimeValue;

public class Main {

  public static void main(String[] args) {
    S3Client s3 = S3Client.builder()
      .region(Region.US_EAST_1)
      .credentialsProvider(() -> AwsBasicCredentials.create(
          "AWS_ACCESS_KEY_ID",
          "AWS_SECRET_ACCESS_KEY")
      )
      .build();

    ReplicationConfiguration replicationConfig = ReplicationConfiguration
      .builder()
      .rules(
          ReplicationRule
            .builder()
            .status("Enabled")
            .priority(1)
            .deleteMarkerReplication(
                DeleteMarkerReplication
                    .builder()
                    .status("Disabled")
                    .build()
            )
            .destination(
                Destination
                    .builder()
                    .bucket("destination_bucket_arn")
                    .replicationTime(
                        ReplicationTime.builder().time(
                            ReplicationTimeValue.builder().minutes(15).build()
                        ).status(
                            ReplicationTimeStatus.ENABLED
                        ).build()
                    )
                    .metrics(
                        Metrics.builder().eventThreshold(
                            ReplicationTimeValue.builder().minutes(15).build()
                        ).status(
                            MetricsStatus.ENABLED
                        ).build()
                    )
                    .build()
            )
            .filter(
                ReplicationRuleFilter
                    .builder()
                    .prefix("testtest")
                    .build()
            )
        .build())
        .role("role_arn")
        .build();

    // Put replication configuration
    PutBucketReplicationRequest putBucketReplicationRequest = PutBucketReplicationRequest
      .builder()
      .bucket("source_bucket")
      .replicationConfiguration(replicationConfig)
      .build();

    s3.putBucketReplication(putBucketReplicationRequest);
  }
}
```

# Replicating encrypted objects (SSE-S3, SSE-KMS, DSSE-KMS, SSE-C)
<a name="replication-config-for-kms-objects"></a>

**Important**  
Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3. Starting January 5, 2023, all new object uploads to Amazon S3 are automatically encrypted at no additional cost and with no impact on performance. The automatic encryption status for S3 bucket default encryption configuration and for new object uploads is available in CloudTrail logs, S3 Inventory, S3 Storage Lens, the Amazon S3 console, and as an additional Amazon S3 API response header in the AWS CLI and AWS SDKs. For more information, see [Default encryption FAQ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html).

There are some special considerations when you're replicating objects that have been encrypted by using server-side encryption. Amazon S3 supports the following types of server-side encryption:
+ Server-side encryption with Amazon S3 managed keys (SSE-S3)
+ Server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS)
+ Dual-layer server-side encryption with AWS KMS keys (DSSE-KMS)
+ Server-side encryption with customer-provided keys (SSE-C)

For more information about server-side encryption, see [Protecting data with server-side encryption](serv-side-encryption.md).

This topic explains the permissions that you need to direct Amazon S3 to replicate objects that have been encrypted by using server-side encryption. This topic also provides additional configuration elements that you can add and example AWS Identity and Access Management (IAM) policies that grant the necessary permissions for replicating encrypted objects. 

For an example with step-by-step instructions, see [Enabling replication for encrypted objects](#replication-walkthrough-4). For information about creating a replication configuration, see [Replicating objects within and across Regions](replication.md). 

**Note**  
You can use multi-Region AWS KMS keys in Amazon S3. However, Amazon S3 currently treats multi-Region keys as though they were single-Region keys, and does not use the multi-Region features of the key. For more information, see [ Using multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the *AWS Key Management Service Developer Guide*.

**Topics**
+ [How default bucket encryption affects replication](#replication-default-encryption)
+ [Replicating objects encrypted with SSE-C](#replicationSSEC)
+ [Replicating objects encrypted with SSE-S3, SSE-KMS, or DSSE-KMS](#replications)
+ [Enabling replication for encrypted objects](#replication-walkthrough-4)

## How default bucket encryption affects replication
<a name="replication-default-encryption"></a>

When you enable default encryption for a replication destination bucket, the following encryption behavior applies:
+ If objects in the source bucket are not encrypted, the replica objects in the destination bucket are encrypted by using the default encryption settings of the destination bucket. As a result, the entity tags (ETags) of the source objects differ from the ETags of the replica objects. If you have applications that use ETags, you must update those applications to account for this difference.
+ If objects in the source bucket are encrypted by using server-side encryption with Amazon S3 managed keys (SSE-S3), server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), or dual-layer server-side encryption with AWS KMS keys (DSSE-KMS), the replica objects in the destination bucket use the same type of encryption as the source objects. The default encryption settings of the destination bucket are not used.

## Replicating objects encrypted with SSE-C
<a name="replicationSSEC"></a>

By using server-side encryption with customer-provided keys (SSE-C), you can manage your own proprietary encryption keys. With SSE-C, you manage the keys while Amazon S3 manages the encryption and decryption process. You must provide an encryption key as part of your request, but you don't need to write any code to perform object encryption or decryption. When you upload an object, Amazon S3 encrypts the object by using the key that you provided. Amazon S3 then purges that key from memory. When you retrieve an object, you must provide the same encryption key as part of your request. For more information, see [Using server-side encryption with customer-provided keys (SSE-C)](ServerSideEncryptionCustomerKeys.md).

S3 Replication supports objects that are encrypted with SSE-C. You can configure SSE-C object replication in the Amazon S3 console or with the AWS SDKs in the same way that you configure replication for unencrypted objects. There are no additional SSE-C permissions beyond those currently required for replication. 

S3 Replication automatically replicates newly uploaded SSE-C encrypted objects if they are eligible, as specified in your S3 Replication configuration. To replicate existing objects in your buckets, use S3 Batch Replication. For more information about replicating objects, see [Setting up live replication overview](replication-how-setup.md) and [Replicating existing objects with Batch Replication](s3-batch-replication-batch.md).

There are no additional charges for replicating SSE-C objects. For details about replication pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). 

## Replicating objects encrypted with SSE-S3, SSE-KMS, or DSSE-KMS
<a name="replications"></a>

By default, Amazon S3 doesn't replicate objects that are encrypted with SSE-KMS or DSSE-KMS. This section explains the additional configuration elements that you can add to direct Amazon S3 to replicate these objects. 

For an example with step-by-step instructions, see [Enabling replication for encrypted objects](#replication-walkthrough-4). For information about creating a replication configuration, see [Replicating objects within and across Regions](replication.md). 

### Specifying additional information in the replication configuration
<a name="replication-kms-extra-config"></a>

In the replication configuration, you do the following:
+ In the `Destination` element in your replication configuration, add the ID of the symmetric AWS KMS customer managed key that you want Amazon S3 to use to encrypt object replicas, as shown in the following example replication configuration. 
+ Explicitly opt in by enabling replication of objects encrypted by using KMS keys (SSE-KMS or DSSE-KMS). To opt in, add the `SourceSelectionCriteria` element, as shown in the following example replication configuration.

 

```
<ReplicationConfiguration>
   <Rule>
      ...
      <SourceSelectionCriteria>
         <SseKmsEncryptedObjects>
           <Status>Enabled</Status>
         </SseKmsEncryptedObjects>
      </SourceSelectionCriteria>

      <Destination>
          ...
          <EncryptionConfiguration>
             <ReplicaKmsKeyID>AWS KMS key ARN or Key Alias ARN that's in the same AWS Region as the destination bucket.</ReplicaKmsKeyID>
          </EncryptionConfiguration>
       </Destination>
      ...
   </Rule>
</ReplicationConfiguration>
```

**Important**  
The KMS key must have been created in the same AWS Region as the destination bucket. 
The KMS key *must* be valid. The `PutBucketReplication` API operation doesn't check the validity of KMS keys. If you use a KMS key that isn't valid, you will receive the HTTP `200 OK` status code in response, but replication fails.

The following example shows a replication configuration that includes optional configuration elements. This replication configuration has one rule. The rule applies to objects with the `Tax` key prefix. Amazon S3 uses the specified AWS KMS key ID to encrypt these object replicas.

```
<?xml version="1.0" encoding="UTF-8"?>
<ReplicationConfiguration>
   <Role>arn:aws:iam::account-id:role/role-name</Role>
   <Rule>
      <ID>Rule-1</ID>
      <Priority>1</Priority>
      <Status>Enabled</Status>
      <DeleteMarkerReplication>
         <Status>Disabled</Status>
      </DeleteMarkerReplication>
      <Filter>
         <Prefix>Tax</Prefix>
      </Filter>
      <Destination>
         <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
         <EncryptionConfiguration>
            <ReplicaKmsKeyID>AWS KMS key ARN or Key Alias ARN that's in the same AWS Region as the destination bucket.</ReplicaKmsKeyID>
         </EncryptionConfiguration>
      </Destination>
      <SourceSelectionCriteria>
         <SseKmsEncryptedObjects>
            <Status>Enabled</Status>
         </SseKmsEncryptedObjects>
      </SourceSelectionCriteria>
   </Rule>
</ReplicationConfiguration>
```

### Granting additional permissions for the IAM role
<a name="replication-kms-permissions"></a>

To replicate objects that are encrypted at rest by using SSE-S3, SSE-KMS, or DSSE-KMS, grant the following additional permissions to the AWS Identity and Access Management (IAM) role that you specify in the replication configuration. You grant these permissions by updating the permissions policy that's associated with the IAM role. 
+ **`s3:GetObjectVersionForReplication` action for source objects** – This action allows Amazon S3 to replicate both unencrypted objects and objects created with server-side encryption by using SSE-S3, SSE-KMS, or DSSE-KMS.
**Note**  
We recommend that you use the `s3:GetObjectVersionForReplication` action instead of the `s3:GetObjectVersion` action because `s3:GetObjectVersionForReplication` provides Amazon S3 with only the minimum permissions necessary for replication. In addition, the `s3:GetObjectVersion` action allows replication of unencrypted and SSE-S3-encrypted objects, but not of objects that are encrypted by using KMS keys (SSE-KMS or DSSE-KMS). 
+ **`kms:Decrypt` and `kms:Encrypt` AWS KMS actions for the KMS keys**
  + You must grant `kms:Decrypt` permissions for the AWS KMS key that's used to decrypt the source object.
  + You must grant `kms:Encrypt` permissions for the AWS KMS key that's used to encrypt the object replica.
+ **`kms:GenerateDataKey` action for replicating plaintext objects** – If you're replicating plaintext objects to a bucket with SSE-KMS or DSSE-KMS encryption enabled by default, you must include the `kms:GenerateDataKey` permission for the destination encryption context and the KMS key in the IAM policy.
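
For example, the `kms:GenerateDataKey` statement in the IAM role's permissions policy might look like the following sketch. The Region, bucket name, and key ARN are placeholders to replace with your own values.

```
{
   "Action":[
      "kms:GenerateDataKey"
   ],
   "Effect":"Allow",
   "Condition":{
      "StringLike":{
         "kms:ViaService":"s3.us-east-1.amazonaws.com",
         "kms:EncryptionContext:aws:s3:arn":[
            "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
         ]
      }
   },
   "Resource":[
      "arn:aws:kms:us-east-1:111122223333:key/key-id"
   ]
}
```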

**Important**  
If you use S3 Batch Replication to replicate datasets across Regions and your objects previously had their server-side encryption type updated from SSE-S3 to SSE-KMS, you might need additional permissions. The replication role needs `kms:Decrypt` permission for the KMS key that's used by the bucket in the source Region, and both `kms:Decrypt` and `kms:Encrypt` permissions for the KMS key that's used by the bucket in the destination Region.

We recommend that you restrict these permissions only to the destination buckets and objects by using AWS KMS condition keys. The AWS account that owns the IAM role must have permissions for the `kms:Encrypt` and `kms:Decrypt` actions for the KMS keys that are listed in the policy. If the KMS keys are owned by another AWS account, the owner of the KMS keys must grant these permissions to the AWS account that owns the IAM role. For more information about managing access to these KMS keys, see [Using IAM policies with AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/iam-policies.html) in the *AWS Key Management Service Developer Guide*.

### S3 Bucket Keys and replication
<a name="bk-replication"></a>

To use replication with an S3 Bucket Key, the AWS KMS key policy for the KMS key that's used to encrypt the object replica must include the `kms:Decrypt` permission for the calling principal. The call to `kms:Decrypt` verifies the integrity of the S3 Bucket Key before using it. For more information, see [Using an S3 Bucket Key with replication](bucket-key.md#bucket-key-replication).
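
For example, the key policy of the replica's KMS key might include a statement like the following sketch, which grants `kms:Decrypt` to the replication role. The account ID and role name are placeholders, and in a key policy, `"Resource": "*"` refers to the KMS key that the policy is attached to.

```
{
   "Sid":"Allow the replication role to verify the S3 Bucket Key",
   "Effect":"Allow",
   "Principal":{
      "AWS":"arn:aws:iam::111122223333:role/replication-role"
   },
   "Action":"kms:Decrypt",
   "Resource":"*"
}
```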

When an S3 Bucket Key is enabled for the source or destination bucket, the encryption context will be the bucket's Amazon Resource Name (ARN), not the object's ARN (for example, `arn:aws:s3:::bucket_ARN`). You must update your IAM policies to use the bucket ARN for the encryption context:

```
"kms:EncryptionContext:aws:s3:arn": [
"arn:aws:s3:::bucket_ARN"
]
```

For more information, see [Encryption context (`x-amz-server-side-encryption-context`)](specifying-kms-encryption.md#s3-kms-encryption-context) (in the "Using the REST API" section) and [Changes to note before enabling an S3 Bucket Key](bucket-key.md#bucket-key-changes).

### Example policies: Using SSE-S3 and SSE-KMS with replication
<a name="kms-replication-examples"></a>

The following example IAM policies show statements for using SSE-S3 and SSE-KMS with replication.

**Example – Using SSE-KMS with separate destination buckets**  
The following example policy shows statements for using SSE-KMS with separate destination buckets. 
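
As a sketch of that pattern, the following statements grant `kms:Decrypt` for the source bucket's key and `kms:Encrypt` for a different key in each destination Region. All bucket names, Regions, and key IDs are placeholders.

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":["kms:Decrypt"],
         "Condition":{
            "StringLike":{
               "kms:ViaService":"s3.us-east-1.amazonaws.com",
               "kms:EncryptionContext:aws:s3:arn":[
                  "arn:aws:s3:::amzn-s3-demo-source-bucket/key-prefix1*"
               ]
            }
         },
         "Resource":["arn:aws:kms:us-east-1:111122223333:key/source-key-id"]
      },
      {
         "Effect":"Allow",
         "Action":["kms:Encrypt"],
         "Condition":{
            "StringLike":{
               "kms:ViaService":"s3.us-west-2.amazonaws.com",
               "kms:EncryptionContext:aws:s3:arn":[
                  "arn:aws:s3:::amzn-s3-demo-destination-bucket1/key-prefix1*"
               ]
            }
         },
         "Resource":["arn:aws:kms:us-west-2:111122223333:key/destination-key-id-1"]
      },
      {
         "Effect":"Allow",
         "Action":["kms:Encrypt"],
         "Condition":{
            "StringLike":{
               "kms:ViaService":"s3.eu-west-1.amazonaws.com",
               "kms:EncryptionContext:aws:s3:arn":[
                  "arn:aws:s3:::amzn-s3-demo-destination-bucket2/key-prefix1*"
               ]
            }
         },
         "Resource":["arn:aws:kms:eu-west-1:111122223333:key/destination-key-id-2"]
      }
   ]
}
```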

**Example – Replicating objects created with SSE-S3 and SSE-KMS**  
The following is a complete IAM policy that grants the necessary permissions to replicate unencrypted objects, objects created with SSE-S3, and objects created with SSE-KMS.    

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetReplicationConfiguration",
            "s3:ListBucket"
         ],
         "Resource":[
            "arn:aws:s3:::amzn-s3-demo-source-bucket"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetObjectVersionForReplication",
            "s3:GetObjectVersionAcl"
         ],
         "Resource":[
            "arn:aws:s3:::amzn-s3-demo-source-bucket/key-prefix1*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:ReplicateObject",
            "s3:ReplicateDelete"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/key-prefix1*"
      },
      {
         "Action":[
            "kms:Decrypt"
         ],
         "Effect":"Allow",
         "Condition":{
            "StringLike":{
               "kms:ViaService":"s3.us-east-1.amazonaws.com",
               "kms:EncryptionContext:aws:s3:arn":[
                  "arn:aws:s3:::amzn-s3-demo-source-bucket/key-prefix1*"
               ]
            }
         },
         "Resource":[
           "arn:aws:kms:us-east-1:111122223333:key/key-id"
         ]
      },
      {
         "Action":[
            "kms:Encrypt"
         ],
         "Effect":"Allow",
         "Condition":{
            "StringLike":{
               "kms:ViaService":"s3.us-east-1.amazonaws.com",
               "kms:EncryptionContext:aws:s3:arn":[
                  "arn:aws:s3:::amzn-s3-demo-destination-bucket/prefix1*"
               ]
            }
         },
         "Resource":[
            "arn:aws:kms:us-east-1:111122223333:key/key-id"
         ]
      }
   ]
}
```

**Example – Replicating objects with S3 Bucket Keys**  
The following is a complete IAM policy that grants the necessary permissions to replicate objects with S3 Bucket Keys.    

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetReplicationConfiguration",
            "s3:ListBucket"
         ],
         "Resource":[
            "arn:aws:s3:::amzn-s3-demo-source-bucket"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetObjectVersionForReplication",
            "s3:GetObjectVersionAcl"
         ],
         "Resource":[
            "arn:aws:s3:::amzn-s3-demo-source-bucket/key-prefix1*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:ReplicateObject",
            "s3:ReplicateDelete"
         ],
         "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/key-prefix1*"
      },
      {
         "Action":[
            "kms:Decrypt"
         ],
         "Effect":"Allow",
         "Condition":{
            "StringLike":{
               "kms:ViaService":"s3.us-east-1.amazonaws.com",
               "kms:EncryptionContext:aws:s3:arn":[
                  "arn:aws:s3:::amzn-s3-demo-source-bucket"
               ]
            }
         },
         "Resource":[
           "arn:aws:kms:us-east-1:111122223333:key/key-id"
         ]
      },
      {
         "Action":[
            "kms:Encrypt"
         ],
         "Effect":"Allow",
         "Condition":{
            "StringLike":{
               "kms:ViaService":"s3.us-east-1.amazonaws.com",
               "kms:EncryptionContext:aws:s3:arn":[
                  "arn:aws:s3:::amzn-s3-demo-destination-bucket"
               ]
            }
         },
         "Resource":[
            "arn:aws:kms:us-east-1:111122223333:key/key-id"
         ]
      }
   ]
}
```

### Granting additional permissions for cross-account scenarios
<a name="replication-kms-cross-acct-scenario"></a>

In a cross-account scenario, where the source and destination buckets are owned by different AWS accounts, you can use a KMS key to encrypt object replicas. However, the KMS key owner must grant the source bucket owner permission to use the KMS key. 

**Note**  
If you need to replicate SSE-KMS data cross-account, then your replication rule must specify a [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) from AWS KMS for the destination account. [AWS managed keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) don't allow cross-account use, and therefore can't be used to perform cross-account replication.<a name="cross-acct-kms-key-permission"></a>

**To grant the source bucket owner permission to use the KMS key (AWS KMS console)**

1. Sign in to the AWS Management Console and open the AWS KMS console at [https://console.aws.amazon.com/kms](https://console.aws.amazon.com/kms).

1. To change the AWS Region, use the Region selector in the upper-right corner of the page.

1. To view the keys in your account that you create and manage, in the navigation pane choose **Customer managed keys**.

1. Choose the KMS key.

1. Under the **General configuration** section, choose the **Key policy** tab.

1. Scroll down to **Other AWS accounts**.

1. Choose **Add other AWS accounts**. 

   The **Other AWS accounts** dialog box appears. 

1. In the dialog box, choose **Add another AWS account**. For **arn:aws:iam::**, enter the source bucket account ID.

1. Choose **Save changes**.

**To grant the source bucket owner permission to use the KMS key (AWS CLI)**
+ For information about the `put-key-policy` AWS Command Line Interface (AWS CLI) command, see [put-key-policy](https://docs.aws.amazon.com/cli/latest/reference/kms/put-key-policy.html) in the *AWS CLI Command Reference*. For information about the underlying `PutKeyPolicy` API operation, see [PutKeyPolicy](https://docs.aws.amazon.com/kms/latest/APIReference/API_PutKeyPolicy.html) in the *AWS Key Management Service API Reference*.
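
Whichever method you use, the statement added to the key policy is typically along the lines of the following sketch. The account ID is a placeholder for the source bucket owner's account, and `"Resource": "*"` refers to the key itself.

```
{
   "Sid":"Allow use of the key by the source bucket account",
   "Effect":"Allow",
   "Principal":{
      "AWS":"arn:aws:iam::444455556666:root"
   },
   "Action":[
      "kms:Encrypt",
      "kms:Decrypt",
      "kms:ReEncrypt*",
      "kms:GenerateDataKey*",
      "kms:DescribeKey"
   ],
   "Resource":"*"
}
```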

### AWS KMS transaction quota considerations
<a name="crr-kms-considerations"></a>

When you add many new objects with AWS KMS encryption after enabling Cross-Region Replication (CRR), you might experience throttling (HTTP `503 Service Unavailable` errors). Throttling occurs when the number of AWS KMS transactions per second exceeds the current quota. For more information, see [Quotas](https://docs.aws.amazon.com/kms/latest/developerguide/limits.html) in the *AWS Key Management Service Developer Guide*.

To request a quota increase, use Service Quotas. For more information, see [Requesting a quota increase](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html). If Service Quotas isn't supported in your Region, [open an AWS Support case](https://console.aws.amazon.com/support/home#/).

## Enabling replication for encrypted objects
<a name="replication-walkthrough-4"></a>

By default, Amazon S3 doesn't replicate objects that are encrypted by using server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) or dual-layer server-side encryption with AWS KMS keys (DSSE-KMS). To replicate objects encrypted with SSE-KMS or DSSE-KMS, you must modify the bucket replication configuration to tell Amazon S3 to replicate these objects. This example explains how to use the Amazon S3 console and the AWS Command Line Interface (AWS CLI) to change the bucket replication configuration to enable replicating encrypted objects.

**Note**  
When an S3 Bucket Key is enabled for the source or destination bucket, the encryption context will be the bucket's Amazon Resource Name (ARN), not the object's ARN. You must update your IAM policies to use the bucket ARN for the encryption context. For more information, see [S3 Bucket Keys and replication](#bk-replication).

**Note**  
You can use multi-Region AWS KMS keys in Amazon S3. However, Amazon S3 currently treats multi-Region keys as though they were single-Region keys, and does not use the multi-Region features of the key. For more information, see [Using multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the *AWS Key Management Service Developer Guide*.

### Using the S3 console
<a name="replication-ex4-console"></a>

For step-by-step instructions, see [Configuring replication for buckets in the same account](replication-walkthrough1.md). This topic provides instructions for setting a replication configuration when the source and destination buckets are owned by the same or different AWS accounts.

### Using the AWS CLI
<a name="replication-ex4-cli"></a>

To replicate encrypted objects with the AWS CLI, you do the following: 
+ Create source and destination buckets and enable versioning on these buckets. 
+ Create an AWS Identity and Access Management (IAM) service role that gives Amazon S3 permission to replicate objects. The IAM role's permissions include the necessary permissions to replicate the encrypted objects.
+ Add a replication configuration to the source bucket. The replication configuration provides information related to replicating objects that are encrypted by using KMS keys.
+ Add encrypted objects to the source bucket. 
+ Test the setup to confirm that your encrypted objects are being replicated to the destination bucket.

The following procedures walk you through this process. 

**To replicate server-side encrypted objects (AWS CLI)**

To use the examples in this procedure, replace the `user input placeholders` with your own information.

1. In this example, you create both the source (*`amzn-s3-demo-source-bucket`*) and destination (*`amzn-s3-demo-destination-bucket`*) buckets in the same AWS account. You also set a credentials profile for the AWS CLI. This example uses the profile name `acctA`. 

   For information about setting credential profiles and using named profiles, see [Configuration and credential file settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) in the *AWS Command Line Interface User Guide*. 

1. Use the following commands to create the `amzn-s3-demo-source-bucket` bucket in the US East (N. Virginia) (`us-east-1`) Region and enable versioning on it.

   ```
   aws s3api create-bucket \
   --bucket amzn-s3-demo-source-bucket \
   --region us-east-1 \
   --profile acctA
   ```

   ```
   aws s3api put-bucket-versioning \
   --bucket amzn-s3-demo-source-bucket \
   --versioning-configuration Status=Enabled \
   --profile acctA
   ```

1. Use the following commands to create the `amzn-s3-demo-destination-bucket` bucket in the US West (Oregon) (`us-west-2`) Region and enable versioning on it. 
**Note**  
To set up a replication configuration when both `amzn-s3-demo-source-bucket` and `amzn-s3-demo-destination-bucket` buckets are in the same AWS account, you use the same profile. This example uses `acctA`. To configure replication when the buckets are owned by different AWS accounts, you specify different profiles for each.

   ```
   aws s3api create-bucket \
   --bucket amzn-s3-demo-destination-bucket \
   --region us-west-2 \
   --create-bucket-configuration LocationConstraint=us-west-2 \
   --profile acctA
   ```

   ```
   aws s3api put-bucket-versioning \
   --bucket amzn-s3-demo-destination-bucket \
   --versioning-configuration Status=Enabled \
   --profile acctA
   ```

1. Next, you create an IAM service role. You will specify this role in the replication configuration that you add to the `amzn-s3-demo-source-bucket` bucket later. Amazon S3 assumes this role to replicate objects on your behalf. You create an IAM role in two steps:
   + Create a service role.
   + Attach a permissions policy to the role.

   1. To create an IAM service role, do the following:

      1. Copy the following trust policy and save it to a file called `s3-role-trust-policy-kmsobj.json` in the current directory on your local computer. This policy grants the Amazon S3 service principal permissions to assume the role so that Amazon S3 can perform tasks on your behalf.


         ```
         {
            "Version":"2012-10-17",		 	 	 
            "Statement":[
               {
                  "Effect":"Allow",
                  "Principal":{
                     "Service":"s3.amazonaws.com"
                  },
                  "Action":"sts:AssumeRole"
               }
            ]
         }
         ```


      1. Use the following command to create the role:

         ```
         $ aws iam create-role \
         --role-name replicationRolekmsobj \
         --assume-role-policy-document file://s3-role-trust-policy-kmsobj.json  \
         --profile acctA
         ```

   1. Next, you attach a permissions policy to the role. This policy grants permissions for various Amazon S3 bucket and object actions. 

      1. Copy the following permissions policy and save it to a file named `s3-role-permissions-policykmsobj.json` in the current directory on your local computer. You will attach this policy to the IAM role that you created in the previous step. 
**Important**  
In the permissions policy, you specify the AWS KMS key IDs that will be used for encryption of the `amzn-s3-demo-source-bucket` and `amzn-s3-demo-destination-bucket` buckets. You must create two separate KMS keys for the `amzn-s3-demo-source-bucket` and `amzn-s3-demo-destination-bucket` buckets. AWS KMS keys aren't shared outside the AWS Region in which they were created. 


         ```
         {
            "Version":"2012-10-17",		 	 	 
            "Statement":[
               {
                  "Action":[
                     "s3:ListBucket",
                     "s3:GetReplicationConfiguration",
                     "s3:GetObjectVersionForReplication",
                     "s3:GetObjectVersionAcl",
                     "s3:GetObjectVersionTagging"
                  ],
                  "Effect":"Allow",
                  "Resource":[
                     "arn:aws:s3:::amzn-s3-demo-source-bucket",
                     "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
                  ]
               },
               {
                  "Action":[
                     "s3:ReplicateObject",
                     "s3:ReplicateDelete",
                     "s3:ReplicateTags"
                  ],
                  "Effect":"Allow",
                  "Condition":{
                     "StringLikeIfExists":{
                        "s3:x-amz-server-side-encryption":[
                           "aws:kms",
                           "AES256",
                           "aws:kms:dsse"
                        ],
                        "s3:x-amz-server-side-encryption-aws-kms-key-id":[
                           "AWS KMS key IDs(in ARN format) to use for encrypting object replicas"  
                        ]
                     }
                  },
                  "Resource":"arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
               },
               {
                  "Action":[
                     "kms:Decrypt"
                  ],
                  "Effect":"Allow",
                  "Condition":{
                     "StringLike":{
                        "kms:ViaService":"s3.us-east-1.amazonaws.com",
                        "kms:EncryptionContext:aws:s3:arn":[
                           "arn:aws:s3:::amzn-s3-demo-source-bucket/*"
                        ]
                     }
                  },
                  "Resource":[
                     "arn:aws:kms:us-east-1:111122223333:key/key-id" 
                  ]
               },
               {
                  "Action":[
                     "kms:Encrypt"
                  ],
                  "Effect":"Allow",
                  "Condition":{
                     "StringLike":{
                        "kms:ViaService":"s3.us-west-2.amazonaws.com",
                        "kms:EncryptionContext:aws:s3:arn":[
                           "arn:aws:s3:::amzn-s3-demo-destination-bucket/*"
                        ]
                     }
                  },
                  "Resource":[
                     "arn:aws:kms:us-west-2:111122223333:key/key-id" 
                  ]
               }
            ]
         }
         ```


      1. Create a policy and attach it to the role.

         ```
         $ aws iam put-role-policy \
         --role-name replicationRolekmsobj \
         --policy-document file://s3-role-permissions-policykmsobj.json \
         --policy-name replicationRolechangeownerPolicy \
         --profile acctA
         ```

1. Next, add the following replication configuration to the `amzn-s3-demo-source-bucket` bucket. It tells Amazon S3 to replicate objects with the `Tax/` prefix to the `amzn-s3-demo-destination-bucket` bucket. 
**Important**  
In the replication configuration, you specify the IAM role that Amazon S3 can assume. You can do this only if you have the `iam:PassRole` permission. The profile that you specify in the CLI command must have this permission. For more information, see [Granting a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*.

   ```
    <ReplicationConfiguration>
     <Role>IAM-Role-ARN</Role>
     <Rule>
       <Priority>1</Priority>
       <DeleteMarkerReplication>
          <Status>Disabled</Status>
       </DeleteMarkerReplication>
       <Filter>
          <Prefix>Tax</Prefix>
       </Filter>
       <Status>Enabled</Status>
       <SourceSelectionCriteria>
         <SseKmsEncryptedObjects>
           <Status>Enabled</Status>
         </SseKmsEncryptedObjects>
       </SourceSelectionCriteria>
       <Destination>
         <Bucket>arn:aws:s3:::amzn-s3-demo-destination-bucket</Bucket>
         <EncryptionConfiguration>
           <ReplicaKmsKeyID>AWS KMS key IDs to use for encrypting object replicas</ReplicaKmsKeyID>
         </EncryptionConfiguration>
       </Destination>
     </Rule>
   </ReplicationConfiguration>
   ```

   To add a replication configuration to the `amzn-s3-demo-source-bucket` bucket, do the following:

   1. The AWS CLI requires you to specify the replication configuration as JSON. Save the following JSON in a file (`replication.json`) in the current directory on your local computer. 

      ```
      {
         "Role":"IAM-Role-ARN",
         "Rules":[
            {
               "Status":"Enabled",
               "Priority":1,
               "DeleteMarkerReplication":{
                  "Status":"Disabled"
               },
               "Filter":{
                  "Prefix":"Tax"
               },
               "Destination":{
                  "Bucket":"arn:aws:s3:::amzn-s3-demo-destination-bucket",
                  "EncryptionConfiguration":{
                     "ReplicaKmsKeyID":"AWS KMS key IDs (in ARN format) to use for encrypting object replicas"
                  }
               },
               "SourceSelectionCriteria":{
                  "SseKmsEncryptedObjects":{
                     "Status":"Enabled"
                  }
               }
            }
         ]
      }
      ```

   1. Edit the JSON to provide values for the `amzn-s3-demo-destination-bucket` bucket, `AWS KMS key IDs (in ARN format)`, and `IAM-role-ARN`. Save the changes.

   1. Use the following command to add the replication configuration to your `amzn-s3-demo-source-bucket` bucket. Be sure to provide the `amzn-s3-demo-source-bucket` bucket name.

      ```
      $ aws s3api put-bucket-replication \
      --replication-configuration file://replication.json \
      --bucket amzn-s3-demo-source-bucket \
      --profile acctA
      ```

1. Test the configuration to verify that encrypted objects are replicated. In the Amazon S3 console, do the following:

   1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   1. In the `amzn-s3-demo-source-bucket` bucket, create a folder named `Tax`. 

   1. Add sample objects to the folder. Be sure to choose the encryption option and specify your KMS key to encrypt the objects. 

   1. Verify that the `amzn-s3-demo-destination-bucket` bucket contains the object replicas and that they are encrypted by using the KMS key that you specified in the configuration. For more information, see [Getting replication status information](replication-status.md).
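
Because the `PutBucketReplication` API operation doesn't validate KMS keys, a leftover placeholder in `replication.json` surfaces only as failed replication later. The following Python sketch (not part of the AWS CLI or SDKs; the function name and checks are illustrative) shows one way to catch obvious placeholder values locally before you run `put-bucket-replication`:

```python
import json
import re

def check_replication_config(path):
    """Flag leftover placeholder values in a replication configuration
    file before passing it to put-bucket-replication."""
    with open(path) as f:
        config = json.load(f)

    problems = []
    # The role must be a real IAM role ARN, not the IAM-Role-ARN placeholder.
    if "Role" not in config or not config["Role"].startswith("arn:aws:iam::"):
        problems.append("Role is missing or not an IAM role ARN")

    # A KMS key ARN looks like arn:aws:kms:<region>:<12-digit account>:key/<id>.
    key_arn = re.compile(r"^arn:aws:kms:[a-z0-9-]+:\d{12}:key/")
    for i, rule in enumerate(config.get("Rules", [])):
        dest = rule.get("Destination", {})
        if not dest.get("Bucket", "").startswith("arn:aws:s3:::"):
            problems.append(f"Rule {i}: Destination.Bucket is not an S3 bucket ARN")
        enc = dest.get("EncryptionConfiguration")
        if enc is not None and not key_arn.match(enc.get("ReplicaKmsKeyID", "")):
            problems.append(f"Rule {i}: ReplicaKmsKeyID is not a KMS key ARN")
    return problems
```

Run it against your edited `replication.json`; an empty list means the basic ARN shapes check out, although the script can't confirm that the key actually exists or that it's in the destination bucket's Region.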

### Using the AWS SDKs
<a name="replication-ex4-sdk"></a>

For a code example that shows how to add a replication configuration, see [Using the AWS SDKs](replication-walkthrough1.md#replication-ex1-sdk). You must modify the replication configuration appropriately. 


# Replicating metadata changes with replica modification sync
<a name="replication-for-metadata-changes"></a>

Amazon S3 replica modification sync can help you keep object metadata such as tags, access control lists (ACLs), and Object Lock settings replicated between replicas and source objects. By default, Amazon S3 replicates metadata from the source objects to the replicas only. When replica modification sync is enabled, Amazon S3 replicates metadata changes made to the replica copies back to the source object, making the replication bidirectional (two-way replication).

## Enabling replica modification sync
<a name="enabling-replication-for-metadata-changes"></a>

You can use Amazon S3 replica modification sync with new or existing replication rules. You can apply it to an entire bucket or to objects that have a specific prefix.

To enable replica modification sync by using the Amazon S3 console, see [Examples for configuring live replication](replication-example-walkthroughs.md). This topic provides instructions for enabling replica modification sync in your replication configuration when the source and destination buckets are owned by the same or different AWS accounts.

To enable replica modification sync by using the AWS Command Line Interface (AWS CLI), you must add a replication configuration to the bucket containing the replicas with `ReplicaModifications` enabled. To set up two-way replication, create a replication rule from the source bucket (`amzn-s3-demo-source-bucket`) to the bucket containing the replicas (`amzn-s3-demo-destination-bucket`). Then, create a second replication rule from the bucket containing the replicas (`amzn-s3-demo-destination-bucket`) to the source bucket (`amzn-s3-demo-source-bucket`). The source and destination buckets can be in the same or different AWS Regions.

**Note**  
You must enable replica modification sync on both the source and destination buckets to replicate replica metadata changes like object access control lists (ACLs), object tags, or Object Lock settings on the replicated objects. Like all replication rules, you can apply these rules to the entire bucket or to a subset of objects filtered by prefix or object tags.

In the following example configuration, Amazon S3 replicates metadata changes under the prefix `Tax` to the bucket `amzn-s3-demo-source-bucket`, which contains the source objects.

```
{
    "Rules": [
        {
            "Status": "Enabled",
            "Filter": {
                "Prefix": "Tax"
            },
            "SourceSelectionCriteria": {
                "ReplicaModifications":{
                    "Status": "Enabled"
                }
            },
            "Destination": {
                "Bucket": "arn:aws:s3:::amzn-s3-demo-source-bucket"
            },
            "Priority": 1
        }
    ],
    "Role": "IAM-Role-ARN"
}
```
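
This configuration is the rule that you add to the bucket that contains the replicas (`amzn-s3-demo-destination-bucket`). The companion rule that you add to `amzn-s3-demo-source-bucket` points in the opposite direction; the following is a sketch of that rule, using the same placeholders:

```
{
    "Rules": [
        {
            "Status": "Enabled",
            "Filter": {
                "Prefix": "Tax"
            },
            "SourceSelectionCriteria": {
                "ReplicaModifications":{
                    "Status": "Enabled"
                }
            },
            "Destination": {
                "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket"
            },
            "Priority": 1
        }
    ],
    "Role": "IAM-Role-ARN"
}
```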

For full instructions on creating replication rules by using the AWS CLI, see [Configuring replication for buckets in the same account](replication-walkthrough1.md).

# Replicating delete markers between buckets
<a name="delete-marker-replication"></a>

By default, when S3 Replication is enabled and an object is deleted in the source bucket, Amazon S3 adds a delete marker in the source bucket only. This action helps protect data in the destination buckets from accidental or malicious deletions. If you have *delete marker replication* enabled, these markers are copied to the destination buckets, and Amazon S3 behaves as if the object was deleted in both the source and destination buckets. For more information about how delete markers work, see [Working with delete markers](DeleteMarker.md).

**Note**  
Delete marker replication isn't supported for tag-based replication rules. Delete marker replication also doesn't adhere to the 15-minute service-level agreement (SLA) that's granted when you're using S3 Replication Time Control (S3 RTC).
If you're not using the latest replication configuration XML version, delete operations affect replication differently. For more information, see [How delete operations affect replication](replication-what-is-isnot-replicated.md#replication-delete-op).
If you enable delete marker replication and your source bucket has an S3 Lifecycle expiration rule, the delete markers added by the S3 Lifecycle expiration rule won't be replicated to the destination bucket.

## Enabling delete marker replication
<a name="enabling-delete-marker-replication"></a>

You can start using delete marker replication with a new or existing replication rule. You can apply delete marker replication to an entire bucket or to objects that have a specific prefix.

To enable delete marker replication by using the Amazon S3 console, see [Using the S3 console](replication-walkthrough1.md#enable-replication). This topic provides instructions for enabling delete marker replication in your replication configuration when the source and destination buckets are owned by the same or different AWS accounts.

To enable delete marker replication by using the AWS Command Line Interface (AWS CLI), you must add a replication configuration to the source bucket with `DeleteMarkerReplication` enabled, as shown in the following example configuration. 

In the following example replication configuration, delete markers are replicated to the destination bucket `amzn-s3-demo-destination-bucket` for objects under the prefix `Tax`.

```
{
    "Rules": [
        {
            "Status": "Enabled",
            "Filter": {
                "Prefix": "Tax"
            },
            "DeleteMarkerReplication": {
                "Status": "Enabled"
            },
            "Destination": {
                "Bucket": "arn:aws:s3:::amzn-s3-demo-destination-bucket"
            },
            "Priority": 1
        }
    ],
    "Role": "IAM-Role-ARN"
}
```

For full instructions on creating replication rules through the AWS CLI, see [Configuring replication for buckets in the same account](replication-walkthrough1.md).