Amazon S3 Construct Library

Define an S3 bucket.

bucket = s3.Bucket(self, "MyFirstBucket")

Bucket constructs expose the following deploy-time attributes (a short usage sketch follows the list):

  • bucketArn - the ARN of the bucket (e.g. arn:aws:s3:::amzn-s3-demo-bucket)

  • bucketName - the name of the bucket (e.g. amzn-s3-demo-bucket)

  • bucketWebsiteUrl - the Website URL of the bucket (e.g. http://amzn-s3-demo-bucket.s3-website-us-west-1.amazonaws.com)

  • bucketDomainName - the URL of the bucket (e.g. amzn-s3-demo-bucket.s3.amazonaws.com)

  • bucketDualStackDomainName - the dual-stack URL of the bucket (e.g. amzn-s3-demo-bucket.s3.dualstack.eu-west-1.amazonaws.com)

  • bucketRegionalDomainName - the regional URL of the bucket (e.g. amzn-s3-demo-bucket.s3.eu-west-1.amazonaws.com)

  • arnForObjects(pattern) - the ARN of an object or objects within the bucket (e.g. arn:aws:s3:::amzn-s3-demo-bucket/exampleobject.png or arn:aws:s3:::amzn-s3-demo-bucket/Development/*)

  • urlForObject(key) - the HTTP URL of an object within the bucket (e.g. https://s3.cn-north-1.amazonaws.com.cn/china-bucket/mykey)

  • virtualHostedUrlForObject(key) - the virtual-hosted style HTTP URL of an object within the bucket (e.g. https://china-bucket.s3.cn-north-1.amazonaws.com.cn/mykey)

  • s3UrlForObject(key) - the S3 URL of an object within the bucket (e.g. s3://bucket/mykey)
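For example, these attributes can be wired into stack outputs or referenced by other constructs. A minimal sketch (the output IDs and object key are illustrative):

bucket = s3.Bucket(self, "MyFirstBucket")

cdk.CfnOutput(self, "BucketArnOutput", value=bucket.bucket_arn)
cdk.CfnOutput(self, "ObjectUrlOutput", value=bucket.url_for_object("index.html"))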

Encryption

Define a KMS-encrypted bucket:

bucket = s3.Bucket(self, "MyEncryptedBucket",
    encryption=s3.BucketEncryption.KMS
)

# you can access the encryption key:
assert isinstance(bucket.encryption_key, kms.Key)

You can also supply your own key:

my_kms_key = kms.Key(self, "MyKey")

bucket = s3.Bucket(self, "MyEncryptedBucket",
    encryption=s3.BucketEncryption.KMS,
    encryption_key=my_kms_key
)

assert(bucket.encryption_key == my_kms_key)

Enable KMS-SSE encryption via S3 Bucket Keys:

bucket = s3.Bucket(self, "MyEncryptedBucket",
    encryption=s3.BucketEncryption.KMS,
    bucket_key_enabled=True
)

Use BucketEncryption.KMS_MANAGED to use the S3 master KMS key:

bucket = s3.Bucket(self, "Buck",
    encryption=s3.BucketEncryption.KMS_MANAGED
)

assert bucket.encryption_key is None

Enable DSSE encryption:

bucket = s3.Bucket(self, "MyDSSEBucket",
    encryption=s3.BucketEncryption.DSSE_MANAGED,
    bucket_key_enabled=True
)

Permissions

A bucket policy will be automatically created for the bucket upon the first call to addToResourcePolicy(statement):

bucket = s3.Bucket(self, "MyBucket")
result = bucket.add_to_resource_policy(
    iam.PolicyStatement(
        actions=["s3:GetObject"],
        resources=[bucket.arn_for_objects("file.txt")],
        principals=[iam.AccountRootPrincipal()]
    ))

If you try to add a policy statement to an existing bucket, this method will not do anything:

bucket = s3.Bucket.from_bucket_name(self, "existingBucket", "amzn-s3-demo-bucket")

# No policy statement will be added to the resource
result = bucket.add_to_resource_policy(
    iam.PolicyStatement(
        actions=["s3:GetObject"],
        resources=[bucket.arn_for_objects("file.txt")],
        principals=[iam.AccountRootPrincipal()]
    ))

That’s because it’s not possible to tell whether the bucket already has a policy attached, let alone to re-use that policy to add more statements to it. We recommend that you always check the result of the call:

bucket = s3.Bucket(self, "MyBucket")
result = bucket.add_to_resource_policy(
    iam.PolicyStatement(
        actions=["s3:GetObject"],
        resources=[bucket.arn_for_objects("file.txt")],
        principals=[iam.AccountRootPrincipal()]
    ))

if not result.statement_added:
    # Uh-oh! The statement was not added; handle this case (for example, the bucket may be imported).
    pass

The bucket policy can be directly accessed after creation to add statements or adjust the removal policy.

bucket = s3.Bucket(self, "MyBucket")
bucket.policy.apply_removal_policy(cdk.RemovalPolicy.RETAIN)

Most of the time, you won’t have to manipulate the bucket policy directly. Instead, buckets have “grant” methods called to give prepackaged sets of permissions to other resources. For example:

# my_lambda: lambda.Function


bucket = s3.Bucket(self, "MyBucket")
bucket.grant_read_write(my_lambda)

Will give the Lambda’s execution role permissions to read and write from the bucket.
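Other grant methods follow the same pattern, and some accept an optional object key pattern to scope the grant. A minimal sketch (the user and key pattern are illustrative):

user = iam.User(self, "MyUser")
bucket.grant_read(user, "home/*")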

AWS Foundational Security Best Practices

Enforcing SSL

To require that all requests use Secure Socket Layer (SSL):

bucket = s3.Bucket(self, "Bucket",
    enforce_ssl=True
)

To require a minimum TLS version for all requests:

bucket = s3.Bucket(self, "Bucket",
    enforce_ssl=True,
    minimum_tls_version=1.2
)

Sharing buckets between stacks

To use a bucket in a different stack in the same CDK application, pass the object to the other stack:

#
# Stack that defines the bucket
#
class Producer(Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)

        bucket = s3.Bucket(self, "MyBucket",
            removal_policy=cdk.RemovalPolicy.DESTROY
        )
        self.my_bucket = bucket

#
# Stack that consumes the bucket
#
class Consumer(Stack):
    def __init__(self, scope, id, *, user_bucket, **kwargs):
        super().__init__(scope, id, **kwargs)

        user = iam.User(self, "MyUser")
        user_bucket.grant_read_write(user)

app = App()
producer = Producer(app, "ProducerStack")
Consumer(app, "ConsumerStack", user_bucket=producer.my_bucket)

Importing existing buckets

To import an existing bucket into your CDK application, use the Bucket.fromBucketAttributes factory method. This method accepts BucketAttributes which describes the properties of an already existing bucket:

Note that this method allows importing buckets with legacy names containing uppercase letters (A-Z) or underscores (_), which were permitted for buckets created before March 1, 2018. For buckets created after this date, uppercase letters and underscores are not allowed in the bucket name.

# my_lambda: lambda.Function

bucket = s3.Bucket.from_bucket_attributes(self, "ImportedBucket",
    bucket_arn="arn:aws:s3:::amzn-s3-demo-bucket"
)

# now you can just call methods on the bucket
bucket.add_event_notification(s3.EventType.OBJECT_CREATED, s3n.LambdaDestination(my_lambda),
    prefix="home/myusername/*"
)

Alternatively, short-hand factories are available as Bucket.fromBucketName and Bucket.fromBucketArn, which will derive all bucket attributes from the bucket name or ARN respectively:

by_name = s3.Bucket.from_bucket_name(self, "BucketByName", "amzn-s3-demo-bucket")
by_arn = s3.Bucket.from_bucket_arn(self, "BucketByArn", "arn:aws:s3:::amzn-s3-demo-bucket")

The bucket’s region defaults to the current stack’s region, but can also be explicitly set in cases where one of the bucket’s regional properties needs to contain the correct values.

my_cross_region_bucket = s3.Bucket.from_bucket_attributes(self, "CrossRegionImport",
    bucket_arn="arn:aws:s3:::amzn-s3-demo-bucket",
    region="us-east-1"
)

Bucket Notifications

The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket as described under S3 Bucket Notifications of the S3 Developer Guide.

To subscribe for bucket notifications, use the bucket.addEventNotification method. The bucket.addObjectCreatedNotification and bucket.addObjectRemovedNotification can also be used for these common use cases.
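For instance, a brief sketch of the shorthand methods (the bucket ID and queue are placeholders):

# my_queue: sqs.Queue

bucket = s3.Bucket(self, "MyShorthandBucket")
bucket.add_object_created_notification(s3n.SqsDestination(my_queue))
bucket.add_object_removed_notification(s3n.SqsDestination(my_queue))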

The following example will subscribe an SNS topic to be notified of all s3:ObjectCreated:* events:

bucket = s3.Bucket(self, "MyBucket")
topic = sns.Topic(self, "MyTopic")
bucket.add_event_notification(s3.EventType.OBJECT_CREATED, s3n.SnsDestination(topic))

This call will also ensure that the topic policy can accept notifications for this specific bucket.

Supported S3 notification targets are exposed by the aws-cdk-lib/aws-s3-notifications package.

It is also possible to specify S3 object key filters when subscribing. The following example will notify myQueue when objects prefixed with foo/ that have the .jpg suffix are removed from the bucket.

# my_queue: sqs.Queue

bucket = s3.Bucket(self, "MyBucket")
bucket.add_event_notification(s3.EventType.OBJECT_REMOVED, s3n.SqsDestination(my_queue),
    prefix="foo/",
    suffix=".jpg"
)

Adding notifications on existing buckets:

# topic: sns.Topic

bucket = s3.Bucket.from_bucket_attributes(self, "ImportedBucket",
    bucket_arn="arn:aws:s3:::amzn-s3-demo-bucket"
)
bucket.add_event_notification(s3.EventType.OBJECT_CREATED, s3n.SnsDestination(topic))

If you do not want S3 to validate permissions of Amazon SQS, Amazon SNS, and Lambda destinations, you can use the notificationsSkipDestinationValidation flag:

# my_queue: sqs.Queue

bucket = s3.Bucket(self, "MyBucket",
    notifications_skip_destination_validation=True
)
bucket.add_event_notification(s3.EventType.OBJECT_REMOVED, s3n.SqsDestination(my_queue))

When you add an event notification to a bucket, a custom resource is created to manage the notifications. By default, a new role is created for the Lambda function that implements this feature. If you want to use your own role instead, you should provide it in the Bucket constructor:

# my_role: iam.IRole

bucket = s3.Bucket(self, "MyBucket",
    notifications_handler_role=my_role
)

Whatever role you provide, the CDK will try to modify it by adding the permissions from AWSLambdaBasicExecutionRole (an AWS managed policy) as well as the permissions s3:PutBucketNotification and s3:GetBucketNotification. If you’re passing an imported role, and you don’t want this to happen, configure it to be immutable:

imported_role = iam.Role.from_role_arn(self, "role", "arn:aws:iam::123456789012:role/RoleName",
    mutable=False
)
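The imported role can then be passed to the bucket as shown earlier; a brief sketch (the bucket ID is illustrative):

bucket = s3.Bucket(self, "MyNotifyingBucket",
    notifications_handler_role=imported_role
)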

If you provide an imported immutable role, make sure that it has at least all the permissions mentioned above. Otherwise, the deployment will fail!

EventBridge notifications

Amazon S3 can send events to Amazon EventBridge whenever certain events happen in your bucket. Unlike other destinations, you don’t need to select which event types you want to deliver.

The following example will enable EventBridge notifications:

bucket = s3.Bucket(self, "MyEventBridgeBucket",
    event_bridge_enabled=True
)
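Once enabled, the events can be matched with an EventBridge rule. A minimal sketch, assuming aws_cdk.aws_events is imported as events (the rule ID and pattern are illustrative):

rule = events.Rule(self, "S3ObjectCreatedRule",
    event_pattern=events.EventPattern(
        source=["aws.s3"],
        detail_type=["Object Created"],
        detail={"bucket": {"name": [bucket.bucket_name]}}
    )
)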

Block Public Access

Use blockPublicAccess to specify block public access settings on the bucket.

Enable all block public access settings:

bucket = s3.Bucket(self, "MyBlockedBucket",
    block_public_access=s3.BlockPublicAccess.BLOCK_ALL
)

Block and ignore public ACLs:

bucket = s3.Bucket(self, "MyBlockedBucket",
    block_public_access=s3.BlockPublicAccess.BLOCK_ACLS
)

Alternatively, specify the settings manually:

bucket = s3.Bucket(self, "MyBlockedBucket",
    block_public_access=s3.BlockPublicAccess(block_public_policy=True)
)

When blockPublicPolicy is set to true, grantPublicRead() throws an error.
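Conversely, grantPublicRead() works when public bucket policies are not blocked. A hedged sketch (the bucket ID and key prefix are illustrative; account-level Block Public Access must also allow public policies):

bucket = s3.Bucket(self, "PublicReadBucket",
    block_public_access=s3.BlockPublicAccess(
        block_public_policy=False,
        restrict_public_buckets=False
    )
)
bucket.grant_public_read("assets/*")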

Public Read Access

Use publicReadAccess to allow public read access to the bucket.

Note that to enable publicReadAccess, make sure both bucket-level and account-level block public access controls are disabled.

Bucket-level block public access control can be configured through the blockPublicAccess property. Account-level block public access control can be configured in the AWS Console under S3 -> Block Public Access settings for this account (navigation panel).

bucket = s3.Bucket(self, "Bucket",
    public_read_access=True,
    block_public_access=s3.BlockPublicAccess(
        block_public_policy=False,
        block_public_acls=False,
        ignore_public_acls=False,
        restrict_public_buckets=False
    )
)

Logging configuration

Use serverAccessLogsBucket to describe where server access logs are to be stored.

access_logs_bucket = s3.Bucket(self, "AccessLogsBucket")

bucket = s3.Bucket(self, "MyBucket",
    server_access_logs_bucket=access_logs_bucket
)

It’s also possible to specify a prefix for Amazon S3 to assign to all log object keys.

access_logs_bucket = s3.Bucket(self, "AccessLogsBucket")

bucket = s3.Bucket(self, "MyBucket",
    server_access_logs_bucket=access_logs_bucket,
    server_access_logs_prefix="logs"
)

You have two options for the log object key format. Non-date-based partitioning is the default log object key format and appears as follows:

[DestinationPrefix][YYYY]-[MM]-[DD]-[hh]-[mm]-[ss]-[UniqueString]

access_logs_bucket = s3.Bucket(self, "AccessLogsBucket")

bucket = s3.Bucket(self, "MyBucket",
    server_access_logs_bucket=access_logs_bucket,
    server_access_logs_prefix="logs",
    # You can set a simple prefix explicitly with `TargetObjectKeyFormat.simplePrefix()`, but this is also the default when the `targetObjectKeyFormat` property is not specified.
    target_object_key_format=s3.TargetObjectKeyFormat.simple_prefix()
)

Another option is Date-based partitioning. If you choose this format, you can select either the event time or the delivery time of the log file as the date source used in the log format. This format appears as follows:

[DestinationPrefix][SourceAccountId]/[SourceRegion]/[SourceBucket]/[YYYY]/[MM]/[DD]/[YYYY]-[MM]-[DD]-[hh]-[mm]-[ss]-[UniqueString]

access_logs_bucket = s3.Bucket(self, "AccessLogsBucket")

bucket = s3.Bucket(self, "MyBucket",
    server_access_logs_bucket=access_logs_bucket,
    server_access_logs_prefix="logs",
    target_object_key_format=s3.TargetObjectKeyFormat.partitioned_prefix(s3.PartitionDateSource.EVENT_TIME)
)

S3 Inventory

An inventory contains a list of the objects in the source bucket and metadata for each object. The inventory lists are stored in the destination bucket as a CSV file compressed with GZIP, as an Apache optimized row columnar (ORC) file compressed with ZLIB, or as an Apache Parquet (Parquet) file compressed with Snappy.

You can configure multiple inventory lists for a bucket. You can configure what object metadata to include in the inventory, whether to list all object versions or only current versions, where to store the inventory list file output, and whether to generate the inventory on a daily or weekly basis.

inventory_bucket = s3.Bucket(self, "InventoryBucket")

data_bucket = s3.Bucket(self, "DataBucket",
    inventories=[s3.Inventory(
        frequency=s3.InventoryFrequency.DAILY,
        include_object_versions=s3.InventoryObjectVersion.CURRENT,
        destination=s3.InventoryDestination(
            bucket=inventory_bucket
        )
    ), s3.Inventory(
        frequency=s3.InventoryFrequency.WEEKLY,
        include_object_versions=s3.InventoryObjectVersion.ALL,
        destination=s3.InventoryDestination(
            bucket=inventory_bucket,
            prefix="with-all-versions"
        )
    )
    ]
)

If the destination bucket is created as part of the same CDK application, the necessary permissions will be automatically added to the bucket policy. However, if you use an imported bucket (i.e. Bucket.fromXXX()), you'll have to make sure it contains the following policy document:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "InventoryAndAnalyticsExamplePolicy",
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": ["arn:aws:s3:::amzn-s3-demo-destination-bucket/*"]
    }
  ]
}

Website redirection

You can use the following two properties to specify the bucket's redirection policy. Note that these two properties cannot both be applied to the same bucket.

Static redirection

You can statically redirect to a given bucket URL or any other host name with websiteRedirect:

bucket = s3.Bucket(self, "MyRedirectedBucket",
    website_redirect=s3.RedirectTarget(host_name="www.example.com")
)

Routing rules

Alternatively, you can also define multiple websiteRoutingRules, to define complex, conditional redirections:

bucket = s3.Bucket(self, "MyRedirectedBucket",
    website_routing_rules=[s3.RoutingRule(
        host_name="www.example.com",
        http_redirect_code="302",
        protocol=s3.RedirectProtocol.HTTPS,
        replace_key=s3.ReplaceKey.prefix_with("test/"),
        condition=s3.RoutingRuleCondition(
            http_error_code_returned_equals="200",
            key_prefix_equals="prefix"
        )
    )
    ]
)

Filling the bucket as part of deployment

To put files into a bucket as part of a deployment (for example, to host a website), see the aws-cdk-lib/aws-s3-deployment package, which provides a resource that can do just that.
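A minimal sketch of what that looks like, assuming the aws_cdk.aws_s3_deployment module is imported as s3deploy and a local ./website directory exists (the construct IDs are illustrative):

website_bucket = s3.Bucket(self, "WebsiteBucket",
    website_index_document="index.html"
)

s3deploy.BucketDeployment(self, "DeployWebsite",
    sources=[s3deploy.Source.asset("./website")],
    destination_bucket=website_bucket
)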

The URL for objects

S3 provides two types of URLs for accessing objects via HTTP(S): path-style and virtual-hosted-style. Path-style is the classic form and is slated for deprecation, so we recommend using virtual-hosted-style URLs for newly created buckets.

You can generate both of them.

bucket = s3.Bucket(self, "MyBucket")
bucket.url_for_object("objectname") # Path-Style URL
bucket.virtual_hosted_url_for_object("objectname") # Virtual Hosted-Style URL
bucket.virtual_hosted_url_for_object("objectname", regional=False)

Object Ownership

You can use one of the following properties to specify the bucket's object ownership.

Object writer

The uploading account will own the object.

s3.Bucket(self, "MyBucket",
    object_ownership=s3.ObjectOwnership.OBJECT_WRITER
)

Bucket owner preferred

The bucket owner will own the object if the object is uploaded with the bucket-owner-full-control canned ACL. Without this setting and canned ACL, the object is uploaded and remains owned by the uploading account.

s3.Bucket(self, "MyBucket",
    object_ownership=s3.ObjectOwnership.BUCKET_OWNER_PREFERRED
)

Bucket deletion

When a bucket is removed from a stack (or the stack is deleted), the S3 bucket will be removed according to its removal policy (which by default will simply orphan the bucket and leave it in your AWS account). If the removal policy is set to RemovalPolicy.DESTROY, the bucket will be deleted as long as it does not contain any objects.

To override this and force all objects to get deleted during bucket deletion, enable the autoDeleteObjects option.

When autoDeleteObjects is enabled, s3:PutBucketPolicy is added to the bucket policy. This is done to allow the custom resource this feature is built on to add a deny policy for s3:PutObject to the bucket policy when a delete stack event occurs. Adding this deny policy prevents new objects from being written to the bucket. Doing this prevents race conditions with external bucket writers during the deletion process.

bucket = s3.Bucket(self, "MyTempFileBucket",
    removal_policy=cdk.RemovalPolicy.DESTROY,
    auto_delete_objects=True
)

Warning: if you have deployed a bucket with autoDeleteObjects: true, switching this to false in a CDK version before 1.126.0 will lead to all objects in the bucket being deleted. Be sure to update your bucket resources by deploying with CDK version 1.126.0 or later before switching this value to false.

Transfer Acceleration

Transfer Acceleration can be configured to enable fast, easy, and secure transfers of files over long distances:

bucket = s3.Bucket(self, "MyBucket",
    transfer_acceleration=True
)

To access the bucket that is enabled for Transfer Acceleration, you must use a special endpoint. The URL can be generated using method transferAccelerationUrlForObject:

bucket = s3.Bucket(self, "MyBucket",
    transfer_acceleration=True
)
bucket.transfer_acceleration_url_for_object("objectname")

Intelligent Tiering

Intelligent Tiering can be configured to automatically move files to Glacier:

s3.Bucket(self, "MyBucket",
    intelligent_tiering_configurations=[s3.IntelligentTieringConfiguration(
        name="foo",
        prefix="folder/name",
        archive_access_tier_time=Duration.days(90),
        deep_archive_access_tier_time=Duration.days(180),
        tags=[s3.Tag(key="tagname", value="tagvalue")]
    )
    ]
)

Lifecycle Rule

Lifecycle management can be configured with transition or expiration actions.

bucket = s3.Bucket(self, "MyBucket",
    lifecycle_rules=[s3.LifecycleRule(
        abort_incomplete_multipart_upload_after=Duration.minutes(30),
        enabled=False,
        expiration=Duration.days(30),
        expiration_date=datetime(2030, 1, 1),  # illustrative date; requires "from datetime import datetime"
        expired_object_delete_marker=False,
        id="id",
        noncurrent_version_expiration=Duration.days(30),

        # the properties below are optional
        noncurrent_versions_to_retain=123,
        noncurrent_version_transitions=[s3.NoncurrentVersionTransition(
            storage_class=s3.StorageClass.GLACIER,
            transition_after=Duration.days(30),

            # the properties below are optional
            noncurrent_versions_to_retain=123
        )
        ],
        object_size_greater_than=500,
        prefix="prefix",
        object_size_less_than=10000,
        transitions=[s3.Transition(
            storage_class=s3.StorageClass.GLACIER,

            # the properties below are optional
            transition_after=Duration.days(30),
            transition_date=datetime(2030, 1, 1)  # illustrative date
        )
        ]
    )
    ]
)

To indicate which default minimum object size behavior is applied to the lifecycle configuration, use the transitionDefaultMinimumObjectSize property.

Before September 2024, the default value of this property is TransitionDefaultMinimumObjectSize.VARIES_BY_STORAGE_CLASS, which allows objects smaller than 128 KB to be transitioned only to the S3 Glacier and S3 Glacier Deep Archive storage classes; otherwise the default is TransitionDefaultMinimumObjectSize.ALL_STORAGE_CLASSES_128_K, which prevents objects smaller than 128 KB from being transitioned to any storage class.

To customize the minimum object size for any transition you can add a filter that specifies a custom objectSizeGreaterThan or objectSizeLessThan for lifecycleRules property. Custom filters always take precedence over the default transition behavior.

s3.Bucket(self, "MyBucket",
    transition_default_minimum_object_size=s3.TransitionDefaultMinimumObjectSize.VARIES_BY_STORAGE_CLASS,
    lifecycle_rules=[s3.LifecycleRule(
        transitions=[s3.Transition(
            storage_class=s3.StorageClass.DEEP_ARCHIVE,
            transition_after=Duration.days(30)
        )]
    ), s3.LifecycleRule(
        object_size_less_than=300000,
        object_size_greater_than=200000,
        transitions=[s3.Transition(
            storage_class=s3.StorageClass.ONE_ZONE_INFREQUENT_ACCESS,
            transition_after=Duration.days(30)
        )]
    )
    ]
)

Object Lock Configuration

Object Lock can be configured to enable a write-once-read-many model for an S3 bucket. Object Lock must be configured when a bucket is created; if a bucket is created without Object Lock, it cannot be enabled later via the CDK.

Object Lock can be enabled on an S3 bucket by specifying:

bucket = s3.Bucket(self, "MyBucket",
    object_lock_enabled=True
)

Usually, it is desired to not just enable Object Lock for a bucket but to also configure a retention mode and a retention period. These can be specified by providing objectLockDefaultRetention:

# Configure for governance mode with a duration of 7 years
s3.Bucket(self, "Bucket1",
    object_lock_default_retention=s3.ObjectLockRetention.governance(Duration.days(7 * 365))
)

# Configure for compliance mode with a duration of 1 year
s3.Bucket(self, "Bucket2",
    object_lock_default_retention=s3.ObjectLockRetention.compliance(Duration.days(365))
)

Replicating Objects

You can use object replication to enable automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts. You can replicate objects to a single destination bucket or to multiple destination buckets. The destination buckets can be in different AWS Regions or within the same Region as the source bucket.

To replicate objects to a destination bucket, you can specify the replicationRules property:

# destination_bucket1: s3.IBucket
# destination_bucket2: s3.IBucket
# kms_key: kms.IKey


source_bucket = s3.Bucket(self, "SourceBucket",
    # Versioning must be enabled on both the source and destination bucket
    versioned=True,
    replication_rules=[s3.ReplicationRule(
        # The destination bucket for the replication rule.
        destination=destination_bucket1,
        # The priority of the rule.
        # Amazon S3 will attempt to replicate objects according to all replication rules.
        # However, if there are two or more rules with the same destination bucket, then objects will be replicated according to the rule with the highest priority.
        # The higher the number, the higher the priority.
        # It is essential to specify priority explicitly when the replication configuration has multiple rules.
        priority=1
    ), s3.ReplicationRule(
        destination=destination_bucket2,
        priority=2,
        # Whether to specify S3 Replication Time Control (S3 RTC).
        # S3 RTC replicates most objects that you upload to Amazon S3 in seconds,
        # and 99.99 percent of those objects within specified time.
        replication_time_control=s3.ReplicationTimeValue.FIFTEEN_MINUTES,
        # Whether to enable replication metrics about S3 RTC.
        # If set, metrics will be output to indicate whether replication by S3 RTC took longer than the configured time.
        metrics=s3.ReplicationTimeValue.FIFTEEN_MINUTES,
        # The kms key to use for the destination bucket.
        kms_key=kms_key,
        # The storage class to use for the destination bucket.
        storage_class=s3.StorageClass.INFREQUENT_ACCESS,
        # Whether to replicate objects with SSE-KMS encryption.
        sse_kms_encrypted_objects=False,
        # Whether to replicate modifications on replicas.
        replica_modifications=True,
        # Whether to replicate delete markers.
        # This property cannot be enabled if the replication rule has a tag filter.
        delete_marker_replication=False,
        # The ID of the rule.
        id="full-settings-rule",
        # The object filter for the rule.
        filter=s3.Filter(
            # The prefix filter for the rule.
            prefix="prefix",
            # The tag filter for the rule.
            tags=[s3.Tag(
                key="tagKey",
                value="tagValue"
            )
            ]
        )
    )
    ]
)

Cross Account Replication

You can also set a destination bucket from a different account as the replication destination.

In this case, a bucket policy on the destination bucket is required. To configure it through the CDK, use the addReplicationPolicy() method to add the bucket policy to the destination bucket. In a cross-account scenario, where the source and destination buckets are owned by different AWS accounts, you can use a KMS key to encrypt object replicas. However, the KMS key owner must grant the source bucket owner permission to use the KMS key. For more information, please refer to https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-walkthrough-2.html.

NOTE: AWS managed keys don’t allow cross-account use, and therefore can’t be used to perform cross-account replication.

If you need to override bucket ownership to the destination account, pass the account ID to the method to grant permission to override the bucket owner: addReplicationPolicy(bucket.replicationRoleArn, true, '111111111111');

However, if the destination bucket is a referenced bucket, CDK cannot set the bucket policy, so you will need to configure the necessary bucket policy separately.

# The destination bucket in a different account.
# destination_bucket: s3.IBucket


source_bucket = s3.Bucket(self, "SourceBucket",
    versioned=True,
    replication_rules=[s3.ReplicationRule(
        destination=destination_bucket,
        priority=1,
        # Whether to want to change replica ownership to the AWS account that owns the destination bucket.
        # The replicas are owned by same AWS account that owns the source object by default.
        access_control_transition=True
    )
    ]
)

# Add permissions to the destination after replication role is created
if source_bucket.replication_role_arn:
    destination_bucket.add_replication_policy(source_bucket.replication_role_arn, True, "111111111111")