

# Working with objects in a directory bucket
<a name="directory-buckets-objects"></a>

After you create an Amazon S3 directory bucket, you can work with objects by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), and the AWS SDKs.

For more information about performing bulk operations, importing, uploading, copying, deleting, and downloading objects in directory buckets, see the following topics.

**Topics**
+ [Importing objects into a directory bucket](create-import-job.md)
+ [Working with S3 Lifecycle for directory buckets](directory-buckets-objects-lifecycle.md)
+ [Using Batch Operations with directory buckets](directory-buckets-objects-Batch-Ops.md)
+ [Appending data to objects in directory buckets](directory-buckets-objects-append.md)
+ [Renaming objects in directory buckets](directory-buckets-objects-rename.md)
+ [Uploading objects to a directory bucket](directory-buckets-objects-upload.md)
+ [Copying objects from or to a directory bucket](directory-buckets-objects-copy.md)
+ [Deleting objects from a directory bucket](directory-bucket-delete-object.md)
+ [Downloading an object from a directory bucket](directory-buckets-objects-GetExamples.md)
+ [Generating presigned URLs to share objects in a directory bucket](directory-buckets-objects-generate-presigned-url-Examples.md)
+ [Retrieving object metadata from directory buckets](directory-buckets-objects-HeadObjectExamples.md)
+ [Listing objects from a directory bucket](directory-buckets-objects-listobjectsExamples.md)

# Importing objects into a directory bucket
<a name="create-import-job"></a>

After you create a directory bucket in Amazon S3, you can populate the new bucket with data by using the import action. Import is a streamlined method for creating S3 Batch Operations jobs to copy objects from general purpose buckets to directory buckets. 

**Note**  
The following limitations apply to import jobs:  
+ The source bucket and the destination bucket must be in the same AWS Region and AWS account.
+ The source bucket can't be a directory bucket.
+ Objects larger than 5 GB aren't supported and are omitted from the copy operation.
+ Objects in the S3 Glacier Flexible Retrieval, S3 Glacier Deep Archive, S3 Intelligent-Tiering Archive Access tier, and S3 Intelligent-Tiering Deep Archive tier storage classes must be restored before they can be imported.
+ Imported objects that use MD5 checksums are converted to use CRC32 checksums.
+ Imported objects use the S3 Express One Zone storage class, which has a different pricing structure than the storage classes used by general purpose buckets. Consider this difference in cost when importing large numbers of objects.

When you configure an import job, you specify the source bucket or prefix where the existing objects will be copied from. You also provide an AWS Identity and Access Management (IAM) role that has permissions to access the source objects. Amazon S3 then starts a Batch Operations job that copies the objects and automatically applies appropriate storage class and checksum settings.
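If you choose to supply your own IAM role rather than having Amazon S3 create one, the role must trust the S3 Batch Operations service principal so that the service can assume it on your behalf. The following is a minimal trust policy sketch. The `batchoperations.s3.amazonaws.com` service principal is the one used by S3 Batch Operations; the permissions policy that you attach to the role must separately grant access to the source objects and the destination bucket.

```
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": {
            "Service": "batchoperations.s3.amazonaws.com"
         },
         "Action": "sts:AssumeRole"
      }
   ]
}
```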

To configure import jobs, you use the Amazon S3 console.

## Using the Amazon S3 console
<a name="create-import-job-console-procedure"></a>

**To import objects into a directory bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**, and then choose the **Directory buckets** tab. Choose the option button next to the directory bucket that you want to import objects into.

1. Choose **Import**.

1. For **Source**, enter the general purpose bucket (or bucket path including prefix) that contains the objects that you want to import. To choose an existing general purpose bucket from a list, choose **Browse S3**.

1. For **Permission to access and copy source objects**, do one of the following to specify an IAM role with the permissions necessary to import your source objects:
   + To allow Amazon S3 to create a new IAM role on your behalf, choose **Create new IAM role**.
   + To choose an existing IAM role from a list, choose **Choose from existing IAM roles**.
   + To specify an existing IAM role by entering its Amazon Resource Name (ARN), choose **Enter IAM role ARN**, then enter the ARN in the corresponding field.

1. Review the information that's displayed in the **Destination** and **Copied object settings** sections. If the information in the **Destination** section is correct, choose **Import** to start the copy job.

   The Amazon S3 console displays the status of your new job on the **Batch Operations** page. For more information about the job, choose the option button next to the job name, and then on the **Actions** menu, choose **View details**. To open the directory bucket that the objects will be imported into, choose **View import destination**.

# Working with S3 Lifecycle for directory buckets
<a name="directory-buckets-objects-lifecycle"></a>

S3 Lifecycle helps you store objects in S3 Express One Zone in directory buckets cost-effectively by deleting expired objects on your behalf. To manage the lifecycle of your objects, create an S3 Lifecycle configuration for your directory bucket. An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. You can set an S3 Lifecycle configuration on a directory bucket by using the AWS Command Line Interface (AWS CLI), the AWS SDKs, the Amazon S3 REST API, and AWS CloudFormation.

In your lifecycle configuration, you use rules to define actions that you want Amazon S3 to take on your objects. For objects stored in directory buckets, you can create lifecycle rules to expire objects as they age. You can also create lifecycle rules to delete incomplete multipart uploads in directory buckets at a daily frequency. 

When you add a Lifecycle configuration to a bucket, the configuration rules apply to both existing objects and objects that you add later. For example, if you add a Lifecycle configuration rule today with an expiration action that causes objects with a specific prefix to expire 30 days after creation, S3 will queue for removal any existing objects that are more than 30 days old and that have the specified prefix.

## How S3 Lifecycle for directory buckets is different
<a name="directory-bucket-lifecycle-differences"></a>

For objects in directory buckets, you can create lifecycle rules to expire objects and delete incomplete multipart uploads. However, S3 Lifecycle for directory buckets doesn't support transition actions between storage classes. 

**CreateSession**

S3 Lifecycle uses the public `DeleteObject` and `DeleteObjects` API operations to expire objects in directory buckets. To use these API operations, S3 Lifecycle uses the `CreateSession` API operation to establish temporary security credentials to access the objects in the directory buckets. For more information, see [`CreateSession`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html) in the *Amazon S3 API Reference*.

If you have an active policy that denies delete permissions to the S3 Lifecycle service principal, that policy prevents S3 Lifecycle from deleting objects on your behalf.
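Before relying on S3 Lifecycle, it can help to review your bucket policy for such Deny statements. The following is a minimal, illustrative Python sketch, not an AWS API; a real review would also need to consider `NotAction`, resource scoping, and condition keys:

```python
# Illustrative check (not an AWS API): scan a parsed bucket policy document
# for Deny statements on s3express:CreateSession that could block the
# S3 Lifecycle service principal from creating sessions.
def denies_lifecycle_delete(policy: dict) -> bool:
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Deny":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if any(a in ("s3express:CreateSession", "s3express:*", "*") for a in actions):
            return True
    return False
```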

### Using a bucket policy to grant permissions to the S3 Lifecycle service principal
<a name="lifecycle-directory-bucket-policy"></a>

The following bucket policy grants the S3 Lifecycle service principal permission to create sessions for performing operations such as `DeleteObject` and `DeleteObjects`. When no session mode is specified in a `CreateSession` request, the session is created with the maximum privilege allowed by your permissions (attempting `ReadWrite` first, then falling back to `ReadOnly` if `ReadWrite` isn't permitted). However, `ReadOnly` sessions are insufficient for lifecycle operations that modify or delete objects. Therefore, this example explicitly requires a `ReadWrite` session mode by using the `s3express:SessionMode` condition key.

**Example – Bucket policy to allow `CreateSession` calls with an explicit `ReadWrite` session mode for lifecycle operations**  

```
{
   "Version": "2008-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": {
            "Service": "lifecycle.s3.amazonaws.com"
         },
         "Action": "s3express:CreateSession",
         "Condition": {
            "StringEquals": {
               "s3express:SessionMode": "ReadWrite"
            }
         },
         "Resource": "arn:aws:s3express:us-east-2:412345678921:bucket/amzn-s3-demo-bucket--use2-az2--x-s3"
      }
   ]
}
```

### Monitoring lifecycle rules
<a name="lifecycle-directory-bucket-monitoring"></a>

For objects stored in directory buckets, S3 Lifecycle generates AWS CloudTrail management and data event logs. For more information, see [CloudTrail log file examples for S3 Express One Zone](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-log-files.html). 

For more information about creating lifecycle configurations and troubleshooting S3 Lifecycle related issues, see the following topics: 

**Topics**
+ [Creating and managing a Lifecycle configuration for your directory bucket](directory-bucket-create-lc.md)
+ [Troubleshooting S3 Lifecycle issues for directory buckets](directory-buckets-lifecycle-troubleshooting.md)

# Creating and managing a Lifecycle configuration for your directory bucket
<a name="directory-bucket-create-lc"></a>

You can create a lifecycle configuration for directory buckets by using the AWS Command Line Interface (AWS CLI), the AWS SDKs, and the REST API.

## Using the AWS CLI
<a name="set-lifecycle-config-cli"></a>

You can use the following AWS CLI commands to manage S3 Lifecycle configurations:
+ `put-bucket-lifecycle-configuration`
+ `get-bucket-lifecycle-configuration`
+ `delete-bucket-lifecycle`

For instructions on setting up the AWS CLI, see [Developing with Amazon S3 using the AWS CLI](https://docs.aws.amazon.com/AmazonS3/latest/API/setup-aws-cli.html) in the *Amazon S3 API Reference*.

The Amazon S3 Lifecycle configuration is an XML file. However, when you use the AWS CLI, you must specify the configuration in JSON format instead. The following examples show a JSON lifecycle configuration that you can specify in an AWS CLI command and the equivalent XML configuration.

The following AWS CLI example puts a lifecycle configuration policy on a directory bucket. This policy specifies that all objects that have the prefix `myprefix/` and fall within the specified object size range expire after 7 days. To use this example, replace each *user input placeholder* with your own information.

Save the lifecycle configuration policy to a JSON file. In this example, the file is named `lc-policy.json`.

**Example**  

```
{
    "Rules": [
        {
        "Expiration": {
            "Days": 7
        },
        "ID": "Lifecycle expiration rule",
        "Filter": {
            "And": {
                "Prefix": "myprefix/",
                "ObjectSizeGreaterThan": 500,
                "ObjectSizeLessThan": 64000
            }
        },
        "Status": "Enabled"
    }
    ]
}
```
Submit the JSON file as part of the `put-bucket-lifecycle-configuration` CLI command. To use this command, replace each *user input placeholder* with your own information.  

```
aws s3api put-bucket-lifecycle-configuration --region us-west-2 --profile default --bucket amzn-s3-demo-bucket--usw2-az1--x-s3 --lifecycle-configuration file://lc-policy.json --checksum-algorithm crc32c 
```

**Example – Equivalent XML lifecycle configuration**  

```
<LifecycleConfiguration>
    <Rule>
        <ID>Lifecycle expiration rule</ID>
        <Filter>
            <And>
                <Prefix>myprefix/</Prefix>
                <ObjectSizeGreaterThan>500</ObjectSizeGreaterThan>
                <ObjectSizeLessThan>64000</ObjectSizeLessThan>
            </And>
        </Filter>
        <Status>Enabled</Status>     
        <Expiration>
             <Days>7</Days>
        </Expiration>
    </Rule>
</LifecycleConfiguration>
```

## Using the AWS SDKs
<a name="directory-bucket-upload-sdks"></a>

------
#### [ SDK for Java ]

**Example**  

```
import software.amazon.awssdk.services.s3.model.PutBucketLifecycleConfigurationRequest;
import software.amazon.awssdk.services.s3.model.PutBucketLifecycleConfigurationResponse;
import software.amazon.awssdk.services.s3.model.ChecksumAlgorithm;
import software.amazon.awssdk.services.s3.model.BucketLifecycleConfiguration;
import software.amazon.awssdk.services.s3.model.LifecycleRule;
import software.amazon.awssdk.services.s3.model.LifecycleRuleFilter;
import software.amazon.awssdk.services.s3.model.LifecycleExpiration;
import software.amazon.awssdk.services.s3.model.LifecycleRuleAndOperator;
import software.amazon.awssdk.services.s3.model.GetBucketLifecycleConfigurationResponse;
import software.amazon.awssdk.services.s3.model.GetBucketLifecycleConfigurationRequest;
import software.amazon.awssdk.services.s3.model.DeleteBucketLifecycleRequest;
import software.amazon.awssdk.services.s3.model.DeleteBucketLifecycleResponse;
import software.amazon.awssdk.services.s3.model.AbortIncompleteMultipartUpload;

// PUT a Lifecycle policy
LifecycleRuleFilter objectExpirationFilter = LifecycleRuleFilter.builder()
        .and(LifecycleRuleAndOperator.builder()
                .prefix("dir1/")
                .objectSizeGreaterThan(3L)
                .objectSizeLessThan(20L)
                .build())
        .build();
LifecycleRuleFilter mpuExpirationFilter = LifecycleRuleFilter.builder().prefix("dir2/").build();

LifecycleRule objectExpirationRule = LifecycleRule.builder()
        .id("lc")
        .filter(objectExpirationFilter)
        .status("Enabled")
        .expiration(LifecycleExpiration.builder()
                .days(10)
                .build())
        .build();
LifecycleRule mpuExpirationRule = LifecycleRule.builder()
        .id("lc-mpu")
        .filter(mpuExpirationFilter)
        .status("Enabled")
        .abortIncompleteMultipartUpload(AbortIncompleteMultipartUpload.builder()
                .daysAfterInitiation(10)
                .build())
        .build();
        
PutBucketLifecycleConfigurationRequest putLifecycleRequest = PutBucketLifecycleConfigurationRequest.builder()
                .bucket("amzn-s3-demo-bucket--usw2-az1--x-s3")
                .checksumAlgorithm(ChecksumAlgorithm.CRC32)
                .lifecycleConfiguration(
                        BucketLifecycleConfiguration.builder()
                                .rules(objectExpirationRule, mpuExpirationRule)
                                .build()
                ).build();

PutBucketLifecycleConfigurationResponse resp = client.putBucketLifecycleConfiguration(putLifecycleRequest);

// GET the Lifecycle policy 
GetBucketLifecycleConfigurationResponse getResp = client.getBucketLifecycleConfiguration(GetBucketLifecycleConfigurationRequest.builder().bucket("amzn-s3-demo-bucket--usw2-az1--x-s3").build());

// DELETE the Lifecycle policy
DeleteBucketLifecycleResponse delResp = client.deleteBucketLifecycle(DeleteBucketLifecycleRequest.builder().bucket("amzn-s3-demo-bucket--usw2-az1--x-s3").build());
```

------
#### [ SDK for Go ]

**Example**  

```
package main

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/aws/aws-sdk-go-v2/service/s3/types"
)
// PUT a Lifecycle policy
func putBucketLifecycleConfiguration(client *s3.Client, bucketName string) error {
    lifecycleConfig := &s3.PutBucketLifecycleConfigurationInput{
        Bucket: aws.String(bucketName),
        LifecycleConfiguration: &types.BucketLifecycleConfiguration{
            Rules: []types.LifecycleRule{
                {
                    ID: aws.String("lc"),
                    Filter: &types.LifecycleRuleFilter{
                        And: &types.LifecycleRuleAndOperator{
                            Prefix:                aws.String("foo/"),
                            ObjectSizeGreaterThan: aws.Int64(1000000),
                            ObjectSizeLessThan:    aws.Int64(100000000),
                        },
                    },
                    Status: types.ExpirationStatusEnabled,
                    Expiration: &types.LifecycleExpiration{
                        Days: aws.Int32(1),
                    },
                },
                {
                    ID:     aws.String("abortmpu"),
                    Filter: &types.LifecycleRuleFilter{
                        Prefix: aws.String("bar/"), 
                    },
                    Status: types.ExpirationStatusEnabled,
                    AbortIncompleteMultipartUpload: &types.AbortIncompleteMultipartUpload{
                        DaysAfterInitiation: aws.Int32(5),
                    },
                },
            },
        },
    }
    _, err := client.PutBucketLifecycleConfiguration(context.Background(), lifecycleConfig)
    return err
}
// Get the Lifecycle policy
func getBucketLifecycleConfiguration(client *s3.Client, bucketName string) error {
    getLifecycleConfig := &s3.GetBucketLifecycleConfigurationInput{
        Bucket: aws.String(bucketName),
    }

    resp, err := client.GetBucketLifecycleConfiguration(context.Background(), getLifecycleConfig)
    if err != nil {
        return err
    }
    log.Printf("lifecycle rules: %v", resp.Rules)
    return nil
}
// Delete the Lifecycle policy
func deleteBucketLifecycleConfiguration(client *s3.Client, bucketName string) error {
    deleteLifecycleConfig := &s3.DeleteBucketLifecycleInput{
        Bucket: aws.String(bucketName),
    }
    _, err := client.DeleteBucketLifecycle(context.Background(), deleteLifecycleConfig)
    return err
}
func main() {
    cfg, err := config.LoadDefaultConfig(context.Background(), config.WithRegion("us-west-2")) // Specify your region here
    if err != nil {
        log.Fatalf("unable to load SDK config, %v", err)
    }
    s3Client := s3.NewFromConfig(cfg)
    bucketName := "amzn-s3-demo-bucket--usw2-az1--x-s3" 
    putBucketLifecycleConfiguration(s3Client, bucketName)
    getBucketLifecycleConfiguration(s3Client, bucketName)
    deleteBucketLifecycleConfiguration(s3Client, bucketName)
    getBucketLifecycleConfiguration(s3Client, bucketName)
}
```

------
#### [ SDK for .NET ]

**Example**  

```
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class LifecycleTest
    {
        private const string bucketName = "amzn-s3-demo-bucket--usw2-az1--x-s3";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 client;
        public static void Main()
        {
            client = new AmazonS3Client(bucketRegion);
            AddUpdateDeleteLifecycleConfigAsync().Wait();
        }

        private static async Task AddUpdateDeleteLifecycleConfigAsync()
        {
            try
            {
                var lifeCycleConfiguration = new LifecycleConfiguration()
                {
                    Rules = new List <LifecycleRule>
                        {
                            new LifecycleRule
                            {
                                 Id = "delete rule",
                                  Filter = new LifecycleFilter()
                                 {
                                     LifecycleFilterPredicate = new LifecyclePrefixPredicate()
                                     {
                                         Prefix = "projectdocs/"
                                     }
                                 },
                                 Status = LifecycleRuleStatus.Enabled,
                                 Expiration = new LifecycleRuleExpiration()
                                 {
                                       Days = 10
                                 }
                            }
                        }
                };

                // Add the configuration to the bucket. 
                await AddExampleLifecycleConfigAsync(client, lifeCycleConfiguration);

                // Retrieve an existing configuration. 
                lifeCycleConfiguration = await RetrieveLifecycleConfigAsync(client);

                // Add a new rule.
                lifeCycleConfiguration.Rules.Add(new LifecycleRule
                {
                    Id = "mpu abort rule",
                    Filter = new LifecycleFilter()
                    {
                        LifecycleFilterPredicate = new LifecyclePrefixPredicate()
                        {
                            Prefix = "YearlyDocuments/"
                        }
                    },
                    Expiration = new LifecycleRuleExpiration()
                    {
                        Days = 10
                    },
                    AbortIncompleteMultipartUpload = new LifecycleRuleAbortIncompleteMultipartUpload()
                    {
                        DaysAfterInitiation = 10
                    }
                });

                // Add the configuration to the bucket. 
                await AddExampleLifecycleConfigAsync(client, lifeCycleConfiguration);

                // Verify that there are now two rules.
                lifeCycleConfiguration = await RetrieveLifecycleConfigAsync(client);
                Console.WriteLine("Expected # of rules=2; found:{0}", lifeCycleConfiguration.Rules.Count);

                // Delete the configuration.
                await RemoveLifecycleConfigAsync(client);

                // Retrieve a nonexistent configuration.
                lifeCycleConfiguration = await RetrieveLifecycleConfigAsync(client);

            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered. Message:'{0}' when updating the lifecycle configuration.", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown error encountered on server. Message:'{0}' when updating the lifecycle configuration.", e.Message);
            }
        }

        static async Task AddExampleLifecycleConfigAsync(IAmazonS3 client, LifecycleConfiguration configuration)
        {

            PutLifecycleConfigurationRequest request = new PutLifecycleConfigurationRequest
            {
                BucketName = bucketName,
                Configuration = configuration
            };
            var response = await client.PutLifecycleConfigurationAsync(request);
        }

        static async Task <LifecycleConfiguration> RetrieveLifecycleConfigAsync(IAmazonS3 client)
        {
            GetLifecycleConfigurationRequest request = new GetLifecycleConfigurationRequest
            {
                BucketName = bucketName
            };
            var response = await client.GetLifecycleConfigurationAsync(request);
            var configuration = response.Configuration;
            return configuration;
        }

        static async Task RemoveLifecycleConfigAsync(IAmazonS3 client)
        {
            DeleteLifecycleConfigurationRequest request = new DeleteLifecycleConfigurationRequest
            {
                BucketName = bucketName
            };
            await client.DeleteLifecycleConfigurationAsync(request);
        }
    }
}
```

------
#### [ SDK for Python ]

**Example**  

```
import boto3

client = boto3.client("s3", region_name="us-west-2")
bucket_name = 'amzn-s3-demo-bucket--usw2-az1--x-s3'

client.put_bucket_lifecycle_configuration(
    Bucket=bucket_name,
    ChecksumAlgorithm='CRC32',
    LifecycleConfiguration={
        'Rules': [
            {
                'ID': 'lc',
                'Filter': {
                    'And': {
                        'Prefix': 'foo/',
                        'ObjectSizeGreaterThan': 1000000,
                        'ObjectSizeLessThan': 100000000,
                    }
                },
                'Status': 'Enabled',
                'Expiration': {
                    'Days': 1
                }
            },
            {
                'ID': 'abortmpu',
                'Filter': {
                    'Prefix': 'bar/'
                },
                'Status': 'Enabled',
                'AbortIncompleteMultipartUpload': {
                    'DaysAfterInitiation': 5
                }
            }
        ]
    }
)

result = client.get_bucket_lifecycle_configuration(
    Bucket=bucket_name
)

client.delete_bucket_lifecycle(
    Bucket=bucket_name
)
```

------

# Troubleshooting S3 Lifecycle issues for directory buckets
<a name="directory-buckets-lifecycle-troubleshooting"></a>

**Topics**
+ [I set up my lifecycle configuration but objects in my directory bucket are not expiring](#troubleshoot-directory-bucket-lifecycle-1)
+ [How do I monitor the actions taken by my lifecycle rules?](#troubleshoot-directory-bucket-lifecycle-2)

## I set up my lifecycle configuration but objects in my directory bucket are not expiring
<a name="troubleshoot-directory-bucket-lifecycle-1"></a>

S3 Lifecycle for directory buckets uses public API operations to delete objects in S3 Express One Zone. To use object-level public API operations, you must grant permission for `CreateSession` and allow S3 Lifecycle to delete your objects. If you have an active policy that denies deletes, that policy prevents S3 Lifecycle from deleting objects on your behalf.

It's important to configure your bucket policies correctly to ensure that the objects that you want to delete are eligible for expiration. You can check your AWS CloudTrail logs for `AccessDenied` errors on `CreateSession` API invocations to verify whether access was denied. Checking your CloudTrail logs can help you troubleshoot access issues and identify the root cause of access denied errors. You can then fix incorrect access controls by updating the relevant policies.
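For example, after downloading and parsing CloudTrail event records (for instance, with `json.load`), you can filter for the denied calls. The following is a minimal sketch that assumes the records are already parsed into dictionaries; the `eventName` and `errorCode` fields follow the CloudTrail event record format:

```python
# Illustrative helper: filter parsed CloudTrail event records down to
# CreateSession calls that were denied.
def denied_create_session_events(events: list[dict]) -> list[dict]:
    return [
        e for e in events
        if e.get("eventName") == "CreateSession"
        and e.get("errorCode") == "AccessDenied"
    ]
```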

If you confirm that your bucket policies are set correctly and you are still experiencing issues, we recommend that you review the lifecycle rules to ensure that they are applied to the right subset of objects. 

## How do I monitor the actions taken by my lifecycle rules?
<a name="troubleshoot-directory-bucket-lifecycle-2"></a>

 You can use AWS CloudTrail data event logs to monitor actions taken by S3 Lifecycle in directory buckets. For more information, see [CloudTrail log file examples](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-log-files.html).

# Using Batch Operations with directory buckets
<a name="directory-buckets-objects-Batch-Ops"></a>

You can use Amazon S3 Batch Operations to perform operations on objects stored in S3 buckets. To learn more about S3 Batch Operations, see [Performing large-scale batch operations on Amazon S3 objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops.html).

The following topics discuss performing batch operations on objects stored in the S3 Express One Zone storage class in directory buckets.

**Topics**
+ [Using Batch Operations with directory buckets](#UsingBOPsDirectoryBuckets)
+ [Key differences](#UsingBOPsDirectoryBucketsKeyDiffs)

## Using Batch Operations with directory buckets
<a name="UsingBOPsDirectoryBuckets"></a>

You can perform the **Copy** and **Invoke AWS Lambda function** operations on objects that are stored in directory buckets. With **Copy**, you can copy objects between buckets of the same type (for example, from a directory bucket to a directory bucket). You can also copy between general purpose buckets and directory buckets. With **Invoke AWS Lambda function**, you can use a Lambda function to perform actions on objects in your directory bucket with code that you define.

**Copying objects**  
You can copy between the same bucket type or between directory buckets and general purpose buckets. When you copy to a directory bucket, you must use the correct Amazon Resource Name (ARN) format for this bucket type. The ARN format for a directory bucket is `arn:aws:s3express:region:account-id:bucket/bucket-base-name--zone-id--x-s3`.

**Note**  
Copying objects across different AWS Regions isn't supported when the source or destination bucket is in an AWS Local Zone. The source and destination buckets must have the same parent AWS Region. The source and destination buckets can be different bucket location types (Availability Zone or Local Zone).
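When you build directory bucket ARNs in scripts, note that the bucket name already carries the zone and `--x-s3` suffixes. The following is a hypothetical Python helper, not part of any AWS SDK; the region, account ID, and bucket name in the usage example are placeholders:

```python
# Hypothetical helper (not part of any AWS SDK): assemble a directory
# bucket ARN. The bucket name must already include the zone and --x-s3
# suffixes, for example "amzn-s3-demo-bucket--usw2-az1--x-s3".
def directory_bucket_arn(region: str, account_id: str, bucket_name: str) -> str:
    return f"arn:aws:s3express:{region}:{account_id}:bucket/{bucket_name}"
```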

You can also populate your directory bucket with data by using the **Import** action in the S3 console. **Import** is a streamlined method for creating Batch Operations jobs to copy objects from general purpose buckets to directory buckets. For **Import** copy jobs from general purpose buckets to directory buckets, S3 automatically generates a manifest. For more information, see [Importing objects into a directory bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-import-job.html) and [Specifying a manifest](https://docs.aws.amazon.com/AmazonS3/latest/userguide/specify-batchjob-manifest.html).

**Invoking Lambda functions (`LambdaInvoke`)**  
There are special requirements for using Batch Operations to invoke Lambda functions that act on directory buckets. For example, you must structure your Lambda request by using a v2 JSON invocation schema and specify `InvocationSchemaVersion` 2.0 when you create the job. For more information, see [Invoke AWS Lambda function](https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-invoke-lambda.html).

## Key differences
<a name="UsingBOPsDirectoryBucketsKeyDiffs"></a>

The following is a list of key differences when you're using Batch Operations to perform bulk operations on objects that are stored in directory buckets with the S3 Express One Zone storage class:
+ For directory buckets, SSE-S3 and server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) are supported. If you make a `CopyObject` request that specifies to use server-side encryption with customer-provided keys (SSE-C) on a directory bucket (source or destination), the response returns an HTTP `400 (Bad Request)` error. 

  We recommend that the bucket's default encryption uses the desired encryption configuration and you don't override the bucket default encryption in your `CreateSession` requests or `PUT` object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information about the encryption overriding behaviors in directory buckets and how to encrypt new object copies in a directory bucket with SSE-KMS, see [Specifying server-side encryption with AWS KMS for new object uploads](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-specifying-kms-encryption.html).

  S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through [the Copy operation in Batch Operations](#directory-buckets-objects-Batch-Ops). In this case, Amazon S3 makes a call to AWS KMS every time a copy request is made for a KMS-encrypted object. For more information about using SSE-KMS on directory buckets, see [Setting and monitoring default encryption for directory buckets](s3-express-bucket-encryption.md) and [Using server-side encryption with AWS KMS keys (SSE-KMS) in directory buckets](s3-express-UsingKMSEncryption.md).
+ Objects in directory buckets can't be tagged. You can only specify an empty tag set. By default, Batch Operations copies tags. If you copy an object that has tags between general purpose buckets and directory buckets, you receive a `501 (Not Implemented)` response.
+ S3 Express One Zone offers you the option to choose the checksum algorithm that is used to validate your data during uploads or downloads. You can select one of the following Secure Hash Algorithm (SHA) or Cyclic Redundancy Check (CRC) data-integrity check algorithms: CRC32, CRC32C, SHA-1, and SHA-256. MD5-based checksums aren't supported with the S3 Express One Zone storage class.
+ By default, all Amazon S3 buckets set the S3 Object Ownership setting to bucket owner enforced and access control lists (ACLs) are disabled. For directory buckets, this setting can't be modified. You can copy an object from general purpose buckets to directory buckets. However, you can't overwrite the default ACL when you copy to or from a directory bucket. 
+ Regardless of how you specify your manifest, the list itself must be stored in a general purpose bucket. Batch Operations can't import existing manifests from (or save generated manifests to) directory buckets. However, objects described within the manifest can be stored in directory buckets. 
+ Batch Operations can't specify a directory bucket as a location in an S3 Inventory report. Inventory reports don't support directory buckets. You can create a manifest file for objects within a directory bucket by using the `ListObjectsV2` API operation to list the objects. You can then insert the list in a CSV file.
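Because inventory reports aren't available for directory buckets, a manifest can be assembled from a `ListObjectsV2` listing as described above. The following sketch (the helper name is illustrative, not part of any SDK) shapes a list of object keys into the `bucket,key` CSV rows that a Batch Operations manifest uses:

```python
import csv
import io

def build_csv_manifest(bucket_name, keys):
    """Shape object keys into Batch Operations CSV manifest rows (bucket,key)."""
    buf = io.StringIO()
    writer = csv.writer(buf)  # handles quoting if a key contains a comma
    for key in keys:
        writer.writerow([bucket_name, key])
    return buf.getvalue()
```

In practice, the keys would come from paginated `ListObjectsV2` calls against the directory bucket, and the resulting CSV file must then be uploaded to a general purpose bucket to serve as the manifest.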

### Granting access
<a name="BOPsAccess"></a>

 To perform copy jobs, you must have the following permissions:
+ To copy objects from one directory bucket to another directory bucket, you must have the `s3express:CreateSession` permission.
+ To copy objects from directory buckets to general purpose buckets, you must have the `s3express:CreateSession` permission and the `s3:PutObject` permission to write the object copy to the destination bucket. 
+ To copy objects from general purpose buckets to directory buckets, you must have the `s3express:CreateSession` permission and the `s3:GetObject` permission to read the source object that is being copied. 

   For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) in the *Amazon Simple Storage Service API Reference*.
+ To invoke a Lambda function, you must grant permissions to your resource based on your Lambda function. To determine which permissions are required, check the corresponding API operations. 

# Appending data to objects in directory buckets
<a name="directory-buckets-objects-append"></a>

You can add data to the end of existing objects stored in the S3 Express One Zone storage class in directory buckets. We recommend that you use the ability to append data to an object if the data is written continuously over a period of time or if you need to read the object while you are writing to it. Appending data to objects is common for use cases such as adding new log entries to log files or adding new video segments to video files as they are transcoded and then streamed. By appending data to objects, you can simplify applications that previously combined data in local storage before copying the final object to Amazon S3.

There is no minimum size requirement for the data that you can append to an object. However, the maximum size of the data that you can append to an object in a single request is 5 GB. This is the same limit as the largest request size when you upload data by using any Amazon S3 API operation.

With each successful append operation, you create a part of the object, and each object can have up to 10,000 parts. This means that you can append data to an object up to 10,000 times. If an object is created by using a multipart upload, each uploaded part counts toward the total maximum of 10,000 parts. For example, you can append up to 9,000 times to an object created by a multipart upload comprising 1,000 parts.
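The remaining append budget for an object is simple arithmetic over this part limit. A minimal sketch (the constant and helper name are illustrative, not part of any SDK):

```python
MAX_PARTS = 10_000  # hard limit on parts per object

def remaining_appends(existing_parts):
    """Number of append operations still available for an object that
    already consists of `existing_parts` parts (uploaded or appended)."""
    if not 0 <= existing_parts <= MAX_PARTS:
        raise ValueError("part count must be between 0 and 10,000")
    return MAX_PARTS - existing_parts
```

For the example above, an object created by a 1,000-part multipart upload leaves `remaining_appends(1000) == 9000` appends before the `TooManyParts` error.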

**Note**  
 If you hit the limit of parts, you will receive a [TooManyParts](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#API_PutObject_Errors) error. You can use the `CopyObject` API to reset the count.

 If you want to upload parts to an object in parallel and you don’t need to read the parts while the parts are being uploaded, we recommend that you use Amazon S3 multipart upload. For more information, see [Using multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-using-multipart-upload.html).

Appending data to objects is only supported for objects in directory buckets that are stored in the S3 Express One Zone storage class. For more information about S3 Express One Zone, see [Getting started with S3 Express One Zone](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-getting-started.html).

To get started appending data to objects in your directory buckets, you can use the AWS SDKs, the AWS CLI, and the `PutObject` API operation. When you make a `PutObject` request, you set the `x-amz-write-offset-bytes` header to the current size of the object that you are appending to. To use the `PutObject` API operation, you must use the `CreateSession` API to establish temporary security credentials to access the objects in your directory buckets. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html) and [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html) in the *Amazon S3 API Reference*. 

Each successful append operation is billed as a `PutObject` request. To learn more about pricing, see [https://aws.amazon.com/s3/pricing/](https://aws.amazon.com/s3/pricing/). 

**Note**  
Starting with the 1.12 release, Mountpoint for Amazon S3 supports appending data to objects stored in S3 Express One Zone. To get started, you must opt in by setting the `--incremental-upload` flag. For more information about Mountpoint, see [Working with Mountpoint](https://docs.aws.amazon.com/AmazonS3/latest/userguide/mountpoint.html). 

 If you use a CRC (Cyclic Redundancy Check) algorithm while uploading the appended data, you can retrieve full-object CRC-based checksums by using a `HeadObject` or `GetObject` request. If you use the SHA-1 or SHA-256 algorithm while uploading your appended data, you can retrieve a checksum of the appended parts and verify their integrity by using the SHA checksums returned in prior `PutObject` responses. For more information, see [Data protection and encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-data-protection.html). 
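The reason CRC-based algorithms support full-object checksums across appends is that a CRC can be carried forward incrementally: folding the appended bytes into the running CRC of the existing data yields the CRC of the concatenation. A local illustration with Python's `zlib` (an analogy to the service behavior, not an S3 API call):

```python
import zlib

original = b"2024-11-05 first log entry\n"
appended = b"2024-11-05 second log entry\n"

# CRC32 of the whole object, computed in one pass.
full_crc = zlib.crc32(original + appended)

# The same value, computed incrementally: start from the CRC of the
# existing data, then fold in only the appended bytes.
running_crc = zlib.crc32(original)
running_crc = zlib.crc32(appended, running_crc)

assert running_crc == full_crc
```

SHA-1 and SHA-256 lack this composition property, which is why those algorithms only give you per-part checksums for appended data.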

## Appending data to your objects by using the AWS CLI, AWS SDKs, and the REST API
<a name="directory-bucket-append"></a>

You can append data to your objects by using the AWS Command Line Interface (AWS CLI), the AWS SDKs, and the REST API.

### Using the AWS CLI
<a name="set-append--cli"></a>

The following `put-object` example command shows how you can use the AWS CLI to append data to an object. To run this command, replace the *user input placeholders* with your own information.

```
aws s3api put-object --bucket amzn-s3-demo-bucket--azid--x-s3 --key sampleinput/file001.bin --body bucket-seed/file001.bin --write-offset-bytes size-of-sampleinput/file001.bin
```

### Using the AWS SDKs
<a name="directory-bucket-append-sdks"></a>

------
#### [ SDK for Java ]

You can use the AWS SDK for Java to append data to your objects. 

```
// The write offset must equal the current size of the object that you're appending to.
var putObjectRequest = PutObjectRequest.builder()
        .bucket(bucketName)
        .key(key)
        .writeOffsetBytes(existingObjectSizeInBytes)
        .build();
var response = s3Client.putObject(putObjectRequest,
        RequestBody.fromString("appended data"));
```

------
#### [ SDK for Python ]

```
s3.put_object(Bucket='amzn-s3-demo-bucket--use2-az2--x-s3', Key='2024-11-05-sdk-test', Body=b'123456789', WriteOffsetBytes=9)
```

------

### Using the REST API
<a name="directory-bucket-append-api"></a>

 You can send REST requests to append data to an object. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#API_PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#API_PutObject). 

# Renaming objects in directory buckets
<a name="directory-buckets-objects-rename"></a>

Using the `RenameObject` operation, you can atomically rename an existing object in a directory bucket that uses the S3 Express One Zone storage class, without any data movement. You can rename an object by specifying the existing object’s name as the source and the new name of the object as the destination within the same directory bucket. The `RenameObject` API operation will not succeed on objects that end with the slash (`/`) delimiter character. For more information, see [Naming Amazon S3 objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-keys.html).

The `RenameObject` operation is typically completed in milliseconds regardless of the size of the object. This capability accelerates applications like log file management, media processing, and data analytics. Additionally, `RenameObject` preserves all object metadata properties, including the storage class, encryption type, creation date, last modified date, and checksum properties.

**Note**  
`RenameObject` is only supported for objects stored in the S3 Express One Zone storage class.

 To grant access to the `RenameObject` operation, we recommend that you use the `CreateSession` operation for session-based authorization. Specifically, you grant the `s3express:CreateSession` permission to the directory bucket in a bucket policy or an identity-based policy. Then, you make the `CreateSession` API call on the directory bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another `CreateSession` API call to generate a new session token. The AWS CLI and AWS SDKs will create and manage your session, including refreshing the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html) in the *Amazon S3 API Reference*. To learn more about Zonal endpoint API operations, see [Authorizing Zonal endpoint API operations with `CreateSession`](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-create-session.html). 

 If you don't want to overwrite an existing object, you can add the `If-None-Match` conditional header with the value `*` in the `RenameObject` request. Amazon S3 returns a `412 Precondition Failed` error if an object with that name already exists. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_RenameObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_RenameObject.html) in the *Amazon S3 API Reference*. 

 `RenameObject` is a Zonal endpoint API operation (object-level or data plane operation) that is logged to AWS CloudTrail. You can use CloudTrail to gather information on the `RenameObject` operation performed on your objects in directory buckets. For more information, see [Logging with AWS CloudTrail for directory buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-one-zone-logging.html) and [CloudTrail log file examples for directory buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-log-files.html). 

S3 Express One Zone is the only storage class that supports `RenameObject`, which is priced the same as `PUT`, `COPY`, `POST`, and `LIST` requests (per 1,000 requests) in S3 Express One Zone. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). 

## Renaming an object
<a name="directory-bucket-rename"></a>

To rename an object in your directory bucket, you can use the Amazon S3 console, the AWS CLI, the AWS SDKs, the REST API, or Mountpoint for Amazon S3 (version 1.19.0 or higher).

### Using the S3 console
<a name="set-rename--console"></a>

**To rename an object in a directory bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation pane, choose **Buckets**, and then choose the **Directory buckets** tab. Navigate to the Amazon S3 directory bucket that contains the object that you want to rename.

1. Select the check box for the object that you want to rename.

1. On the **Actions** menu, choose **Rename object**.

1. In the **New object name** box, enter the new name for the object.
**Note**  
If you specify the same object name as an existing object, the operation will fail and Amazon S3 returns a `412 Precondition Failed` error. The object key name length can't exceed 1,024 bytes. Prefixes included in the object name count toward the total length. 

1. Choose **Rename object**. Amazon S3 renames your object. 
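The 1,024-byte key length limit noted in the rename steps applies to the UTF-8 encoding of the full key, prefix included, so multi-byte characters count more than once. A quick local check (the helper name is illustrative, not part of any SDK):

```python
MAX_KEY_BYTES = 1024  # limit on the UTF-8 encoded key, prefixes included

def is_valid_key_length(key):
    """Check that a full object key fits within the 1,024-byte limit."""
    return len(key.encode("utf-8")) <= MAX_KEY_BYTES
```

For example, a key of 513 `é` characters fails the check even though it is only 513 characters long, because each `é` encodes to 2 bytes.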

### Using the AWS CLI
<a name="set-rename--cli"></a>

The following `rename-object` examples show how you can use the AWS CLI to rename an object. To run these commands, replace the *user input placeholders* with your own information.

The following example shows how to rename an object with a conditional check on the source object's ETag. 

```
aws s3api rename-object \
    --bucket amzn-s3-demo-bucket--usw2-az1--x-s3 \
    --key new-file.txt \
    --rename-source original-file.txt \
    --source-if-match "\"a1b7c3d2e5f6\""
```

This command does the following:
+ Renames an object from *original-file.txt* to *new-file.txt* in the *amzn-s3-demo-bucket--usw2-az1--x-s3* directory bucket.
+ Only performs the rename if the source object's ETag matches "*a1b7c3d2e5f6*".

If the ETag doesn't match, the operation will fail with a `412 Precondition Failed` error. 

The following example shows how to rename an object with a conditional check on the new specified object name.

```
aws s3api rename-object \
    --bucket amzn-s3-demo-bucket--usw2-az1--x-s3 \
    --key new-file.txt \
    --rename-source amzn-s3-demo-bucket--usw2-az1--x-s3/original-file.txt \
    --destination-if-none-match "\"e5f3g7h8i9j0\""
```

This command does the following:
+ Renames an object from *original-file.txt* to *new-file.txt* in the *amzn-s3-demo-bucket--usw2-az1--x-s3* directory bucket.
+ Only performs the rename if no object with the new name and an ETag matching "*e5f3g7h8i9j0*" already exists.

If an object already exists with the new specified name and the matching ETag, the operation will fail with a `412 Precondition Failed` error. 

### Using the AWS SDKs
<a name="directory-bucket-rename-sdks"></a>

------
#### [ SDK for Java ]

You can use the AWS SDK for Java to rename your objects. To use these examples, replace the *user input placeholders* with your own information.

The following example demonstrates how to create a `RenameObjectRequest` by using the AWS SDK for Java.

```
String key = "key";
String newKey = "new-key";
String expectedETag = "e5f3g7h8i9j0";
RenameObjectRequest renameRequest = RenameObjectRequest.builder()
    .bucket("amzn-s3-demo-bucket--usw2-az1--x-s3")
    .key(newKey)
    .renameSource(key)
    .destinationIfMatch(expectedETag)
    .build();
```

This code does the following:
+ Creates a request to rename an object from "*key*" to "*new-key*" in the *amzn-s3-demo-bucket--usw2-az1--x-s3* directory bucket.
+ Includes a condition that the rename will only occur if the object's ETag matches "*e5f3g7h8i9j0*". 
+ If the ETag doesn't match or the object doesn't exist, the operation will fail.

The following example shows how to create a `RenameObjectRequest` with a none-match condition using the AWS SDK for Java.

```
String key = "key";
String newKey = "new-key";
String noneMatchETag = "e5f3g7h8i9j0";
RenameObjectRequest renameRequest = RenameObjectRequest.builder()
    .bucket("amzn-s3-demo-bucket--usw2-az1--x-s3")
    .key(newKey)
    .renameSource(key)
    .destinationIfNoneMatch(noneMatchETag)
    .build();
```

This code does the following:
+ Creates a request to rename an object from "*key*" to "*new-key*" in the *amzn-s3-demo-bucket--usw2-az1--x-s3* directory bucket.
+ Includes a condition using `.destinationIfNoneMatch(noneMatchETag)` that ensures the rename will only occur if the destination object's ETag doesn't match "*e5f3g7h8i9j0*".

The operation will fail with a `412 Precondition Failed` error if an object exists with the new specified name and has the specified ETag. 

------
#### [ SDK for Python ]

You can use the SDK for Python to rename your objects. To use these examples, replace the *user input placeholders* with your own information.

The following example demonstrates how to rename an object using the AWS SDK for Python (Boto3).

```
def basic_rename(bucket, source_key, destination_key):
    try:
        s3.rename_object(
            Bucket=bucket,
            Key=destination_key,
            RenameSource=source_key
        )
        print(f"Successfully renamed {source_key} to {destination_key}")
    except ClientError as e:
        print(f"Error renaming object: {e}")
```

This code does the following:
+ Renames an object from *source\_key* to *destination\_key* in the *amzn-s3-demo-bucket--usw2-az1--x-s3* directory bucket.
+ Prints a success message if the renaming of your object is successful or prints an error message if it fails.

The following example demonstrates how to rename an object with the `SourceIfMatch` and `DestinationIfNoneMatch` conditions by using the AWS SDK for Python (Boto3).

```
def rename_with_conditions(bucket, source_key, destination_key, source_etag, dest_etag):
    try:
        s3.rename_object(
            Bucket=bucket,
            Key=destination_key,
            RenameSource=f"{bucket}/{source_key}",
            SourceIfMatch=source_etag,
            DestinationIfNoneMatch=dest_etag
        )
        print(f"Successfully renamed {source_key} to {destination_key} with conditions")
    except ClientError as e:
        print(f"Error renaming object: {e}")
```

This code does the following:
+ Performs a conditional rename operation and applies two conditions, `SourceIfMatch` and `DestinationIfNoneMatch`. The combination of these conditions ensures that the object hasn't been modified and that an object doesn't already exist with the new specified name. 
+ Renames an object from *source\_key* to *destination\_key* in the *amzn-s3-demo-bucket--usw2-az1--x-s3* directory bucket.
+ Prints a success message if the renaming of your object is successful, or prints an error message if it fails or if conditions aren't met.

------
#### [ SDK for Rust ]

You can use the SDK for Rust to rename your objects. To use these examples, replace the *user input placeholders* with your own information.

The following example demonstrates how to rename an object in the *amzn-s3-demo-bucket--usw2-az1--x-s3* directory bucket using the SDK for Rust.

```
async fn basic_rename_example(client: &Client) -> Result<(), Box<dyn Error>> {
    let response = client
        .rename_object()
        .bucket("amzn-s3-demo-bucket--usw2-az1--x-s3")
        .key("new-name.txt")  // New name/path for the object
        .rename_source("old-name.txt")  // Original object name/path
        .send()
        .await?;
    Ok(())
}
```

This code does the following:
+ Creates a request to rename an object from "*old-name.txt*" to "*new-name.txt*" in the *amzn-s3-demo-bucket--usw2-az1--x-s3* directory bucket.
+ Returns a `Result` type to handle potential errors. 

------

### Using the REST API
<a name="directory-bucket-rename-api"></a>

 You can send REST requests to rename an object. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_RenameObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_RenameObject.html) in the *Amazon S3 API Reference*. 

### Using Mountpoint for Amazon S3
<a name="directory-bucket-rename-mountpoint"></a>

 Starting with version 1.19.0, Mountpoint for Amazon S3 supports renaming objects in S3 Express One Zone. For more information about Mountpoint, see [Working with Mountpoint](https://docs.aws.amazon.com/AmazonS3/latest/userguide/mountpoint.html).

# Uploading objects to a directory bucket
<a name="directory-buckets-objects-upload"></a>

After you create an Amazon S3 directory bucket, you can upload objects to it. The following examples show how to upload an object to a directory bucket by using the S3 console and the AWS SDKs. For information about bulk object upload operations with S3 Express One Zone, see [Object management](directory-bucket-high-performance.md#s3-express-features-object-management). 

## Using the S3 console
<a name="directory-bucket-upload-console"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Directory buckets**.

1. Choose the name of the bucket that you want to upload your folders or files to.

1. In the **Objects** list, choose **Upload**.

1. On the **Upload** page, do one of the following: 
   + Drag and drop files and folders to the dotted upload area.
   + Choose **Add files** or **Add folder**, choose the files or folders to upload, and then choose **Open** or **Upload**.

1. Under **Checksums**, choose the **Checksum function** that you want to use. 

   (Optional) If you're uploading a single object that's less than 16 MB in size, you can also specify a precalculated checksum value. When you provide a precalculated value, Amazon S3 compares it with the value that it calculates by using the selected checksum function. If the values don't match, the upload won't start. 

1. The options in the **Permissions** and **Properties** sections are automatically set to default settings and can't be modified. Block Public Access is automatically enabled, and S3 Versioning and S3 Object Lock can't be enabled for directory buckets. 

   (Optional) If you want to add metadata in key-value pairs to your objects, expand the **Properties** section, and then in the **Metadata** section, choose **Add metadata**.

1. To upload the listed files and folders, choose **Upload**.

   Amazon S3 uploads your objects and folders. When the upload is finished, you see a success message on the **Upload: status** page.
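The optional precalculated checksum from the console steps can be computed locally before the upload begins. A sketch using SHA-256 and the base64 encoding that Amazon S3 expects for checksum values (the helper name is illustrative):

```python
import base64
import hashlib

def precalculated_sha256(data):
    """Return the base64-encoded SHA-256 checksum of `data`, in the
    form that Amazon S3 expects for a precalculated checksum value."""
    return base64.b64encode(hashlib.sha256(data).digest()).decode("ascii")
```

If the value you paste into the console doesn't match what Amazon S3 computes with the selected checksum function, the upload won't start.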

## Using the AWS SDKs
<a name="directory-bucket-upload-sdks"></a>

------
#### [ SDK for Java 2.x ]

**Example**  

```
public static void putObject(S3Client s3Client, String bucketName, String objectKey, Path filePath) {
    // Use a file path to avoid loading the whole file into memory.
    try {
        PutObjectRequest putObj = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .build();
        s3Client.putObject(putObj, filePath);
        System.out.println("Successfully placed " + objectKey + " into bucket " + bucketName);
    } catch (S3Exception e) {
        System.err.println(e.getMessage());
        System.exit(1);
    }
}
```

------
#### [ SDK for Python ]

**Example**  

```
import boto3
import botocore
from botocore.exceptions import ClientError
    
    
def put_object(s3_client, bucket_name, key_name, object_bytes):
    """  
    Upload data to a directory bucket.
    :param s3_client: The boto3 S3 client
    :param bucket_name: The bucket that will contain the object
    :param key_name: The key of the object to be uploaded
    :param object_bytes: The data to upload
    """
    try:
        response = s3_client.put_object(Bucket=bucket_name, Key=key_name,
                             Body=object_bytes)
        print(f"Upload object '{key_name}' to bucket '{bucket_name}'.") 
        return response
    except ClientError:    
        print(f"Couldn't upload object '{key_name}' to bucket '{bucket_name}'.")
        raise

def main():
    # Share the client session with functions and objects to benefit from S3 Express One Zone auth key
    s3_client = boto3.client('s3')
    # Directory bucket name must end with --zone-id--x-s3
    resp = put_object(s3_client, 'doc-bucket-example--use1-az5--x-s3', 'sample.txt', b'Hello, World!')
    print(resp)

if __name__ == "__main__":
    main()
```

------

## Using the AWS CLI
<a name="directory-upload-object-cli"></a>

The following `put-object` example command shows how you can use the AWS CLI to upload an object to Amazon S3. To run this command, replace the *user input placeholders* with your own information.

```
aws s3api put-object --bucket bucket-base-name--zone-id--x-s3 --key sampleinput/file001.bin --body bucket-seed/file001.bin
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-object.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-object.html) in the *AWS CLI Command Reference*.

**Topics**
+ [Using multipart uploads with directory buckets](s3-express-using-multipart-upload.md)

# Using multipart uploads with directory buckets
<a name="s3-express-using-multipart-upload"></a>

You can use the multipart upload process to upload a single object as a set of parts. Each part is a contiguous portion of the object's data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation.

Using multipart upload provides the following advantages:
+ **Improved throughput** – You can upload parts in parallel to improve throughput. 
+ **Quick recovery from any network issues** – Smaller part sizes minimize the impact of restarting a failed upload because of a network error.
+ **Pause and resume object uploads** – You can upload object parts over time. After you initiate a multipart upload, there is no expiration date. You must explicitly complete or abort the multipart upload.
+ **Begin an upload before you know the final object size** – You can upload an object as you are creating it. 

We recommend that you use multipart uploads in the following ways:
+ If you're uploading large objects over a stable high-bandwidth network, use multipart uploads to maximize the use of your available bandwidth by uploading object parts in parallel for multi-threaded performance.
+ If you're uploading over a spotty network, use multipart uploads to increase resiliency to network errors by avoiding upload restarts. When using multipart uploads, you need to retry uploading only the parts that are interrupted during the upload. You don't need to restart uploading your object from the beginning.

When you're using multipart uploads to upload objects to the Amazon S3 Express One Zone storage class in directory buckets, the multipart upload process is similar to the process of using multipart upload to upload objects to general purpose buckets. However, there are some notable differences. 

For more information about using multipart uploads to upload objects to S3 Express One Zone, see the following topics.

**Topics**
+ [The multipart upload process](#s3-express-mpu-process)
+ [Checksums with multipart upload operations](#s3-express-mpuchecksums)
+ [Concurrent multipart upload operations](#s3-express-distributedmpupload)
+ [Multipart uploads and pricing](#s3-express-mpuploadpricing)
+ [Multipart upload API operations and permissions](#s3-express-mpuAndPermissions)
+ [Examples](#directory-buckets-multipart-upload-examples)

## The multipart upload process
<a name="s3-express-mpu-process"></a>

A multipart upload is a three-step process: 
+ You initiate the upload.
+ You upload the object parts.
+ After you have uploaded all of the parts, you complete the multipart upload.



Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object as you would any other object in your bucket. 

**Multipart upload initiation**  
When you send a request to initiate a multipart upload, Amazon S3 returns a response with an upload ID, which is a unique identifier for your multipart upload. You must include this upload ID whenever you upload parts, list the parts, complete an upload, or abort an upload. 

**Parts upload**  
When uploading a part, in addition to the upload ID, you must specify a part number. When you're using a multipart upload with S3 Express One Zone, the part numbers must be consecutive. If you try to complete a multipart upload request with nonconsecutive part numbers, an `HTTP 400 Bad Request` (Invalid Part Order) error is generated. 

A part number uniquely identifies a part and its position in the object that you are uploading. If you upload a new part by using the same part number as a previously uploaded part, the previously uploaded part is overwritten. 
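Because S3 Express One Zone rejects nonconsecutive part numbers, a local sanity check before completing the upload can catch the `400 Bad Request` case early. A minimal sketch (the helper is illustrative, not an SDK call):

```python
def parts_are_consecutive(part_numbers):
    """Return True if the part numbers form a consecutive run with no
    gaps, as S3 Express One Zone requires for multipart uploads."""
    nums = sorted(part_numbers)
    return all(b - a == 1 for a, b in zip(nums, nums[1:]))
```

For example, parts numbered 1, 2, 3 pass the check, while 1, 3, 4 would fail to complete with an Invalid Part Order error.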

Whenever you upload a part, Amazon S3 returns an entity tag (ETag) header in its response. For each part upload, you must record the part number and the ETag value. The ETag values for all object part uploads will remain the same, but each part will be assigned a different part number. You must include these values in the subsequent request to complete the multipart upload.

Amazon S3 automatically encrypts all new objects that are uploaded to an S3 bucket. When doing a multipart upload, if you don't specify encryption information in your request, the encryption setting of the uploaded parts is set to the default encryption configuration of the destination bucket. The default encryption configuration of an Amazon S3 bucket is always enabled and is at a minimum set to server-side encryption with Amazon S3 managed keys (SSE-S3). For directory buckets, SSE-S3 and server-side encryption with AWS KMS keys (SSE-KMS) are supported. For more information, see [Data protection and encryption](s3-express-data-protection.md).

**Multipart upload completion**  
When you complete a multipart upload, Amazon S3 creates the object by concatenating the parts in ascending order based on the part number. After a successful *complete* request, the parts no longer exist. 

Your *complete multipart upload* request must include the upload ID and a list of both part numbers and their corresponding ETag values. The Amazon S3 response includes an ETag that uniquely identifies the combined object data. This ETag is not an MD5 hash of the object data. 

**Multipart upload listings**  
You can list the parts of a specific multipart upload or all in-progress multipart uploads. The list parts operation returns the parts information that you have uploaded for a specific multipart upload. For each list parts request, Amazon S3 returns the parts information for the specified multipart upload, up to a maximum of 1,000 parts. If there are more than 1,000 parts in the multipart upload, you must use pagination to retrieve all the parts. 

The returned list of parts doesn't include parts that haven't finished uploading. Using the *list multipart uploads* operation, you can obtain a list of multipart uploads that are in progress.

An in-progress multipart upload is an upload that you have initiated, but have not yet completed or aborted. Each request returns at most 1,000 multipart uploads. If there are more than 1,000 multipart uploads in progress, you must send additional requests to retrieve the remaining multipart uploads. Use the returned listing only for verification. Do not use the result of this listing when sending a *complete multipart upload* request. Instead, maintain your own list of the part numbers that you specified when uploading parts and the corresponding ETag values that Amazon S3 returns.

For more information about multipart upload listings, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html) in the *Amazon Simple Storage Service API Reference*. 

## Checksums with multipart upload operations
<a name="s3-express-mpuchecksums"></a>

When you upload an object to a directory bucket, you can specify a checksum algorithm to check object integrity. MD5 is not supported for directory buckets. You can specify one of the following Secure Hash Algorithms (SHA) or Cyclic Redundancy Check (CRC) data-integrity check algorithms:
+ CRC32 
+ CRC32C 
+ SHA-1
+ SHA-256

You can use the Amazon S3 REST API or the AWS SDKs to retrieve the checksum value for individual parts by using `GetObject` or `HeadObject`. If you want to retrieve the checksum values for individual parts of multipart uploads that are still in progress, you can use `ListParts`.
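For example, with the SDK for Python, the checksum algorithm can be chosen when the multipart upload is created (a minimal sketch; the client, bucket name, and key name are assumed to exist):

```python
def create_checksum_multipart_upload(s3_client, bucket_name, key_name):
    '''
    Start a multipart upload that verifies each part with CRC32C.
    MD5 isn't supported for directory buckets, so an explicit
    checksum algorithm is passed at creation time.

    :param s3_client: boto3 S3 client
    :param bucket_name: The destination bucket for the multipart upload
    :param key_name: The key name for the object to be uploaded
    :return: The UploadId for the multipart upload
    '''
    response = s3_client.create_multipart_upload(
        Bucket=bucket_name,
        Key=key_name,
        ChecksumAlgorithm='CRC32C',
    )
    return response['UploadId']
```

Each subsequent `UploadPart` response then includes a `ChecksumCRC32C` value that you can record alongside the part's ETag and pass to `CompleteMultipartUpload`.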

**Important**  
When using the preceding checksum algorithms, the parts must use consecutive part numbers. If you try to complete a multipart upload request with nonconsecutive part numbers, Amazon S3 generates an `HTTP 400 Bad Request` (Invalid Part Order) error.

 For more information about how checksums work with multipart upload objects, see [Checking object integrity in Amazon S3](checking-object-integrity.md).

## Concurrent multipart upload operations
<a name="s3-express-distributedmpupload"></a>

In a distributed development environment, your application can initiate several updates on the same object at the same time. For example, your application might initiate several multipart uploads by using the same object key. For each of these uploads, your application can then upload parts and send a complete upload request to Amazon S3 to create the object. For S3 Express One Zone, the object creation time is the completion date of the multipart upload.

**Important**  
Versioning isn’t supported for objects that are stored in directory buckets.

## Multipart uploads and pricing
<a name="s3-express-mpuploadpricing"></a>

After you initiate a multipart upload, Amazon S3 retains all the parts until you either complete or abort the upload. Throughout its lifetime, you are billed for all storage, bandwidth, and requests for this multipart upload and its associated parts. If you abort the multipart upload, Amazon S3 deletes the upload artifacts and any parts that you have uploaded, and you are no longer billed for them. There are no early delete charges for deleting incomplete multipart uploads, regardless of the storage class specified. For more information about pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

**Important**  
If the complete multipart upload request isn't sent successfully, the object parts aren't assembled and an object isn't created. You are billed for all storage associated with uploaded parts. It's important that you either complete the multipart upload to have the object created or abort the multipart upload to remove any uploaded parts.   
Before you can delete a directory bucket, you must complete or abort all in-progress multipart uploads. Directory buckets don't support S3 Lifecycle configurations. If needed, you can list your active multipart uploads, then abort the uploads, and then delete your bucket. 
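The cleanup described above can be sketched with the SDK for Python, which provides a paginator for the `ListMultipartUploads` operation (a minimal sketch; the client and bucket name are assumed to exist):

```python
def abort_all_multipart_uploads(s3_client, bucket_name):
    '''
    Abort every in-progress multipart upload so that a directory
    bucket can be deleted. Directory buckets don't support S3
    Lifecycle rules, so incomplete uploads must be removed manually.

    :param s3_client: boto3 S3 client
    :param bucket_name: Bucket whose in-progress multipart uploads are aborted
    '''
    paginator = s3_client.get_paginator('list_multipart_uploads')
    for page in paginator.paginate(Bucket=bucket_name):
        for upload in page.get('Uploads', []):
            s3_client.abort_multipart_upload(
                Bucket=bucket_name,
                Key=upload['Key'],
                UploadId=upload['UploadId'],
            )
```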

## Multipart upload API operations and permissions
<a name="s3-express-mpuAndPermissions"></a>

To allow access to object management API operations on a directory bucket, you grant the `s3express:CreateSession` permission in a bucket policy or an AWS Identity and Access Management (IAM) identity-based policy.

You must have the necessary permissions to use the multipart upload operations. You can use bucket policies or IAM identity-based policies to grant IAM principals permissions to perform these operations. The following table lists the required permissions for various multipart upload operations. 

You can identify the initiator of a multipart upload through the `Initiator` element. If the initiator is an AWS account, this element provides the same information as the `Owner` element. If the initiator is an IAM user, this element provides the user ARN and display name.


| Action | Required permissions | 
| --- | --- | 
|  Create a multipart upload  |  To create (initiate) the multipart upload, you must be allowed to perform the `s3express:CreateSession` action on the directory bucket.   | 
| Upload a part |  To upload a part, you must be allowed to perform the `s3express:CreateSession` action on the directory bucket.  For the initiator to upload a part, the bucket owner must allow the initiator to perform the `s3express:CreateSession` action on the directory bucket.  | 
| Upload a part (copy) |  To upload a part, you must be allowed to perform the `s3express:CreateSession` action on the directory bucket.  For the initiator to upload a part for an object, the owner of the bucket must allow the initiator to perform the `s3express:CreateSession` action on the object.  | 
| Complete a multipart upload |  To complete a multipart upload, you must be allowed to perform the `s3express:CreateSession` action on the directory bucket.  For the initiator to complete a multipart upload, the bucket owner must allow the initiator to perform the `s3express:CreateSession` action on the object.  | 
| Abort a multipart upload |  To abort a multipart upload, you must be allowed to perform the `s3express:CreateSession` action.  For the initiator to abort a multipart upload, the initiator must be granted explicit allow access to perform the `s3express:CreateSession` action.   | 
| List parts |  To list the parts in a multipart upload, you must be allowed to perform the `s3express:CreateSession` action on the directory bucket.  | 
| List in-progress multipart uploads |  To list the in-progress multipart uploads to a bucket, you must be allowed to perform the `s3:ListBucketMultipartUploads` action on that bucket.  | 
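Most of the operations in the preceding table require only the `s3express:CreateSession` permission. As a sketch, a bucket policy granting it might look like the following (the account ID, role name, Availability Zone ID, and bucket name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/example-role"
      },
      "Action": "s3express:CreateSession",
      "Resource": "arn:aws:s3express:us-west-2:111122223333:bucket/bucket-base-name--usw2-az1--x-s3"
    }
  ]
}
```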

### API operation support for multipart uploads
<a name="s3-express-mpu-apis"></a>

The following sections in the *Amazon Simple Storage Service API Reference* describe the Amazon S3 REST API operations for multipart uploads. 
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html)

## Examples
<a name="directory-buckets-multipart-upload-examples"></a>

To use a multipart upload to upload an object to a directory bucket in S3 Express One Zone, see the following examples.

**Topics**
+ [Creating a multipart upload](#directory-buckets-multipart-upload-examples-create)
+ [Uploading the parts of a multipart upload](#directory-buckets-multipart-upload-examples-upload-part)
+ [Completing a multipart upload](#directory-buckets-multipart-upload-examples-complete)
+ [Aborting a multipart upload](#directory-buckets-multipart-upload-examples-abort)
+ [Creating a multipart upload copy operation](#directory-buckets-multipart-upload-examples-upload-part-copy)
+ [Listing in-progress multipart uploads](#directory-buckets-multipart-upload-examples-list)
+ [Listing the parts of a multipart upload](#directory-buckets-multipart-upload-examples-list-parts)

### Creating a multipart upload
<a name="directory-buckets-multipart-upload-examples-create"></a>

**Note**  
For directory buckets, when you perform a `CreateMultipartUpload` operation and an `UploadPartCopy` operation, the bucket's default encryption must use the desired encryption configuration, and the request headers you provide in the `CreateMultipartUpload` request must match the default encryption configuration of the destination bucket. 

The following examples show how to create a multipart upload. 

#### Using the AWS SDKs
<a name="directory-buckets-multipart-upload-create-sdks"></a>

------
#### [ SDK for Java 2.x ]

**Example**  

```
/**
 * This method creates a multipart upload request that generates a unique upload ID that is used to track
 * all the upload parts
 *
 * @param s3
 * @param bucketName - for example, 'doc-example-bucket--use1-az4--x-s3'
 * @param key
 * @return
 */
 private static String createMultipartUpload(S3Client s3, String bucketName, String key) {
 
     CreateMultipartUploadRequest createMultipartUploadRequest = CreateMultipartUploadRequest.builder() 
             .bucket(bucketName)
             .key(key)
             .build();
             
     String uploadId = null;
     
     try {
         CreateMultipartUploadResponse response = s3.createMultipartUpload(createMultipartUploadRequest);
         uploadId = response.uploadId();
     }
     catch (S3Exception e) {
         System.err.println(e.awsErrorDetails().errorMessage());
         System.exit(1);
     }
      return uploadId;
 }
```

------
#### [ SDK for Python ]

**Example**  

```
def create_multipart_upload(s3_client, bucket_name, key_name):
    '''
    Create a multipart upload to a directory bucket

    :param s3_client: boto3 S3 client
    :param bucket_name: The destination bucket for the multipart upload
    :param key_name: The key name for the object to be uploaded
    :return: The UploadId for the multipart upload if created successfully, else None
    '''

    try:
        mpu = s3_client.create_multipart_upload(Bucket = bucket_name, Key = key_name)
        return mpu['UploadId']
    except ClientError as e:
        logging.error(e)
        return None
```

------

#### Using the AWS CLI
<a name="directory-bucket-multipart-upload-create-cli"></a>

This example shows how to create a multipart upload to a directory bucket by using the AWS CLI. This command starts a multipart upload to the directory bucket *bucket-base-name*--*zone-id*--x-s3 for the object *KEY\_NAME*. To use the command, replace the *user input placeholders* with your own information.

```
aws s3api create-multipart-upload --bucket bucket-base-name--zone-id--x-s3 --key KEY_NAME
```

For more information, see [create-multipart-upload](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/create-multipart-upload.html) in the *AWS CLI Command Reference*.

### Uploading the parts of a multipart upload
<a name="directory-buckets-multipart-upload-examples-upload-part"></a>

The following examples show how to upload parts of a multipart upload. 

#### Using the AWS SDKs
<a name="directory-buckets-multipart-upload-part-sdks"></a>

------
#### [ SDK for Java 2.x ]

The following example shows how to break a single object into parts and then upload those parts to a directory bucket by using the SDK for Java 2.x. 

**Example**  

```
/**
 * This method creates part requests and uploads individual parts to S3 and then returns all the completed parts
 *
 * @param s3
 * @param bucketName
 * @param key
 * @param uploadId
 * @throws IOException
 */
 private static List<CompletedPart> multipartUpload(S3Client s3, String bucketName, String key, String uploadId, String filePath) throws IOException {

        int partNumber = 1;
        List<CompletedPart> completedParts = new ArrayList<>();
        ByteBuffer bb = ByteBuffer.allocate(1024 * 1024 * 5); // 5 MB byte buffer

        // read the local file, breakdown into chunks and process
        try (RandomAccessFile file = new RandomAccessFile(filePath, "r")) {
            long fileSize = file.length();
            int position = 0;
            while (position < fileSize) {
                file.seek(position);
                int read = file.getChannel().read(bb);

                bb.flip(); // Swap position and limit before reading from the buffer.
                UploadPartRequest uploadPartRequest = UploadPartRequest.builder()
                        .bucket(bucketName)
                        .key(key)
                        .uploadId(uploadId)
                        .partNumber(partNumber)
                        .build();

                UploadPartResponse partResponse = s3.uploadPart(
                        uploadPartRequest,
                        RequestBody.fromByteBuffer(bb));

                CompletedPart part = CompletedPart.builder()
                        .partNumber(partNumber)
                        .eTag(partResponse.eTag())
                        .build();
                completedParts.add(part);

                bb.clear();
                position += read;
                partNumber++;
            }
        } 
        
        catch (IOException e) {
            throw e;
        }
        return completedParts;
    }
```

------
#### [ SDK for Python ]

The following example shows how to break a single object into parts and then upload those parts to a directory bucket by using the SDK for Python. 

**Example**  

```
def multipart_upload(s3_client, bucket_name, key_name, mpu_id, part_size):
    '''
    Break up a file into multiple parts and upload those parts to a directory bucket

    :param s3_client: boto3 S3 client
    :param bucket_name: Destination bucket for the multipart upload
    :param key_name: Key name for object to be uploaded and for the local file that's being uploaded
    :param mpu_id: The UploadId returned from the create_multipart_upload call
    :param part_size: The size parts that the object will be broken into, in bytes. 
                      Minimum 5 MiB, Maximum 5 GiB. There is no minimum size for the last part of your multipart upload.
    :return: part_list for the multipart upload if all parts are uploaded successfully, else None
    '''
    
    part_list = []
    try:
        with open(key_name, 'rb') as file:
            part_counter = 1
            while True:
                file_part = file.read(part_size)
                if not len(file_part):
                    break
                upload_part = s3_client.upload_part(
                    Bucket = bucket_name,
                    Key = key_name,
                    UploadId = mpu_id,
                    Body = file_part,
                    PartNumber = part_counter
                )
                part_list.append({'PartNumber': part_counter, 'ETag': upload_part['ETag']})
                part_counter += 1
    except ClientError as e:
        logging.error(e)
        return None
    return part_list
```

------

#### Using the AWS CLI
<a name="directory-bucket-multipart-upload-part-cli"></a>

This example shows how to break a single object into parts and then upload those parts to a directory bucket by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

```
aws s3api upload-part --bucket bucket-base-name--zone-id--x-s3 --key KEY_NAME --part-number 1 --body LOCAL_FILE_NAME --upload-id "AS_mgt9RaQE9GEaifATue15dAAAAAAAAAAEMAAAAAAAAADQwNzI4MDU0MjUyMBYAAAAAAAAAAA0AAAAAAAAAAAH2AfYAAAAAAAAEBSD0WBKMAQAAAABneY9yBVsK89iFkvWdQhRCcXohE8RbYtc9QvBOG8tNpA"
```

For more information, see [upload-part](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/upload-part.html) in the *AWS CLI Command Reference*.

### Completing a multipart upload
<a name="directory-buckets-multipart-upload-examples-complete"></a>

The following examples show how to complete a multipart upload. 

#### Using the AWS SDKs
<a name="directory-buckets-multipart-upload-complete-sdks"></a>

------
#### [ SDK for Java 2.x ]

The following examples show how to complete a multipart upload by using the SDK for Java 2.x.

**Example**  

```
/**
 * This method completes the multipart upload request by collating all the upload parts
 * @param s3
 * @param bucketName - for example, 'doc-example-bucket--usw2-az1--x-s3'
 * @param key
 * @param uploadId
 * @param uploadParts
 */
 private static void completeMultipartUpload(S3Client s3, String bucketName, String key, String uploadId, List<CompletedPart> uploadParts) {
        CompletedMultipartUpload completedMultipartUpload = CompletedMultipartUpload.builder()
                .parts(uploadParts)
                .build();

        CompleteMultipartUploadRequest completeMultipartUploadRequest =
                CompleteMultipartUploadRequest.builder()
                        .bucket(bucketName)
                        .key(key)
                        .uploadId(uploadId)
                        .multipartUpload(completedMultipartUpload)
                        .build();

        s3.completeMultipartUpload(completeMultipartUploadRequest);
    }

    public static void multipartUploadTest(S3Client s3, String bucketName, String key, String localFilePath)  {
        System.out.println("Starting multipart upload for: " + key);
        try {
            String uploadId = createMultipartUpload(s3, bucketName, key);
            System.out.println(uploadId);
            List<CompletedPart> parts = multipartUpload(s3, bucketName, key, uploadId, localFilePath);
            completeMultipartUpload(s3, bucketName, key, uploadId, parts);
            System.out.println("Multipart upload completed for: " + key);
        } 
        
        catch (Exception e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }
```

------
#### [ SDK for Python ]

The following examples show how to complete a multipart upload by using the SDK for Python.

**Example**  

```
def complete_multipart_upload(s3_client, bucket_name, key_name, mpu_id, part_list):
    '''
    Completes a multipart upload to a directory bucket

    :param s3_client: boto3 S3 client
    :param bucket_name: The destination bucket for the multipart upload
    :param key_name: The key name for the object to be uploaded
    :param mpu_id: The UploadId returned from the create_multipart_upload call
    :param part_list: The list of uploaded part numbers with their associated ETags 
    :return: True if the multipart upload was completed successfully, else False
    '''
    
    try:
        s3_client.complete_multipart_upload(
            Bucket = bucket_name,
            Key = key_name,
            UploadId = mpu_id,
            MultipartUpload = {
                'Parts': part_list
            }
        )
    except ClientError as e:
        logging.error(e)
        return False
    return True
    
if __name__ == '__main__':
    MB = 1024 ** 2
    region = 'us-west-2'
    bucket_name = 'BUCKET_NAME'
    key_name = 'OBJECT_NAME'
    part_size = 10 * MB
    s3_client = boto3.client('s3', region_name = region)
    mpu_id = create_multipart_upload(s3_client, bucket_name, key_name)
    if mpu_id is not None:
        part_list = multipart_upload(s3_client, bucket_name, key_name, mpu_id, part_size)
        if part_list is not None:
            if complete_multipart_upload(s3_client, bucket_name, key_name, mpu_id, part_list):
                print (f'{key_name} successfully uploaded through a multipart upload to {bucket_name}')
            else:
                print (f'Could not upload {key_name} through a multipart upload to {bucket_name}')
```

------

#### Using the AWS CLI
<a name="directory-bucket-multipart-upload-complete-cli"></a>

This example shows how to complete a multipart upload for a directory bucket by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

```
aws s3api complete-multipart-upload --bucket bucket-base-name--zone-id--x-s3 --key KEY_NAME --upload-id "AS_mgt9RaQE9GEaifATue15dAAAAAAAAAAEMAAAAAAAAADQwNzI4MDU0MjUyMBYAAAAAAAAAAA0AAAAAAAAAAAH2AfYAAAAAAAAEBSD0WBKMAQAAAABneY9yBVsK89iFkvWdQhRCcXohE8RbYtc9QvBOG8tNpA" --multipart-upload file://parts.json
```

This example takes a JSON structure that describes the parts of the multipart upload that should be reassembled into the complete file. In this example, the `file://` prefix is used to load the JSON structure from a file named `parts.json` in the local folder.

parts.json:

```
{
  "Parts": [
    {
      "ETag": "6b78c4a64dd641a58dac8d9258b88147",
      "PartNumber": 1
    }
  ]
}
```

For more information, see [complete-multipart-upload](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/complete-multipart-upload.html) in the *AWS CLI Command Reference*.

### Aborting a multipart upload
<a name="directory-buckets-multipart-upload-examples-abort"></a>

The following examples show how to abort a multipart upload. 

#### Using the AWS SDKs
<a name="directory-bucket-multipart-upload-abort-sdk"></a>

------
#### [ SDK for Java 2.x ]

The following example shows how to abort a multipart upload by using the SDK for Java 2.x.

**Example**  

```
public static void abortMultiPartUploads(S3Client s3, String bucketName) {

         try {
             ListMultipartUploadsRequest listMultipartUploadsRequest = ListMultipartUploadsRequest.builder()
                     .bucket(bucketName)
                     .build();

             ListMultipartUploadsResponse response = s3.listMultipartUploads(listMultipartUploadsRequest);
             List<MultipartUpload> uploads = response.uploads();

             AbortMultipartUploadRequest abortMultipartUploadRequest;
             for (MultipartUpload upload: uploads) {
                 abortMultipartUploadRequest = AbortMultipartUploadRequest.builder()
                         .bucket(bucketName)
                         .key(upload.key())
                         .uploadId(upload.uploadId())
                         .build();

                 s3.abortMultipartUpload(abortMultipartUploadRequest);
             }

         } 
         
         catch (S3Exception e) {
             System.err.println(e.getMessage());
             System.exit(1);
         }
     }
```

------
#### [ SDK for Python ]

The following example shows how to abort a multipart upload by using the SDK for Python.

**Example**  

```
import logging
import boto3
from botocore.exceptions import ClientError


def abort_multipart_upload(s3_client, bucket_name, key_name, upload_id):
    '''
    Aborts a partial multipart upload in a directory bucket.
    
    :param s3_client: boto3 S3 client
    :param bucket_name: Bucket where the multipart upload was initiated - for example, 'doc-example-bucket--usw2-az1--x-s3'
    :param key_name: Name of the object for which the multipart upload needs to be aborted
    :param upload_id: Multipart upload ID for the multipart upload to be aborted
    :return: True if the multipart upload was successfully aborted, False if not
    '''
    try:
        s3_client.abort_multipart_upload(
            Bucket = bucket_name,
            Key = key_name,
            UploadId = upload_id
        )
    except ClientError as e:
        logging.error(e)
        return False
    return True


if __name__ == '__main__':
    region = 'us-west-2'
    bucket_name = 'BUCKET_NAME'
    key_name = 'KEY_NAME'
    upload_id = 'UPLOAD_ID'
    s3_client = boto3.client('s3', region_name = region)
    if abort_multipart_upload(s3_client, bucket_name, key_name, upload_id):
        print (f'Multipart upload for object {key_name} in {bucket_name} bucket has been aborted')
    else:
        print (f'Unable to abort multipart upload for object {key_name} in {bucket_name} bucket')
```

------

#### Using the AWS CLI
<a name="directory-bucket-multipart-upload-abort-cli"></a>

The following example shows how to abort a multipart upload by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

```
aws s3api abort-multipart-upload --bucket bucket-base-name--zone-id--x-s3 --key KEY_NAME --upload-id "AS_mgt9RaQE9GEaifATue15dAAAAAAAAAAEMAAAAAAAAADQwNzI4MDU0MjUyMBYAAAAAAAAAAA0AAAAAAAAAAAH2AfYAAAAAAAAEAX5hFw-MAQAAAAB0OxUFeA7LTbWWFS8WYwhrxDxTIDN-pdEEq_agIHqsbg"
```

For more information, see [abort-multipart-upload](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/abort-multipart-upload.html) in the *AWS CLI Command Reference*.

### Creating a multipart upload copy operation
<a name="directory-buckets-multipart-upload-examples-upload-part-copy"></a>

**Note**  
To encrypt new object part copies in a directory bucket with SSE-KMS, you must specify SSE-KMS as the directory bucket's default encryption configuration with a KMS key (specifically, a [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk)). The [AWS managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) (`aws/s3`) isn't supported. Your SSE-KMS configuration can only support 1 [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) per directory bucket for the lifetime of the bucket. After you specify a customer managed key for SSE-KMS, you can't override the customer managed key for the bucket's SSE-KMS configuration. You can't specify server-side encryption settings for new object part copies with SSE-KMS in the [UploadPartCopy ](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html) request headers. Also, the request headers you provide in the `CreateMultipartUpload` request must match the default encryption configuration of the destination bucket. 
S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html). In this case, Amazon S3 makes a call to AWS KMS every time a copy request is made for a KMS-encrypted object.

The following examples show how to copy objects from one bucket to another using a multipart upload. 

#### Using the AWS SDKs
<a name="directory-bucket-multipart-upload-copy-sdk"></a>

------
#### [ SDK for Java 2.x ]

The following example shows how to use a multipart upload to programmatically copy an object from one bucket to another by using the SDK for Java 2.x.

**Example**  

```
/**
 * This method creates a multipart upload request that generates a unique upload ID that is used to track
 * all the upload parts.
 *
 * @param s3
 * @param bucketName
 * @param key
 * @return
 */
 private static String createMultipartUpload(S3Client s3, String bucketName, String key) {
        CreateMultipartUploadRequest createMultipartUploadRequest = CreateMultipartUploadRequest.builder()
                .bucket(bucketName)
                .key(key)
                .build();
        String uploadId = null;
        try {
            CreateMultipartUploadResponse response = s3.createMultipartUpload(createMultipartUploadRequest);
            uploadId = response.uploadId();
        } catch (S3Exception e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
        return uploadId;
  }

  /**
   * Creates copy parts based on source object size and copies over individual parts
   *
   * @param s3
   * @param sourceBucket
   * @param sourceKey
   * @param destnBucket
   * @param destnKey
   * @param uploadId
   * @return
   * @throws IOException
   */
    public static List<CompletedPart> multipartUploadCopy(S3Client s3, String sourceBucket, String sourceKey, String destnBucket, String destnKey, String uploadId) throws IOException {

        // Get the object size to track the end of the copy operation.
        HeadObjectRequest headObjectRequest = HeadObjectRequest
                .builder()
                .bucket(sourceBucket)
                .key(sourceKey)
                .build();
        HeadObjectResponse response = s3.headObject(headObjectRequest);
        Long objectSize = response.contentLength();

        System.out.println("Source Object size: " + objectSize);

        // Copy the object using 20 MB parts.
        long partSize = 20 * 1024 * 1024;
        long bytePosition = 0;
        int partNum = 1;
        List<CompletedPart> completedParts = new ArrayList<>();
        while (bytePosition < objectSize) {
            // The last part might be smaller than partSize, so check to make sure
            // that lastByte isn't beyond the end of the object.
            long lastByte = Math.min(bytePosition + partSize - 1, objectSize - 1);

            System.out.println("part no: " + partNum + ", bytePosition: " + bytePosition + ", lastByte: " + lastByte);

            // Copy this part.
            UploadPartCopyRequest req = UploadPartCopyRequest.builder()
                    .uploadId(uploadId)
                    .sourceBucket(sourceBucket)
                    .sourceKey(sourceKey)
                    .destinationBucket(destnBucket)
                    .destinationKey(destnKey)
                    .copySourceRange("bytes="+bytePosition+"-"+lastByte)
                    .partNumber(partNum)
                    .build();
            UploadPartCopyResponse res = s3.uploadPartCopy(req);
            CompletedPart part = CompletedPart.builder()
                    .partNumber(partNum)
                    .eTag(res.copyPartResult().eTag())
                    .build();
            completedParts.add(part);
            partNum++;
            bytePosition += partSize;
        }
        return completedParts;
    }


    public static void multipartCopyUploadTest(S3Client s3, String srcBucket, String srcKey, String destnBucket, String destnKey)  {
        System.out.println("Starting multipart copy for: " + srcKey);
        try {
            String uploadId = createMultipartUpload(s3, destnBucket, destnKey);
            System.out.println(uploadId);
            List<CompletedPart> parts = multipartUploadCopy(s3, srcBucket, srcKey, destnBucket, destnKey, uploadId);
            completeMultipartUpload(s3, destnBucket, destnKey, uploadId, parts);
            System.out.println("Multipart copy completed for: " + srcKey);
        } catch (Exception e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }
```

------
#### [ SDK for Python ]

The following example shows how to use a multipart upload to programmatically copy an object from one bucket to another by using the SDK for Python.

**Example**  

```
import logging
import boto3
from botocore.exceptions import ClientError

def head_object(s3_client, bucket_name, key_name):
    '''
    Returns metadata for an object in a directory bucket

    :param s3_client: boto3 S3 client
    :param bucket_name: Bucket that contains the object to query for metadata
    :param key_name: Key name to query for metadata
    :return: Metadata for the specified object if successful, else None
    '''

    try:
        response = s3_client.head_object(
            Bucket = bucket_name,
            Key = key_name
        )
        return response
    except ClientError as e:
        logging.error(e)
        return None
    
def create_multipart_upload(s3_client, bucket_name, key_name):
    '''
    Create a multipart upload to a directory bucket

    :param s3_client: boto3 S3 client
    :param bucket_name: Destination bucket for the multipart upload
    :param key_name: Key name of the object to be uploaded
    :return: UploadId for the multipart upload if created successfully, else None
    '''
    
    try:
        mpu = s3_client.create_multipart_upload(Bucket = bucket_name, Key = key_name)
        return mpu['UploadId'] 
    except ClientError as e:
        logging.error(e)
        return None

def multipart_copy_upload(s3_client, source_bucket_name, key_name, target_bucket_name, mpu_id, part_size):
    '''
    Copy an object in a directory bucket to another bucket in multiple parts of a specified size
    
    :param s3_client: boto3 S3 client
    :param source_bucket_name: Bucket where the source object exists
    :param key_name: Key name of the object to be copied
    :param target_bucket_name: Destination bucket for copied object
    :param mpu_id: The UploadId returned from the create_multipart_upload call
    :param part_size: The size parts that the object will be broken into, in bytes. 
                      Minimum 5 MiB, Maximum 5 GiB. There is no minimum size for the last part of your multipart upload.
    :return: part_list for the multipart copy if all parts are copied successfully, else None
    '''
    
    part_list = []
    copy_source = {
        'Bucket': source_bucket_name,
        'Key': key_name
    }
    try:
        part_counter = 1
        object_size = head_object(s3_client, source_bucket_name, key_name)
        if object_size is None:
            return None
        object_size = object_size['ContentLength']
        while (part_counter - 1) * part_size < object_size:
            bytes_start = (part_counter - 1) * part_size
            bytes_end = (part_counter * part_size) - 1
            upload_copy_part = s3_client.upload_part_copy(
                Bucket = target_bucket_name,
                CopySource = copy_source,
                CopySourceRange = f'bytes={bytes_start}-{bytes_end}',
                Key = key_name,
                PartNumber = part_counter,
                UploadId = mpu_id
            )
            part_list.append({'PartNumber': part_counter, 'ETag': upload_copy_part['CopyPartResult']['ETag']})
            part_counter += 1
    except ClientError as e:
        logging.error(e)
        return None
    return part_list

def complete_multipart_upload(s3_client, bucket_name, key_name, mpu_id, part_list):
    '''
    Completes a multipart upload to a directory bucket

    :param s3_client: boto3 S3 client
    :param bucket_name: Destination bucket for the multipart upload
    :param key_name: Key name of the object to be uploaded
    :param mpu_id: The UploadId returned from the create_multipart_upload call
    :param part_list: List of uploaded part numbers with associated ETags 
    :return: True if the multipart upload was completed successfully, else False
    '''
    
    try:
        s3_client.complete_multipart_upload(
            Bucket = bucket_name,
            Key = key_name,
            UploadId = mpu_id,
            MultipartUpload = {
                'Parts': part_list
            }
        )
    except ClientError as e:
        logging.error(e)
        return False
    return True

if __name__ == '__main__':
    MB = 1024 ** 2
    region = 'us-west-2'
    source_bucket_name = 'SOURCE_BUCKET_NAME'
    target_bucket_name = 'TARGET_BUCKET_NAME'
    key_name = 'KEY_NAME'
    part_size = 10 * MB
    s3_client = boto3.client('s3', region_name = region)
    mpu_id = create_multipart_upload(s3_client, target_bucket_name, key_name)
    if mpu_id is not None:
        part_list = multipart_copy_upload(s3_client, source_bucket_name, key_name, target_bucket_name, mpu_id, part_size)
        if part_list is not None:
            if complete_multipart_upload(s3_client, target_bucket_name, key_name, mpu_id, part_list):
                print (f'{key_name} successfully copied through multipart copy from {source_bucket_name} to {target_bucket_name}')
            else:
                print (f'Could not copy {key_name} through multipart copy from {source_bucket_name} to {target_bucket_name}')
```

------

#### Using the AWS CLI
<a name="directory-bucket-multipart-upload-copy-cli"></a>

The following example shows how to use a multipart upload to programmatically copy an object from one bucket to a directory bucket by using the AWS CLI. To use this command, replace the *user input placeholders* with your own information.

```
aws s3api upload-part-copy --bucket bucket-base-name--zone-id--x-s3 --key TARGET_KEY_NAME --copy-source SOURCE_BUCKET_NAME/SOURCE_KEY_NAME --part-number 1 --upload-id "AS_mgt9RaQE9GEaifATue15dAAAAAAAAAAEMAAAAAAAAADQwNzI4MDU0MjUyMBYAAAAAAAAAAA0AAAAAAAAAAAH2AfYAAAAAAAAEBnJ4cxKMAQAAAABiNXpOFVZJ1tZcKWib9YKE1C565_hCkDJ_4AfCap2svg"
```

For more information, see [upload-part-copy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/upload-part-copy.html) in the *AWS CLI Command Reference*.

### Listing in-progress multipart uploads
<a name="directory-buckets-multipart-upload-examples-list"></a>

To list in-progress multipart uploads to a directory bucket, you can use the AWS SDKs, or the AWS CLI. 

#### Using the AWS SDKs
<a name="directory-bucket-multipart-upload-list-sdk"></a>

------
#### [ SDK for Java 2.x ]

The following examples show how to list in-progress (incomplete) multipart uploads by using the SDK for Java 2.x.

**Example**  

```
public static void listMultiPartUploads(S3Client s3, String bucketName) {
    try {
        ListMultipartUploadsRequest listMultipartUploadsRequest = ListMultipartUploadsRequest.builder()
            .bucket(bucketName)
            .build();

        ListMultipartUploadsResponse response = s3.listMultipartUploads(listMultipartUploadsRequest);
        List<MultipartUpload> uploads = response.uploads();
        for (MultipartUpload upload : uploads) {
            System.out.println("Upload in progress: Key = \"" + upload.key() + "\", id = " + upload.uploadId());
        }
    }
    catch (S3Exception e) {
        System.err.println(e.getMessage());
        System.exit(1);
    }
}
```

------
#### [ SDK for Python ]

The following examples show how to list in-progress (incomplete) multipart uploads by using the SDK for Python.

**Example**  

```
import logging
import boto3
from botocore.exceptions import ClientError

def list_multipart_uploads(s3_client, bucket_name):
    '''
    List any incomplete multipart uploads in a directory bucket in the specified Region

    :param s3_client: boto3 S3 client
    :param bucket_name: Bucket to check for incomplete multipart uploads
    :return: List of incomplete multipart uploads if there are any, None if not
    '''
    
    try:
        response = s3_client.list_multipart_uploads(Bucket = bucket_name)
        if 'Uploads' in response.keys():
            return response['Uploads']
        else:
            return None 
    except ClientError as e:
        logging.error(e)

if __name__ == '__main__':
    bucket_name = 'BUCKET_NAME'
    region = 'us-west-2'
    s3_client = boto3.client('s3', region_name = region)
    multipart_uploads = list_multipart_uploads(s3_client, bucket_name)
    if multipart_uploads is not None:
        print (f'There are {len(multipart_uploads)} incomplete multipart uploads for {bucket_name}')
    else:
        print (f'There are no incomplete multipart uploads for {bucket_name}')
```

------

#### Using the AWS CLI
<a name="directory-bucket-multipart-upload-list-cli"></a>

The following examples show how to list in-progress (incomplete) multipart uploads by using the AWS CLI. To use this command, replace the *user input placeholders* with your own information.

```
aws s3api list-multipart-uploads --bucket bucket-base-name--zone-id--x-s3
```

For more information, see [list-multipart-uploads](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/list-multipart-uploads.html) in the *AWS CLI Command Reference*.

### Listing the parts of a multipart upload
<a name="directory-buckets-multipart-upload-examples-list-parts"></a>

The following examples show how to list the parts of a multipart upload to a directory bucket.

#### Using the AWS SDKs
<a name="directory-bucket-multipart-upload-list-parts-sdk"></a>

------
#### [ SDK for Java 2.x ]

The following examples show how to list the parts of a multipart upload to a directory bucket by using SDK for Java 2.x.

```
public static void listMultiPartUploadsParts(S3Client s3, String bucketName, String objKey, String uploadID) {
    try {
        ListPartsRequest listPartsRequest = ListPartsRequest.builder()
            .bucket(bucketName)
            .uploadId(uploadID)
            .key(objKey)
            .build();

        ListPartsResponse response = s3.listParts(listPartsRequest);
        List<Part> parts = response.parts();
        for (Part part : parts) {
            System.out.println("Upload in progress: Part number = \"" + part.partNumber() + "\", etag = " + part.eTag());
        }
    }
    catch (S3Exception e) {
        System.err.println(e.getMessage());
        System.exit(1);
    }
}
```

------
#### [ SDK for Python ]

The following examples show how to list the parts of a multipart upload to a directory bucket by using SDK for Python.

```
import logging
import boto3
from botocore.exceptions import ClientError

def list_parts(s3_client, bucket_name, key_name, upload_id):
    '''
    Lists the parts that have been uploaded for a specific multipart upload to a directory bucket.
    
    :param s3_client: boto3 S3 client
    :param bucket_name: Bucket that multipart uploads parts have been uploaded to
    :param key_name: Name of the object that has parts uploaded
    :param upload_id: Multipart upload ID that the parts are associated with
    :return: List of parts associated with the specified multipart upload, None if there are no parts
    '''
    parts_list = []
    next_part_marker = ''
    continuation_flag = True
    try:
        while continuation_flag:
            if next_part_marker == '':
                response = s3_client.list_parts(
                    Bucket = bucket_name,
                    Key = key_name,
                    UploadId = upload_id
                )
            else:
                response = s3_client.list_parts(
                    Bucket = bucket_name,
                    Key = key_name,
                    UploadId = upload_id,
                    PartNumberMarker = next_part_marker
                )
            if 'Parts' in response:
                for part in response['Parts']:
                    parts_list.append(part)
                if response['IsTruncated']:
                    next_part_marker = response['NextPartNumberMarker']
                else:
                    continuation_flag = False
            else:
                continuation_flag = False
        return parts_list
    except ClientError as e:
        logging.error(e)
        return None

if __name__ == '__main__':
    region = 'us-west-2'
    bucket_name = 'BUCKET_NAME'
    key_name = 'KEY_NAME'
    upload_id = 'UPLOAD_ID'
    s3_client = boto3.client('s3', region_name = region)
    parts_list = list_parts(s3_client, bucket_name, key_name, upload_id)
    if parts_list is not None:
        print (f'{key_name} has {len(parts_list)} parts uploaded to {bucket_name}')
    else:
        print (f'There are no multipart uploads with that upload ID for {bucket_name} bucket')
```

------

#### Using the AWS CLI
<a name="directory-bucket-multipart-upload-list-parts-cli"></a>

The following examples show how to list the parts of a multipart upload to a directory bucket by using the AWS CLI. To use this command, replace the *user input placeholders* with your own information.

```
aws s3api list-parts --bucket bucket-base-name--zone-id--x-s3 --key KEY_NAME --upload-id "AS_mgt9RaQE9GEaifATue15dAAAAAAAAAAEMAAAAAAAAADQwNzI4MDU0MjUyMBYAAAAAAAAAAA0AAAAAAAAAAAH2AfYAAAAAAAAEBSD0WBKMAQAAAABneY9yBVsK89iFkvWdQhRCcXohE8RbYtc9QvBOG8tNpA"
```

For more information, see [list-parts](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/list-parts.html) in the *AWS CLI Command Reference*.

# Copying objects from or to a directory bucket
<a name="directory-buckets-objects-copy"></a>

The copy operation creates a copy of an object that is already stored in Amazon S3. You can copy objects between directory buckets and general purpose buckets. You can also copy objects within a bucket and across buckets of the same type, for example, from directory bucket to directory bucket. 

**Note**  
Copying objects across different AWS Regions isn't supported when the source or destination bucket is in an AWS Local Zone. The source and destination buckets must have the same parent AWS Region. The source and destination buckets can be different bucket location types (Availability Zone or Local Zone).

You can create a copy of an object of up to 5 GB in a single atomic operation. However, to copy an object that is greater than 5 GB, you must use the multipart upload API operations. For more information, see [Using multipart uploads with directory buckets](s3-express-using-multipart-upload.md).
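Which path a copy takes depends only on the source object's size. The following sketch illustrates the decision and how the `CopySourceRange` byte ranges for a multipart copy can be computed up front; the helper name and the default part size are illustrative, not part of the S3 API.

```python
COPY_LIMIT = 5 * 1024 ** 3            # largest object a single copy request can handle
DEFAULT_PART_SIZE = 500 * 1024 ** 2   # example part size; parts must be at least 5 MiB

def plan_copy(object_size, part_size=DEFAULT_PART_SIZE):
    '''Return 'single' for objects a single CopyObject call can copy,
    or the list of 'bytes=start-end' ranges to pass as CopySourceRange
    in successive UploadPartCopy calls.'''
    if object_size <= COPY_LIMIT:
        return 'single'
    ranges = []
    start = 0
    while start < object_size:
        # The last part may be shorter than part_size.
        end = min(start + part_size, object_size) - 1
        ranges.append(f'bytes={start}-{end}')
        start += part_size
    return ranges
```

For example, a 6 GiB object with 1 GiB parts yields six ranges, the first being `bytes=0-1073741823`.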

**Permissions**  
 To copy objects, you must have the following permissions:
+ To copy objects from one directory bucket to another directory bucket, you must have the `s3express:CreateSession` permission.
+ To copy objects from directory buckets to general purpose buckets, you must have the `s3express:CreateSession` permission and the `s3:PutObject` permission to write the object copy to the destination bucket. 
+ To copy objects from general purpose buckets to directory buckets, you must have the `s3express:CreateSession` permission and `s3:GetObject` permission to read the source object that is being copied. 

   For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) in the *Amazon Simple Storage Service API Reference*.

**Encryption**  
Amazon S3 automatically encrypts all new objects that are uploaded to an S3 bucket. The default encryption configuration of an S3 bucket is always enabled and is at a minimum set to server-side encryption with Amazon S3 managed keys (SSE-S3). 

For directory buckets, SSE-S3 and server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) are supported. When the destination bucket is a directory bucket, we recommend that the destination bucket's default encryption uses the desired encryption configuration and you don't override the bucket default encryption. Then, new objects are automatically encrypted with the desired encryption settings. Also, S3 Bucket Keys aren't supported, when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html). In this case, Amazon S3 makes a call to AWS KMS every time a copy request is made for a KMS-encrypted object. For more information about the encryption overriding behaviors in directory buckets, see [Specifying server-side encryption with AWS KMS for new object uploads](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-specifying-kms-encryption.html).

For general purpose buckets, you can use SSE-S3 (the default), server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), dual-layer server-side encryption with AWS KMS keys (DSSE-KMS), or server-side encryption with customer-provided keys (SSE-C). 

If you make a copy request that specifies to use DSSE-KMS or SSE-C for a directory bucket (either the source or destination bucket), the response returns an error.

**Tags**  
Directory buckets don't support tags. If you copy an object that has tags from a general purpose bucket to a directory bucket, you receive an HTTP `501 (Not Implemented)` response. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) in the *Amazon Simple Storage Service API Reference*.

**ETags**  
Entity tags (ETags) for S3 Express One Zone are random alphanumeric strings and are not MD5 checksums. To help ensure object integrity, use additional checksums.

**Additional checksums**  
S3 Express One Zone offers you the option to choose the checksum algorithm that is used to validate your data during upload or download. You can select one of the following Secure Hash Algorithms (SHA) or Cyclic Redundancy Check (CRC) data-integrity check algorithms: CRC32, CRC32C, SHA-1, and SHA-256. MD5-based checksums are not supported with the S3 Express One Zone storage class. 

For more information, see [S3 additional checksum best practices](s3-express-optimizing-performance.md#s3-express-optimizing-performance-checksums).
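Because directory-bucket ETags can't be compared against a local MD5, the additional checksum is what you compare instead. The following local sketch (no API call is made) computes the value that S3 reports for a small single-part object uploaded with the CRC32 algorithm, assuming the documented base64 encoding of the big-endian checksum bytes:

```python
import base64
import zlib

def checksum_crc32(data: bytes) -> str:
    '''Compute the base64-encoded CRC32 value that S3 reports as
    ChecksumCRC32 for a single-part object with this content.'''
    crc = zlib.crc32(data) & 0xFFFFFFFF
    return base64.b64encode(crc.to_bytes(4, 'big')).decode('ascii')
```

Comparing this locally computed value against the `ChecksumCRC32` field of a `HeadObject` or `GetObjectAttributes` response verifies a download end to end.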

**Supported features**  
For more information about which Amazon S3 features are supported for S3 Express One Zone, see [Differences for directory buckets](s3-express-differences.md). 

## Using the S3 console (copy to a directory bucket)
<a name="directory-bucket-copy-console"></a>

**Note**  
The restrictions and limitations when you copy an object to a directory bucket with the console are as follows:  
The `Copy` action applies to all objects within the specified folders (prefixes). Objects added to these folders while the action is in progress might be affected.
Objects encrypted with customer-provided encryption keys (SSE-C) cannot be copied by using the S3 console. To copy objects encrypted with SSE-C, use the AWS CLI, AWS SDK, or the Amazon S3 REST API.
Copied objects will not retain the Object Lock settings from the original objects.
If the bucket you are copying objects from uses the bucket owner enforced setting for S3 Object Ownership, object ACLs will not be copied to the specified destination.
If you want to copy objects to a bucket that uses the bucket owner enforced setting for S3 Object Ownership, make sure that the source bucket also uses the bucket owner enforced setting, or remove any object ACL grants to other AWS accounts and groups.
Objects copied from a general purpose bucket to a directory bucket will not retain object tags, ACLs, or ETag values. Checksum values can be copied, but they are not equivalent to ETags, and the checksum value might change from when the object was originally uploaded.
All objects copied to a directory bucket use the bucket owner enforced setting for S3 Object Ownership.

**To copy an object from a general purpose bucket or a directory bucket to a directory bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose the bucket type that you want to copy objects from:
   + To copy from a general purpose bucket, choose the **General purpose buckets** tab.
   + To copy from a directory bucket, choose the **Directory buckets** tab.

1. Choose the general purpose bucket or directory bucket that contains the objects that you want to copy.

1. Choose the **Objects** tab. On the **Objects** page, select the check box to the left of the names of the objects that you want to copy.

1. On the **Actions** menu, choose **Copy**.

   The **Copy** page appears.

1. Under **Destination**, choose **Directory bucket** for your destination type. To specify the destination path, choose **Browse S3**, navigate to the destination, and then choose the option button to the left of the destination. Choose **Choose destination** in the lower-right corner. 

   Alternatively, enter the destination path. 

1. Under **Additional copy settings**, choose whether you want to **Copy source settings**, **Don’t specify settings**, or **Specify settings**. **Copy source settings** is the default option. If you only want to copy the object without the source settings attributes, choose **Don’t specify settings**. Choose **Specify settings** to specify settings for server-side encryption, checksums, and metadata.

1. Choose **Copy** in the bottom-right corner. Amazon S3 copies your objects to the destination.

## Using the S3 console (copy to a general purpose bucket)
<a name="directory-bucket-copy-console"></a>

**Note**  
The restrictions and limitations when you copy an object to a general purpose bucket with the console are as follows:  
The `Copy` action applies to all objects within the specified folders (prefixes). Objects added to these folders while the action is in progress might be affected.
Objects encrypted with customer-provided encryption keys (SSE-C) cannot be copied by using the S3 console. To copy objects encrypted with SSE-C, use the AWS CLI, AWS SDK, or the Amazon S3 REST API.
Copied objects will not retain the Object Lock settings from the original objects.
If the bucket you are copying objects from uses the bucket owner enforced setting for S3 Object Ownership, object ACLs will not be copied to the specified destination.
If you want to copy objects to a bucket that uses the bucket owner enforced setting for S3 Object Ownership, make sure that the source bucket also uses the bucket owner enforced setting, or remove any object ACL grants to other AWS accounts and groups.

**To copy an object from a directory bucket to a general purpose bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. Choose the **Directory buckets** tab.

1. Choose the directory bucket that contains the objects that you want to copy.

1. Choose the **Objects** tab. On the **Objects** page, select the check box to the left of the names of the objects that you want to copy.

1. On the **Actions** menu, choose **Copy**.


1. Under **Destination**, choose **General purpose bucket** for your destination type. To specify the destination path, choose **Browse S3**, navigate to the destination, and choose the option button to the left of the destination. Choose **Choose destination** in the lower-right corner. 

   Alternatively, enter the destination path. 

1. Under **Additional copy settings**, choose whether you want to **Copy source settings**, **Don’t specify settings**, or **Specify settings**. **Copy source settings** is the default option. If you only want to copy the object without the source settings attributes, choose **Don’t specify settings**. Choose **Specify settings** to specify settings for storage class, ACLs, object tags, metadata, server-side encryption, and additional checksums.

1. Choose **Copy** in the bottom-right corner. Amazon S3 copies your objects to the destination.

## Using the AWS SDKs
<a name="directory-bucket-copy-sdks"></a>

------
#### [ SDK for Java 2.x ]

**Example**  

```
public static void copyBucketObject(S3Client s3, String sourceBucket, String objectKey, String targetBucket) {
    CopyObjectRequest copyReq = CopyObjectRequest.builder()
        .sourceBucket(sourceBucket)
        .sourceKey(objectKey)
        .destinationBucket(targetBucket)
        .destinationKey(objectKey)
        .build();

    try {
        CopyObjectResponse copyRes = s3.copyObject(copyReq);
        System.out.println("Successfully copied " + objectKey + " from bucket " + sourceBucket + " into bucket " + targetBucket);
    }
    catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
}
```

------

## Using the AWS CLI
<a name="directory-copy-object-cli"></a>

The following `copy-object` example command shows how you can use the AWS CLI to copy an object from one bucket to another bucket. You can copy objects between bucket types. To run this command, replace the *user input placeholders* with your own information.

```
aws s3api copy-object --copy-source SOURCE_BUCKET/SOURCE_KEY_NAME --key TARGET_KEY_NAME --bucket TARGET_BUCKET_NAME
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/copy-object.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/copy-object.html) in the *AWS CLI Command Reference*.

# Deleting objects from a directory bucket
<a name="directory-bucket-delete-object"></a>

You can delete objects from an Amazon S3 directory bucket by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), or AWS SDKs. For more information, see [Working with directory buckets](directory-buckets-overview.md) and [S3 Express One Zone](directory-bucket-high-performance.md#s3-express-one-zone).

**Warning**  
Deleting an object can't be undone.
This action deletes all specified objects. When deleting folders, wait for the delete action to finish before adding new objects to the folder. Otherwise, new objects might be deleted as well.

**Note**  
When you programmatically delete multiple objects from a directory bucket, note the following:  
Object keys in `DeleteObjects` requests must contain at least one non-white space character. Strings of all white space characters are not supported.
Object keys in `DeleteObjects` requests cannot contain Unicode control characters, except for newline (`\n`), tab (`\t`), and carriage return (`\r`).
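The two key constraints above can be checked client-side before building a `DeleteObjects` request. The helper below is an illustrative sketch, not part of the S3 API; it uses the Unicode `Cc` category to detect control characters and allows only newline, tab, and carriage return.

```python
import unicodedata

ALLOWED_CONTROLS = {'\n', '\t', '\r'}

def is_valid_delete_key(key: str) -> bool:
    '''Return True if the key satisfies the DeleteObjects constraints for
    directory buckets: at least one non-white space character, and no
    Unicode control characters other than newline, tab, and carriage return.'''
    if not key.strip():
        # All white space (or empty) keys are not supported.
        return False
    for ch in key:
        # Unicode category Cc covers the C0/C1 control characters.
        if unicodedata.category(ch) == 'Cc' and ch not in ALLOWED_CONTROLS:
            return False
    return True
```

Keys that fail this check can be filtered out of the `Objects` list before the request is sent.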

## Using the S3 console
<a name="delete-object-directory-bucket-console"></a>

**To delete objects**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Directory buckets**.

1. Choose the directory bucket that contains the objects that you want to delete.

1. Choose the **Objects** tab. In the **Objects** list, select the check box to the left of the object or objects that you want to delete.

1. Choose **Delete**.


1. On the **Delete objects** page, enter **permanently delete** in the text box.

1. Choose **Delete objects**.

## Using the AWS SDKs
<a name="delete-object-directory-bucket-sdks"></a>

------
#### [ SDK for Java 2.x ]

**Example**  
The following example deletes objects in a directory bucket by using the AWS SDK for Java 2.x.   

```
static void deleteObject(S3Client s3Client, String bucketName, String objectKey) {
    try {
        DeleteObjectRequest del = DeleteObjectRequest.builder()
            .bucket(bucketName)
            .key(objectKey)
            .build();

        s3Client.deleteObject(del);
        System.out.println("Object " + objectKey + " has been deleted");
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
}
```

------
#### [ SDK for Python ]

**Example**  
The following example deletes objects in a directory bucket by using the AWS SDK for Python (Boto3).   

```
import logging
import boto3
from botocore.exceptions import ClientError

def delete_objects(s3_client, bucket_name, objects):
    '''
    Delete a list of objects in a directory bucket

    :param s3_client: boto3 S3 client
    :param bucket_name: Bucket that contains objects to be deleted; for example, 'doc-example-bucket--usw2-az1--x-s3'
    :param objects: List of dictionaries that specify the key names to delete
    :return: Response output if successful, else False
    '''

    try:
        response = s3_client.delete_objects(
            Bucket = bucket_name,
            Delete = {
                'Objects': objects
            } 
        )
        return response
    except ClientError as e:
        logging.error(e)
        return False
    

if __name__ == '__main__':
    region = 'us-west-2'
    bucket_name = 'BUCKET_NAME'
    objects = [
        {
            'Key': '0.txt'
        },
        {
            'Key': '1.txt'
        },
        {
            'Key': '2.txt'
        },
        {
            'Key': '3.txt'
        },
        {
            'Key': '4.txt'
        }
    ]
    
    s3_client = boto3.client('s3', region_name = region)
    results = delete_objects(s3_client, bucket_name, objects)
    if results:
        if 'Deleted' in results:
            print (f'Deleted {len(results["Deleted"])} objects from {bucket_name}')
        if 'Errors' in results:
            print (f'Failed to delete {len(results["Errors"])} objects from {bucket_name}')
```

------

## Using the AWS CLI
<a name="directory-download-object-cli"></a>

The following `delete-object` example command shows how you can use the AWS CLI to delete an object from a directory bucket. To run this command, replace the `user input placeholders` with your own information.

```
aws s3api delete-object --bucket bucket-base-name--zone-id--x-s3 --key KEY_NAME 
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/delete-object.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/delete-object.html) in the *AWS CLI Command Reference*.

The following `delete-objects` example command shows how you can use the AWS CLI to delete objects from a directory bucket. To run this command, replace the `user input placeholders` with your own information.

The `delete.json` file is as follows: 

```
{
    "Objects": [
        {
            "Key": "0.txt"
        },
        {
            "Key": "1.txt"
        },
        {
            "Key": "2.txt"
        },
        {
            "Key": "3.txt"
        }
    ]
}
```

The `delete-objects` example command is as follows:

```
aws s3api delete-objects --bucket bucket-base-name--zone-id--x-s3 --delete file://delete.json 
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/delete-objects.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/delete-objects.html) in the *AWS CLI Command Reference*.

# Downloading an object from a directory bucket
<a name="directory-buckets-objects-GetExamples"></a>

 The following code examples show how to read data from (download) an object in an Amazon S3 directory bucket by using the `GetObject` API operation. 

## Using the AWS SDKs
<a name="directory-bucket-copy-sdks"></a>

------
#### [ SDK for Java 2.x ]

**Example**  
The following code example shows how to read data from an object in a directory bucket by using the AWS SDK for Java 2.x.   

```
public static void getObject(S3Client s3Client, String bucketName, String objectKey) {
     try {
         GetObjectRequest objectRequest = GetObjectRequest
            .builder()
            .key(objectKey)
            .bucket(bucketName)
            .build();
            
         ResponseBytes<GetObjectResponse> objectBytes = s3Client.getObjectAsBytes(objectRequest);
         byte[] data = objectBytes.asByteArray();
         
         //Print object contents to console
         String s = new String(data, StandardCharsets.UTF_8);
         System.out.println(s);
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
}
```

------
#### [ SDK for Python ]

**Example**  
The following code example shows how to read data from an object in a directory bucket by using the AWS SDK for Python (Boto3).   

```
import boto3
from botocore.exceptions import ClientError

def get_object(s3_client: boto3.client, bucket_name: str, key_name: str) -> bytes:
    """
    Downloads an object and returns its contents.

    :param s3_client: The Boto3 S3 client to use for the request.
    :param bucket_name: The bucket that contains the object. 
    :param key_name: The key of the object to be downloaded.
    :return: The object data in bytes.
    """
    try:
        response = s3_client.get_object(Bucket=bucket_name, Key=key_name)
        body = response['Body'].read()
        print(f"Got object '{key_name}' from bucket '{bucket_name}'.")
    except ClientError:
        print(f"Couldn't get object '{key_name}' from bucket '{bucket_name}'.")
        raise
    else:
        return body
        
def main():
    s3_client = boto3.client('s3')
    resp = get_object(s3_client, 'doc-example-bucket--use1-az4--x-s3', 'sample.txt')
    print(resp)
    
if __name__ == "__main__":
    main()
```

------

## Using the AWS CLI
<a name="directory-download-object-cli"></a>

The following `get-object` example command shows how you can use the AWS CLI to download an object from Amazon S3. This command gets the object `KEY_NAME` from the directory bucket `bucket-base-name--zone-id--x-s3`. The object will be downloaded to a file named `LOCAL_FILE_NAME`. To run this command, replace the `user input placeholders` with your own information.

```
aws s3api get-object --bucket bucket-base-name--zone-id--x-s3 --key KEY_NAME LOCAL_FILE_NAME
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-object.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-object.html) in the *AWS CLI Command Reference*.

# Generating presigned URLs to share objects in a directory bucket
<a name="directory-buckets-objects-generate-presigned-url-Examples"></a>

 The following code examples show how to generate presigned URLs to share objects from an Amazon S3 directory bucket.

## Using the AWS CLI
<a name="directory-download-object-cli"></a>

The following example command shows how you can use the AWS CLI to generate a presigned URL for an object from Amazon S3. This command generates a presigned URL for an object `KEY_NAME` from the directory bucket `bucket-base-name--zone-id--x-s3`. To run this command, replace the `user input placeholders` with your own information.

```
aws s3 presign s3://bucket-base-name--zone-id--x-s3/KEY_NAME --expires-in 7200
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/presign.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/presign.html) in the *AWS CLI Command Reference*.

# Retrieving object metadata from directory buckets
<a name="directory-buckets-objects-HeadObjectExamples"></a>

The following AWS SDK and AWS CLI examples show how to use the `HeadObject` and `GetObjectAttributes` API operations to retrieve metadata from an object in an Amazon S3 directory bucket without returning the object itself. 

## Using the AWS SDKs
<a name="directory-bucket-copy-sdks"></a>

------
#### [ SDK for Java 2.x ]

**Example**  
The following code example shows how to retrieve metadata from an object in a directory bucket by using the AWS SDK for Java 2.x.   

```
public static void headObject(S3Client s3Client, String bucketName, String objectKey) {
     try {
         HeadObjectRequest headObjectRequest = HeadObjectRequest
                 .builder()
                 .bucket(bucketName)
                 .key(objectKey)
                 .build();
         HeadObjectResponse response = s3Client.headObject(headObjectRequest);
         System.out.format("Amazon S3 object: \"%s\" found in bucket: \"%s\" with ETag: \"%s\"", objectKey, bucketName, response.eTag());
     } catch (S3Exception e) {
         System.err.println(e.awsErrorDetails().errorMessage());
         System.exit(1);
     }
}
```

------

## Using the AWS CLI
<a name="directory-head-object-cli"></a>

The following `head-object` example command shows how you can use the AWS CLI to retrieve metadata from an object. To run this command, replace the `user input placeholders` with your own information.

```
aws s3api head-object --bucket bucket-base-name--zone-id--x-s3 --key KEY_NAME
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/head-object.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/head-object.html) in the *AWS CLI Command Reference*.

The following `get-object-attributes` example command shows how you can use the AWS CLI to retrieve metadata from an object. To run this command, replace the `user input placeholders` with your own information.

```
aws s3api get-object-attributes --bucket bucket-base-name--zone-id--x-s3 --key KEY_NAME --object-attributes "StorageClass" "ETag" "ObjectSize"
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-object-attributes.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-object-attributes.html) in the *AWS CLI Command Reference*.

# Listing objects from a directory bucket
<a name="directory-buckets-objects-listobjectsExamples"></a>

 The following code examples show how to list objects in an Amazon S3 directory bucket by using the `ListObjectsV2` API operation. 

## Using the AWS CLI
<a name="directory-download-object-cli"></a>

The following `list-objects-v2` example command shows how you can use the AWS CLI to list objects from Amazon S3. This command lists objects from the directory bucket `bucket-base-name--zone-id--x-s3`. To run this command, replace the `user input placeholders` with your own information.

```
aws s3api list-objects-v2 --bucket bucket-base-name--zone-id--x-s3
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/list-objects-v2.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/list-objects-v2.html) in the *AWS CLI Command Reference*.