

# Work with Amazon S3
<a name="examples-s3"></a>

This section provides background information for working with Amazon S3 using the AWS SDK for Java 2.x. It complements the [Amazon S3 Java v2 examples](java_s3_code_examples.md) presented in the *Code examples* section of this guide.

## S3 clients in the AWS SDK for Java 2.x
<a name="s3-clients"></a>

The AWS SDK for Java 2.x provides several types of S3 clients. The following table shows their differences and can help you decide which client is best for your use case.


**Different flavors of Amazon S3 clients**  

| S3 Client | Short description | When to use | Limitation/drawback | 
| --- | --- | --- | --- | 
|  **AWS CRT-based S3 client** Interface: [S3AsyncClient](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3AsyncClient.html) Builder: [S3CrtAsyncClientBuilder](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3CrtAsyncClientBuilder.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/examples-s3.html) See [Use a performant S3 client: AWS CRT-based S3 client](crt-based-s3-client.md).  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/examples-s3.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/examples-s3.html)  | 
|  **Java-based S3 async client *with* multipart enabled** Interface: [S3AsyncClient](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3AsyncClient.html) Builder: [S3AsyncClientBuilder](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3AsyncClientBuilder.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/examples-s3.html) See [Configure the Java-based S3 async client to use parallel transfers](s3-async-client-multipart.md).  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/examples-s3.html)  | Less performant than the AWS CRT-based S3 client. | 
|  **Java-based S3 async client *without* multipart enabled** Interface: [S3AsyncClient](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3AsyncClient.html) Builder: [S3AsyncClientBuilder](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3AsyncClientBuilder.html) |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/examples-s3.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/examples-s3.html)  |  No performance optimization.  | 
|  **Java-based S3 sync client** Interface: [S3Client](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Client.html) Builder: [S3ClientBuilder](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3ClientBuilder.html) |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/examples-s3.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/examples-s3.html)  |  No performance optimization.  | 

**Note**  
From version 2.18.x onward, the AWS SDK for Java 2.x uses [virtual hosted-style addressing](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html#virtual-hosted-style-access) when you include an endpoint override, as long as the bucket name is a valid DNS label.   
Call the [forcePathStyle](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3BaseClientBuilder.html#forcePathStyle(java.lang.Boolean)) method with `true` in your client builder to force the client to use path-style addressing for buckets.  
The following example shows a service client configured with an endpoint override and using path-style addressing.  

```
S3Client client = S3Client.builder()
                          .region(Region.US_WEST_2)
                          .endpointOverride(URI.create("https://s3.us-west-2.amazonaws.com"))
                          .forcePathStyle(true)
                          .build();
```

**Topics**
+ [S3 clients in the SDK](#s3-clients)
+ [Uploading streams to S3](best-practices-s3-uploads.md)
+ [Pre-signed URLs](examples-s3-presign.md)
+ [Cross-Region access](s3-cross-region.md)
+ [Data integrity protection with checksums](s3-checksums.md)
+ [Use a performant S3 client](crt-based-s3-client.md)
+ [Configure parallel transfer support](s3-async-client-multipart.md)
+ [Transfer files and directories](transfer-manager.md)
+ [S3 Event Notifications](examples-s3-event-notifications.md)

# Uploading streams to Amazon S3 using the AWS SDK for Java 2.x
<a name="best-practices-s3-uploads"></a>

When you upload content to S3 from a stream by using [putObject](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Client.html#putObject(software.amazon.awssdk.services.s3.model.PutObjectRequest,software.amazon.awssdk.core.sync.RequestBody)) or [uploadPart](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Client.html#uploadPart(software.amazon.awssdk.services.s3.model.UploadPartRequest,software.amazon.awssdk.core.sync.RequestBody)), you use the `RequestBody` factory class of the synchronous API to supply the stream. For the asynchronous API, `AsyncRequestBody` is the equivalent factory class.

## Which methods upload streams?
<a name="s3-stream-upload-methods"></a>

For the synchronous API, you can use the following factory methods of `RequestBody` to supply the stream:
+ `[fromInputStream](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/sync/RequestBody.html#fromInputStream(java.io.InputStream,long))(InputStream inputStream, long contentLength)`
+ `[fromContentProvider](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/sync/RequestBody.html#fromContentProvider(software.amazon.awssdk.http.ContentStreamProvider,long,java.lang.String))(ContentStreamProvider provider, long contentLength, String mimeType)`
  + The `ContentStreamProvider` interface has the `fromInputStream(InputStream inputStream)` factory method.
+ `[fromContentProvider](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/sync/RequestBody.html#fromContentProvider(software.amazon.awssdk.http.ContentStreamProvider,java.lang.String))(ContentStreamProvider provider, String mimeType)`

For the asynchronous API, you can use the following factory methods of `AsyncRequestBody`:
+ `[fromInputStream](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/async/AsyncRequestBody.html#fromInputStream(java.io.InputStream,java.lang.Long,java.util.concurrent.ExecutorService))(InputStream inputStream, Long contentLength, ExecutorService executor)`
+ `[fromInputStream](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/async/AsyncRequestBody.html#fromInputStream(software.amazon.awssdk.core.async.AsyncRequestBodyFromInputStreamConfiguration))(AsyncRequestBodyFromInputStreamConfiguration configuration)`
  + You use the `AsyncRequestBodyFromInputStreamConfiguration.Builder` to supply the stream.
+ `[fromInputStream](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/async/AsyncRequestBody.html#fromInputStream(java.util.function.Consumer))(Consumer<AsyncRequestBodyFromInputStreamConfiguration.Builder> configuration)`
+ `[forBlockingInputStream](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/async/AsyncRequestBody.html#forBlockingInputStream(java.lang.Long))(Long contentLength)`
  + The resulting `[BlockingInputStreamAsyncRequestBody](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/core/async/BlockingInputStreamAsyncRequestBody.html)` contains the `writeInputStream(InputStream inputStream)` method that you can use to provide the stream.
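
The last option can be sketched as follows. This is a minimal, hypothetical example (the bucket name, key, and sample content are placeholders): the `putObject` call is started first, and `writeInputStream` then feeds the stream to the in-flight request, blocking until the stream has been fully consumed.

```java
import software.amazon.awssdk.core.async.AsyncRequestBody;
import software.amazon.awssdk.core.async.BlockingInputStreamAsyncRequestBody;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.PutObjectResponse;

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CompletableFuture;

public class BlockingStreamUploadSketch {
    public static void main(String[] args) {
        S3AsyncClient s3 = S3AsyncClient.crtCreate();
        InputStream inputStream =
                new ByteArrayInputStream("Hello, S3".getBytes(StandardCharsets.UTF_8));

        // Pass 'null' because the content length is unknown.
        BlockingInputStreamAsyncRequestBody body =
                AsyncRequestBody.forBlockingInputStream(null);

        // Start the request first; the SDK waits for the body to be written.
        CompletableFuture<PutObjectResponse> responseFuture =
                s3.putObject(r -> r.bucket("amzn-s3-demo-bucket").key("my-key"), body);

        body.writeInputStream(inputStream); // Blocks until the stream is fully read.
        PutObjectResponse response = responseFuture.join();
        System.out.println("ETag: " + response.eTag());
        s3.close();
    }
}
```

Running this sketch requires AWS credentials and an existing bucket; it is meant to show the call ordering, not to be copied verbatim.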

## Performing the upload
<a name="s3-upload-stream-perform"></a>

### If you know the length of the stream
<a name="s3-stream-upload-supply-content-length"></a>

As the method signatures shown previously indicate, most methods accept a content length parameter.

If you know the content length in bytes, provide the exact value:

```
// Always provide the exact content length when it's available.
long contentLength = 1024; // Exact size in bytes.
s3Client.putObject(req -> req
    .bucket("amzn-s3-demo-bucket")
    .key("my-key"),
RequestBody.fromInputStream(inputStream, contentLength));
```

**Warning**  
When you upload from an input stream and the content length that you specify doesn't match the actual byte count, you might experience:  
+ Truncated objects, if your specified length is too small
+ Failed uploads or hanging connections, if your specified length is too large
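
One way to avoid a mismatch is to derive the length from the source instead of hard-coding it. The following JDK-only sketch (the upload call itself is omitted) writes a temporary file, reads its exact size with `Files.size`, and verifies that the stream yields the same number of bytes; that value is what you would pass as the content length:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class ContentLengthCheck {
    public static void main(String[] args) throws IOException {
        // A temporary file stands in for the upload source.
        Path source = Files.createTempFile("upload-source", ".txt");
        Files.writeString(source, "Twelve bytes"); // 12 ASCII characters = 12 bytes.

        // Exact size in bytes; pass this value to RequestBody.fromInputStream.
        long contentLength = Files.size(source);

        // Confirm the stream produces exactly contentLength bytes.
        long bytesRead = 0;
        try (InputStream in = Files.newInputStream(source)) {
            byte[] buffer = new byte[4096];
            int n;
            while ((n = in.read(buffer)) != -1) {
                bytesRead += n;
            }
        }
        System.out.println(contentLength + " " + bytesRead); // Prints "12 12".
        Files.delete(source);
    }
}
```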

### If you don't know the length of the stream
<a name="s3-stream-upload-unknown-length"></a>

#### Using the synchronous API
<a name="s3-upload-unknown-sync-client"></a>

Use the `fromContentProvider(ContentStreamProvider provider, String mimeType)` method:

```
public PutObjectResponse syncClient_stream_unknown_size(String bucketName, String key, InputStream inputStream) {

    S3Client s3Client = S3Client.create();

    RequestBody body = RequestBody.fromContentProvider(ContentStreamProvider.fromInputStream(inputStream), "text/plain");
    return s3Client.putObject(b -> b.bucket(bucketName).key(key), body);
}
```

Because the SDK buffers the entire stream in memory to calculate the content length, you can run into memory issues with large streams. If you need to upload large streams with the synchronous client, consider using the multipart API:

##### Upload a stream using the synchronous client API and multipart API
<a name="sync-multipart-upload-stream"></a>

```
public static void uploadStreamToS3(String bucketName, String key, InputStream inputStream) {
    // Create S3 client
    S3Client s3Client = S3Client.create();
    try {
        // Step 1: Initiate the multipart upload
        CreateMultipartUploadRequest createMultipartUploadRequest = CreateMultipartUploadRequest.builder()
                .bucket(bucketName)
                .key(key)
                .build();

        CreateMultipartUploadResponse createResponse = s3Client.createMultipartUpload(createMultipartUploadRequest);
        String uploadId = createResponse.uploadId();
        System.out.println("Started multipart upload with ID: " + uploadId);

        // Step 2: Upload parts
        List<CompletedPart> completedParts = new ArrayList<>();
        int partNumber = 1;
        byte[] buffer = new byte[PART_SIZE];
        int bytesRead;

        try {
            while ((bytesRead = readFullyOrToEnd(inputStream, buffer)) > 0) {
                // Create request to upload a part
                UploadPartRequest uploadPartRequest = UploadPartRequest.builder()
                        .bucket(bucketName)
                        .key(key)
                        .uploadId(uploadId)
                        .partNumber(partNumber)
                        .build();

                // If we didn't read a full buffer, create a properly sized byte array
                RequestBody requestBody;
                if (bytesRead < PART_SIZE) {
                    byte[] lastPartBuffer = new byte[bytesRead];
                    System.arraycopy(buffer, 0, lastPartBuffer, 0, bytesRead);
                    requestBody = RequestBody.fromBytes(lastPartBuffer);
                } else {
                    requestBody = RequestBody.fromBytes(buffer);
                }

                // Upload the part and save the response's ETag
                UploadPartResponse uploadPartResponse = s3Client.uploadPart(uploadPartRequest, requestBody);
                CompletedPart part = CompletedPart.builder()
                        .partNumber(partNumber)
                        .eTag(uploadPartResponse.eTag())
                        .build();
                completedParts.add(part);

                System.out.println("Uploaded part " + partNumber + " with size " + bytesRead + " bytes");
                partNumber++;
            }

            // Step 3: Complete the multipart upload
            CompletedMultipartUpload completedMultipartUpload = CompletedMultipartUpload.builder()
                    .parts(completedParts)
                    .build();

            CompleteMultipartUploadRequest completeRequest = CompleteMultipartUploadRequest.builder()
                    .bucket(bucketName)
                    .key(key)
                    .uploadId(uploadId)
                    .multipartUpload(completedMultipartUpload)
                    .build();

            CompleteMultipartUploadResponse completeResponse = s3Client.completeMultipartUpload(completeRequest);
            System.out.println("Multipart upload completed. Object URL: " + completeResponse.location());

        } catch (Exception e) {
            // If an error occurs, abort the multipart upload
            System.err.println("Error during multipart upload: " + e.getMessage());
            AbortMultipartUploadRequest abortRequest = AbortMultipartUploadRequest.builder()
                    .bucket(bucketName)
                    .key(key)
                    .uploadId(uploadId)
                    .build();
            s3Client.abortMultipartUpload(abortRequest);
            System.err.println("Multipart upload aborted");
        } finally {
            try {
                inputStream.close();
            } catch (IOException e) {
                System.err.println("Error closing input stream: " + e.getMessage());
            }
        }
    } finally {
        s3Client.close();
    }
}

/**
 * Reads from the input stream into the buffer, attempting to fill the buffer completely
 * or until the end of the stream is reached.
 *
 * @param inputStream the input stream to read from
 * @param buffer      the buffer to fill
 * @return the number of bytes read, or -1 if the end of the stream is reached before any bytes are read
 * @throws IOException if an I/O error occurs
 */
private static int readFullyOrToEnd(InputStream inputStream, byte[] buffer) throws IOException {
    int totalBytesRead = 0;
    int bytesRead;

    while (totalBytesRead < buffer.length) {
        bytesRead = inputStream.read(buffer, totalBytesRead, buffer.length - totalBytesRead);
        if (bytesRead == -1) {
            break; // End of stream
        }
        totalBytesRead += bytesRead;
    }

    return totalBytesRead > 0 ? totalBytesRead : -1;
}
```

**Note**  
For most use cases, we recommend using the asynchronous client API for streams of unknown size. This approach enables parallel transfers and offers a simpler programming interface, because the SDK handles segmenting the stream into multipart chunks when the stream is large.   
Both the standard S3 asynchronous client with multipart enabled and the AWS CRT-based S3 client implement this approach. We show examples of this approach in the following section.

#### Using the asynchronous API
<a name="s3-stream-upload-unknown-async-client"></a>

You can provide `null` for the `contentLength` argument of the `fromInputStream(InputStream inputStream, Long contentLength, ExecutorService executor)` method.

**Example using the AWS CRT-based asynchronous client:**  

```
public PutObjectResponse crtClient_stream_unknown_size(String bucketName, String key, InputStream inputStream) {

    S3AsyncClient s3AsyncClient = S3AsyncClient.crtCreate();
    ExecutorService executor = Executors.newSingleThreadExecutor();
    AsyncRequestBody body = AsyncRequestBody.fromInputStream(inputStream, null, executor);  // 'null' indicates that the
                                                                                            // content length is unknown.
    CompletableFuture<PutObjectResponse> responseFuture =
            s3AsyncClient.putObject(r -> r.bucket(bucketName).key(key), body)
                    .exceptionally(e -> {
                        if (e != null){
                            logger.error(e.getMessage(), e);
                        }
                        return null;
                    });

    PutObjectResponse response = responseFuture.join(); // Wait for the response.
    executor.shutdown();
    return response;
}
```

**Example using the standard asynchronous client with multipart enabled:**  

```
public PutObjectResponse asyncClient_multipart_stream_unknown_size(String bucketName, String key, InputStream inputStream) {

    S3AsyncClient s3AsyncClient = S3AsyncClient.builder().multipartEnabled(true).build();
    ExecutorService executor = Executors.newSingleThreadExecutor();
    AsyncRequestBody body = AsyncRequestBody.fromInputStream(inputStream, null, executor); // 'null' indicates that the
                                                                                           // content length is unknown.
    CompletableFuture<PutObjectResponse> responseFuture =
            s3AsyncClient.putObject(r -> r.bucket(bucketName).key(key), body)
                    .exceptionally(e -> {
                        if (e != null) {
                            logger.error(e.getMessage(), e);
                        }
                        return null;
                    });

    PutObjectResponse response = responseFuture.join(); // Wait for the response.
    executor.shutdown();
    return response;
}
```

# Work with Amazon S3 pre-signed URLs
<a name="examples-s3-presign"></a>

Pre-signed URLs provide temporary access to private S3 objects without requiring users to have AWS credentials or permissions. 

For example, assume Alice has access to an S3 object, and she wants to temporarily share access to that object with Bob. Alice can generate a pre-signed GET request to share with Bob so that he can download the object without requiring access to Alice’s credentials. You can generate pre-signed URLs for HTTP GET and for HTTP PUT requests.

## Generate a pre-signed URL for an object, then download it (GET request)
<a name="get-presignedobject"></a>

The following example consists of two parts.
+ Part 1: Alice generates the pre-signed URL for an object.
+ Part 2: Bob downloads the object by using the pre-signed URL.

### Part 1: Generate the URL
<a name="get-presigned-object-part1"></a>

Alice already has an object in an S3 bucket. She uses the following code to generate a URL string that Bob can use in a subsequent GET request.

#### Imports
<a name="get-presigned-example-imports"></a>

```
import com.example.s3.util.PresignUrlUtils;
import org.slf4j.Logger;
import software.amazon.awssdk.http.HttpExecuteRequest;
import software.amazon.awssdk.http.HttpExecuteResponse;
import software.amazon.awssdk.http.SdkHttpClient;
import software.amazon.awssdk.http.SdkHttpMethod;
import software.amazon.awssdk.http.SdkHttpRequest;
import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;
import software.amazon.awssdk.services.s3.presigner.model.PresignedGetObjectRequest;
import software.amazon.awssdk.utils.IoUtils;

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URISyntaxException;
import java.net.URL;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Paths;
import java.time.Duration;
import java.util.UUID;
```

```
    /* Create a pre-signed URL to download an object in a subsequent GET request. */
    public String createPresignedGetUrl(String bucketName, String keyName) {
        try (S3Presigner presigner = S3Presigner.create()) {

            GetObjectRequest objectRequest = GetObjectRequest.builder()
                    .bucket(bucketName)
                    .key(keyName)
                    .build();

            GetObjectPresignRequest presignRequest = GetObjectPresignRequest.builder()
                    .signatureDuration(Duration.ofMinutes(10))  // The URL will expire in 10 minutes.
                    .getObjectRequest(objectRequest)
                    .build();

            PresignedGetObjectRequest presignedRequest = presigner.presignGetObject(presignRequest);
            logger.info("Presigned URL: [{}]", presignedRequest.url().toString());
            logger.info("HTTP method: [{}]", presignedRequest.httpRequest().method());

            return presignedRequest.url().toExternalForm();
        }
    }
```

### Part 2: Download the object
<a name="get-presigned-object-part2"></a>

Bob uses one of the following three code options to download the object. Alternatively, he could use a browser to perform the GET request.

#### Use the JDK `HttpURLConnection` class (available since Java 1.1)
<a name="get-presigned-example-useHttpUrlConnection"></a>

```
    /* Use the JDK HttpURLConnection (since v1.1) class to do the download. */
    public byte[] useHttpUrlConnectionToGet(String presignedUrlString) {
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream(); // Capture the response body to a byte array.

        try {
            URL presignedUrl = new URL(presignedUrlString);
            HttpURLConnection connection = (HttpURLConnection) presignedUrl.openConnection();
            connection.setRequestMethod("GET");
            // Download the result of executing the request.
            try (InputStream content = connection.getInputStream()) {
                IoUtils.copy(content, byteArrayOutputStream);
            }
            logger.info("HTTP response code is " + connection.getResponseCode());

        } catch (S3Exception | IOException e) {
            logger.error(e.getMessage(), e);
        }
        return byteArrayOutputStream.toByteArray();
    }
```

#### Use the JDK `HttpClient` class (available since Java 11)
<a name="get-presigned-example-useHttpClient"></a>

```
    /* Use the JDK HttpClient (since v11) class to do the download. */
    public byte[] useHttpClientToGet(String presignedUrlString) {
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream(); // Capture the response body to a byte array.

        HttpRequest.Builder requestBuilder = HttpRequest.newBuilder();
        HttpClient httpClient = HttpClient.newHttpClient();
        try {
            URL presignedUrl = new URL(presignedUrlString);
            HttpResponse<InputStream> response = httpClient.send(requestBuilder
                            .uri(presignedUrl.toURI())
                            .GET()
                            .build(),
                    HttpResponse.BodyHandlers.ofInputStream());

            IoUtils.copy(response.body(), byteArrayOutputStream);

            logger.info("HTTP response code is " + response.statusCode());

        } catch (URISyntaxException | InterruptedException | IOException e) {
            logger.error(e.getMessage(), e);
        }
        return byteArrayOutputStream.toByteArray();
    }
```

#### Use `SdkHttpClient` from the SDK for Java
<a name="get-presigned-example-useSdkHttpClient"></a>

```
    /* Use the AWS SDK for Java SdkHttpClient class to do the download. */
    public byte[] useSdkHttpClientToGet(String presignedUrlString) {

        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream(); // Capture the response body to a byte array.
        try {
            URL presignedUrl = new URL(presignedUrlString);
            SdkHttpRequest request = SdkHttpRequest.builder()
                    .method(SdkHttpMethod.GET)
                    .uri(presignedUrl.toURI())
                    .build();

            HttpExecuteRequest executeRequest = HttpExecuteRequest.builder()
                    .request(request)
                    .build();

            try (SdkHttpClient sdkHttpClient = ApacheHttpClient.create()) {
                HttpExecuteResponse response = sdkHttpClient.prepareRequest(executeRequest).call();
                response.responseBody().ifPresentOrElse(
                        abortableInputStream -> {
                            try {
                                IoUtils.copy(abortableInputStream, byteArrayOutputStream);
                            } catch (IOException e) {
                                throw new RuntimeException(e);
                            }
                        },
                        () -> logger.error("No response body."));

                logger.info("HTTP Response code is {}", response.httpResponse().statusCode());
            }
        } catch (URISyntaxException | IOException e) {
            logger.error(e.getMessage(), e);
        }
        return byteArrayOutputStream.toByteArray();
    }
```

See the [complete example](https://github.com/awsdocs/aws-doc-sdk-examples/blob/d73001daea05266eaa9e074ccb71b9383832369a/javav2/example_code/s3/src/main/java/com/example/s3/GeneratePresignedGetUrlAndRetrieve.java) and [test](https://github.com/awsdocs/aws-doc-sdk-examples/blob/d73001daea05266eaa9e074ccb71b9383832369a/javav2/example_code/s3/src/test/java/com/example/s3/presignurl/GeneratePresignedGetUrlTests.java) on GitHub.

## Generate a pre-signed URL for an upload, then upload a file (PUT request)
<a name="put-presignedobject"></a>

The following example consists of two parts.
+ Part 1: Alice generates the pre-signed URL to upload an object.
+ Part 2: Bob uploads a file by using the pre-signed URL.

### Part 1: Generate the URL
<a name="put-presigned-object-part1"></a>

Alice already has an S3 bucket. She uses the following code to generate a URL string that Bob can use in a subsequent PUT request.

#### Imports
<a name="put-presigned-example-imports"></a>

```
import com.example.s3.util.PresignUrlUtils;
import org.slf4j.Logger;
import software.amazon.awssdk.core.internal.sync.FileContentStreamProvider;
import software.amazon.awssdk.http.HttpExecuteRequest;
import software.amazon.awssdk.http.HttpExecuteResponse;
import software.amazon.awssdk.http.SdkHttpClient;
import software.amazon.awssdk.http.SdkHttpMethod;
import software.amazon.awssdk.http.SdkHttpRequest;
import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.PresignedPutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest;

import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.io.RandomAccessFile;
import java.net.HttpURLConnection;
import java.net.URISyntaxException;
import java.net.URL;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.time.Duration;
import java.util.Map;
import java.util.UUID;
```

```
    /* Create a presigned URL to use in a subsequent PUT request */
    public String createPresignedUrl(String bucketName, String keyName, Map<String, String> metadata) {
        try (S3Presigner presigner = S3Presigner.create()) {

            PutObjectRequest objectRequest = PutObjectRequest.builder()
                    .bucket(bucketName)
                    .key(keyName)
                    .metadata(metadata)
                    .build();

            PutObjectPresignRequest presignRequest = PutObjectPresignRequest.builder()
                    .signatureDuration(Duration.ofMinutes(10))  // The URL expires in 10 minutes.
                    .putObjectRequest(objectRequest)
                    .build();


            PresignedPutObjectRequest presignedRequest = presigner.presignPutObject(presignRequest);
            String myURL = presignedRequest.url().toString();
            logger.info("Presigned URL to upload a file to: [{}]", myURL);
            logger.info("HTTP method: [{}]", presignedRequest.httpRequest().method());

            return presignedRequest.url().toExternalForm();
        }
    }
```

### Part 2: Upload a file object
<a name="put-presigned-object-part2"></a>

Bob uses one of the following three code options to upload a file.

#### Use the JDK `HttpURLConnection` class (available since Java 1.1)
<a name="put-presigned-example-useHttpUrlConnection"></a>

```
    /* Use the JDK HttpURLConnection (since v1.1) class to do the upload. */
    public void useHttpUrlConnectionToPut(String presignedUrlString, File fileToPut, Map<String, String> metadata) {
        logger.info("Begin [{}] upload", fileToPut.toString());
        try {
            URL presignedUrl = new URL(presignedUrlString);
            HttpURLConnection connection = (HttpURLConnection) presignedUrl.openConnection();
            connection.setDoOutput(true);
            metadata.forEach((k, v) -> connection.setRequestProperty("x-amz-meta-" + k, v));
            connection.setRequestMethod("PUT");
            OutputStream out = connection.getOutputStream();

            try (RandomAccessFile file = new RandomAccessFile(fileToPut, "r");
                 FileChannel inChannel = file.getChannel()) {
                ByteBuffer buffer = ByteBuffer.allocate(8192); //Buffer size is 8k

                while (inChannel.read(buffer) > 0) {
                    buffer.flip();
                    for (int i = 0; i < buffer.limit(); i++) {
                        out.write(buffer.get());
                    }
                    buffer.clear();
                }
            } catch (IOException e) {
                logger.error(e.getMessage(), e);
            }

            out.close();
            logger.info("HTTP response code is " + connection.getResponseCode());

        } catch (S3Exception | IOException e) {
            logger.error(e.getMessage(), e);
        }
    }
```

#### Use the JDK `HttpClient` class (available since Java 11)
<a name="put-presigned-example-useHttpClient"></a>

```
    /* Use the JDK HttpClient (since v11) class to do the upload. */
    public void useHttpClientToPut(String presignedUrlString, File fileToPut, Map<String, String> metadata) {
        logger.info("Begin [{}] upload", fileToPut.toString());

        HttpRequest.Builder requestBuilder = HttpRequest.newBuilder();
        metadata.forEach((k, v) -> requestBuilder.header("x-amz-meta-" + k, v));

        HttpClient httpClient = HttpClient.newHttpClient();
        try {
            final HttpResponse<Void> response = httpClient.send(requestBuilder
                            .uri(new URL(presignedUrlString).toURI())
                            .PUT(HttpRequest.BodyPublishers.ofFile(Path.of(fileToPut.toURI())))
                            .build(),
                    HttpResponse.BodyHandlers.discarding());

            logger.info("HTTP response code is {}", response.statusCode());

        } catch (URISyntaxException | InterruptedException | IOException e) {
            logger.error(e.getMessage(), e);
        }
    }
```

#### Use `SdkHttpClient` from the SDK for Java
<a name="put-presigned-example-useSdkHttpClient"></a>

```
    /* Use the AWS SDK for Java V2 SdkHttpClient class to do the upload. */
    public void useSdkHttpClientToPut(String presignedUrlString, File fileToPut, Map<String, String> metadata) {
        logger.info("Begin [{}] upload", fileToPut.toString());

        try {
            URL presignedUrl = new URL(presignedUrlString);

            SdkHttpRequest.Builder requestBuilder = SdkHttpRequest.builder()
                    .method(SdkHttpMethod.PUT)
                    .uri(presignedUrl.toURI());
            // Add headers
            metadata.forEach((k, v) -> requestBuilder.putHeader("x-amz-meta-" + k, v));
            // Finish building the request.
            SdkHttpRequest request = requestBuilder.build();

            HttpExecuteRequest executeRequest = HttpExecuteRequest.builder()
                    .request(request)
                    .contentStreamProvider(new FileContentStreamProvider(fileToPut.toPath()))
                    .build();

            try (SdkHttpClient sdkHttpClient = ApacheHttpClient.create()) {
                HttpExecuteResponse response = sdkHttpClient.prepareRequest(executeRequest).call();
                logger.info("Response code: {}", response.httpResponse().statusCode());
            }
        } catch (URISyntaxException | IOException e) {
            logger.error(e.getMessage(), e);
        }
    }
```

See the [complete example](https://github.com/awsdocs/aws-doc-sdk-examples/blob/d73001daea05266eaa9e074ccb71b9383832369a/javav2/example_code/s3/src/main/java/com/example/s3/GeneratePresignedUrlAndPutFileWithMetadata.java) and [test](https://github.com/awsdocs/aws-doc-sdk-examples/blob/d73001daea05266eaa9e074ccb71b9383832369a/javav2/example_code/s3/src/test/java/com/example/s3/presignurl/GeneratePresignedPutUrlTests.java) on GitHub.

# Cross-Region access for Amazon S3
<a name="s3-cross-region"></a>

When you work with Amazon Simple Storage Service (Amazon S3) buckets, you usually know the AWS Region for the bucket. The Region you work with is determined when you create the S3 client. 

However, sometimes you might need to work with a specific bucket, but you don't know if it's located in the same Region that's set for the S3 client. 

Instead of making more calls to determine the bucket Region, you can use the SDK to enable access to S3 buckets across different Regions.

## Setup
<a name="s3-cross-region-setup"></a>

Support for cross-Region access became available with version `2.20.111` of the SDK. Use this version or a later one in your Maven build file for the `s3` dependency as shown in the following snippet.

```
<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>s3</artifactId>
  <version>2.27.21</version>
</dependency>
```

Next, when you create your S3 client, enable cross-Region access as shown in the following snippet. Cross-Region access is not enabled by default.

```
S3AsyncClient client = S3AsyncClient.builder()
                                    .crossRegionAccessEnabled(true)
                                    .build();
```

## How the SDK provides cross-Region access
<a name="s3-cross-region-routing"></a>

When you reference an existing bucket in a request, such as when you use the `putObject` method, the SDK initiates a request to the Region configured for the client. 

If the bucket does not exist in that specific Region, the error response includes the actual Region where the bucket resides. The SDK then uses the correct Region in a second request.

To optimize future requests to the same bucket, the SDK caches this Region mapping in the client.

## Considerations
<a name="s3-cross-region-considerations"></a>

When you enable cross-Region bucket access, be aware that the first API call might result in increased latency if the bucket isn't in the client's configured Region. However, subsequent calls benefit from cached Region information, resulting in improved performance.

Enabling cross-Region access does not change access permissions. The user must still be authorized to access the bucket in whichever Region it resides.

# Data integrity protection with checksums
<a name="s3-checksums"></a>

Amazon Simple Storage Service (Amazon S3) provides the ability to specify a checksum when you upload an object. When you specify a checksum, it is stored with the object and can be validated when the object is downloaded.

Checksums provide an additional layer of data integrity when you transfer files. With checksums, you can verify data consistency by confirming that the received file matches the original file. For more information about checksums with Amazon S3, see the [Amazon Simple Storage Service User Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html) including the [supported algorithms](https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html#using-additional-checksums).

You have the flexibility to choose the algorithm that best fits your needs and let the SDK calculate the checksum. Alternatively, you can provide a pre-computed checksum value by using one of the supported algorithms. 

**Note**  
Beginning with version 2.30.0 of the AWS SDK for Java 2.x, the SDK provides default integrity protections by automatically calculating a `CRC32` checksum for uploads. The SDK calculates this checksum if you don't provide a precalculated checksum value or if you don't specify an algorithm that the SDK should use to calculate a checksum.  
The SDK also provides global settings for data integrity protections that you can set externally, which you can read about in the [AWS SDKs and Tools Reference Guide](https://docs.aws.amazon.com/sdkref/latest/guide/feature-dataintegrity.html).

We discuss checksums in two request phases: uploading an object and downloading an object. 

## Upload an object
<a name="use-service-S3-checksum-upload"></a>

When you upload an object with the `putObject` method and provide a checksum algorithm, the SDK computes the checksum for the specified algorithm.

The following code snippet shows a request to upload an object with a `SHA256` checksum. When the SDK sends the request, it calculates the `SHA256` checksum and uploads the object. Amazon S3 validates the integrity of the content by calculating the checksum and comparing it to the checksum provided by the SDK. Amazon S3 then stores the checksum with the object.

```
    public void putObjectWithChecksum() {
        s3Client.putObject(b -> b
                .bucket(bucketName)
                .key(key)
                .checksumAlgorithm(ChecksumAlgorithm.SHA256),
            RequestBody.fromString("This is a test"));
    }
```

If you don't provide a checksum algorithm with the request, the checksum behavior varies depending on the version of the SDK that you use as shown in the following table.

**Checksum behavior when no checksum algorithm is provided**


| Java SDK version | Checksum behavior | 
| --- | --- | 
| earlier than 2.30.0 | The SDK doesn't automatically calculate a CRC-based checksum and provide it in the request. | 
| 2.30.0 or later | The SDK uses the `CRC32` algorithm to calculate the checksum and provides it in the request. Amazon S3 validates the integrity of the transfer by computing its own `CRC32` checksum and compares it to the checksum provided by the SDK. If the checksums match, the checksum is saved with the object. | 
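
To make the table concrete, the `CRC32` value the SDK computes for a payload can be reproduced with the JDK alone. The Base64-of-four-big-endian-bytes representation used here matches how Amazon S3 reports `CRC32` checksum values; the helper class name is ours, for illustration:

```java
import java.nio.ByteBuffer;
import java.util.Base64;
import java.util.zip.CRC32;

public class Crc32Demo {
    // Computes the CRC32 checksum of the payload and returns it Base64-encoded
    // as four big-endian bytes -- the textual form S3 uses for CRC32 values.
    public static String crc32Base64(byte[] payload) {
        CRC32 crc = new CRC32();
        crc.update(payload);
        byte[] bytes = ByteBuffer.allocate(4).putInt((int) crc.getValue()).array();
        return Base64.getEncoder().encodeToString(bytes);
    }
}
```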

### Use a pre-calculated checksum value
<a name="use-service-S3-checksum-upload-pre"></a>

A pre-calculated checksum value provided with the request disables automatic computation by the SDK and uses the provided value instead.

The following example shows a request with a pre-calculated SHA256 checksum.

```
    public void putObjectWithPrecalculatedChecksum(String filePath) {
        String checksum = calculateChecksum(filePath, "SHA-256");

        s3Client.putObject((b -> b
                .bucket(bucketName)
                .key(key)
                .checksumSHA256(checksum)),
            RequestBody.fromFile(Paths.get(filePath)));
    }
```

If Amazon S3 determines the checksum value is incorrect for the specified algorithm, the service returns an error response.
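
The `calculateChecksum` helper in the previous snippet isn't shown in this guide. A minimal sketch, assuming the checksum must be returned Base64-encoded (the format Amazon S3 expects for pre-calculated checksum values), might look like the following:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class ChecksumUtil {
    // Computes the named digest (for example "SHA-256") over the file's bytes
    // and returns it Base64-encoded.
    public static String calculateChecksum(String filePath, String algorithm) {
        try (InputStream in = Files.newInputStream(Path.of(filePath))) {
            MessageDigest digest = MessageDigest.getInstance(algorithm);
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                digest.update(buffer, 0, read);
            }
            return Base64.getEncoder().encodeToString(digest.digest());
        } catch (IOException | NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }
}
```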

### Multipart uploads
<a name="use-service-S3-checksum-upload-multi"></a>

You can also use checksums with multipart uploads.

The SDK for Java 2.x provides two options for using checksums with multipart uploads. The first option uses the `S3TransferManager`.

The following transfer manager example specifies the SHA1 algorithm for the upload.

```
    public void multipartUploadWithChecksumTm(String filePath) {
        S3TransferManager transferManager = S3TransferManager.create();
        UploadFileRequest uploadFileRequest = UploadFileRequest.builder()
            .putObjectRequest(b -> b
                .bucket(bucketName)
                .key(key)
                .checksumAlgorithm(ChecksumAlgorithm.SHA1))
            .source(Paths.get(filePath))
            .build();
        FileUpload fileUpload = transferManager.uploadFile(uploadFileRequest);
        fileUpload.completionFuture().join();
        transferManager.close();
    }
```

If you don't provide a checksum algorithm when using the transfer manager for uploads, the SDK automatically calculates a checksum based on the `CRC32` algorithm. This behavior applies to all versions of the SDK.

The second option uses the [`S3Client` API](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Client.html) (or the [`S3AsyncClient` API](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3AsyncClient.html)) to perform the multipart upload. If you specify a checksum with this approach, you must specify the algorithm to use on the initiation of the upload. You must also specify the algorithm for each part request and provide the checksum calculated for each part after it is uploaded.

```
    public void multipartUploadWithChecksumS3Client(String filePath) {
        ChecksumAlgorithm algorithm = ChecksumAlgorithm.CRC32;

        // Initiate the multipart upload.
        CreateMultipartUploadResponse createMultipartUploadResponse = s3Client.createMultipartUpload(b -> b
            .bucket(bucketName)
            .key(key)
            .checksumAlgorithm(algorithm)); // Checksum specified on initiation.
        String uploadId = createMultipartUploadResponse.uploadId();

        // Upload the parts of the file.
        int partNumber = 1;
        List<CompletedPart> completedParts = new ArrayList<>();
        ByteBuffer bb = ByteBuffer.allocate(1024 * 1024 * 5); // 5 MB byte buffer

        try (RandomAccessFile file = new RandomAccessFile(filePath, "r")) {
            long fileSize = file.length();
            long position = 0;
            while (position < fileSize) {
                file.seek(position);
                long read = file.getChannel().read(bb);

                bb.flip(); // Swap position and limit before reading from the buffer.
                UploadPartRequest uploadPartRequest = UploadPartRequest.builder()
                    .bucket(bucketName)
                    .key(key)
                    .uploadId(uploadId)
                    .checksumAlgorithm(algorithm) // Checksum specified on each part.
                    .partNumber(partNumber)
                    .build();

                UploadPartResponse partResponse = s3Client.uploadPart(
                    uploadPartRequest,
                    RequestBody.fromByteBuffer(bb));

                CompletedPart part = CompletedPart.builder()
                    .partNumber(partNumber)
                    .checksumCRC32(partResponse.checksumCRC32()) // Provide the calculated checksum.
                    .eTag(partResponse.eTag())
                    .build();
                completedParts.add(part);

                bb.clear();
                position += read;
                partNumber++;
            }
        } catch (IOException e) {
            System.err.println(e.getMessage());
        }

        // Complete the multipart upload.
        s3Client.completeMultipartUpload(b -> b
            .bucket(bucketName)
            .key(key)
            .uploadId(uploadId)
            .multipartUpload(CompletedMultipartUpload.builder().parts(completedParts).build()));
    }
```

[Code for the complete examples](https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/main/java/com/example/s3/PerformMultiPartUpload.java) and [tests](https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/test/java/com/example/s3/PerformMultiPartUploadTests.java) are in the GitHub code examples repository.

## Download an object
<a name="use-service-S3-checksum-download"></a>

When you use the [getObject](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Client.html#getObject(software.amazon.awssdk.services.s3.model.GetObjectRequest)) method to download an object, the SDK automatically validates the checksum when the `checksumMode` method of the `GetObjectRequest` builder is set to `ChecksumMode.ENABLED`.

The request in the following snippet directs the SDK to validate the checksum in the response by calculating the checksum and comparing the values.

```
    public GetObjectResponse getObjectWithChecksum() {
        return s3Client.getObject(b -> b
                        .bucket(bucketName)
                        .key(key)
                        .checksumMode(ChecksumMode.ENABLED))
                .response();
    }
```

**Note**  
If the object wasn't uploaded with a checksum, no validation takes place. 
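
Conceptually, the validation the SDK performs is equivalent to recomputing the digest over the received bytes and comparing it with the checksum stored alongside the object. A stdlib-only sketch of that comparison (the class and method names are hypothetical):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class IntegrityCheck {
    // Base64-encoded SHA-256 digest of the payload -- the form in which
    // Amazon S3 stores a SHA256 checksum with the object.
    public static String sha256Base64(byte[] payload) {
        try {
            return Base64.getEncoder().encodeToString(
                    MessageDigest.getInstance("SHA-256").digest(payload));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available.
        }
    }

    // Conceptual equivalent of the SDK's response validation: recompute the
    // digest over the received bytes and compare it to the stored checksum.
    public static boolean matches(byte[] received, String storedChecksum) {
        return sha256Base64(received).equals(storedChecksum);
    }
}
```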

## Other checksum calculation options
<a name="S3-checsum-calculation-options"></a>

**Note**  
To verify the data integrity of transmitted data and to identify any transmission errors, we encourage users to keep the SDK default settings for the checksum calculation options. By default, the SDK adds this important check for many S3 operations including `PutObject` and `GetObject`.

If your use of Amazon S3 requires minimal checksum validation, however, you can disable many checks by changing the default configuration settings. 

### Disable automatic checksum calculation unless it's required
<a name="S3-minimize-checksum-calc-global"></a>

You can disable automatic checksum calculation by the SDK for operations that support it, for example `PutObject` and `GetObject`. Some S3 operations, however, require a checksum calculation; you cannot disable checksum calculation for these operations.

The SDK provides separate settings for the calculation of a checksum for the payload of a request and for the payload of a response.

The following list describes the settings you can use to minimize checksum calculations at the different scopes.
+ **All applications scope**—By changing the settings in environment variables or in a profile in the shared AWS `config` and `credentials` files, all applications can use these settings. These settings affect all service clients in all AWS SDK applications unless overridden at the application or service client scope.
  + Add the settings in a profile:

    ```
    [default]
    request_checksum_calculation = WHEN_REQUIRED
    response_checksum_validation = WHEN_REQUIRED
    ```
  + Add environment variables:

    ```
    AWS_REQUEST_CHECKSUM_CALCULATION=WHEN_REQUIRED
    AWS_RESPONSE_CHECKSUM_VALIDATION=WHEN_REQUIRED
    ```
+ **Current application scope**—You can set the Java system property `aws.requestChecksumCalculation` to `WHEN_REQUIRED` to limit checksum calculation. The corresponding system property for responses is `aws.responseChecksumValidation`.

  These settings affect all SDK service clients in the application unless overridden during service client creation.

  Set the system property at the start of your application:

  ```
  import software.amazon.awssdk.core.SdkSystemSetting;
  import software.amazon.awssdk.services.s3.S3Client;
  
  class DemoClass {
      public static void main(String[] args) {
  
          System.setProperty(SdkSystemSetting.AWS_REQUEST_CHECKSUM_CALCULATION.property(), // Resolves to "aws.requestChecksumCalculation".
                  "WHEN_REQUIRED");
          System.setProperty(SdkSystemSetting.AWS_RESPONSE_CHECKSUM_VALIDATION.property(), // Resolves to "aws.responseChecksumValidation".
                  "WHEN_REQUIRED");
  
          S3Client s3Client = S3Client.builder().build();
  
          // Use s3Client.
      }
  }
  ```
+ **Single S3 service client scope**—You can configure a single S3 service client to calculate the minimum amount of checksums using builder methods:

  ```
  import software.amazon.awssdk.core.checksums.RequestChecksumCalculation;
  import software.amazon.awssdk.core.checksums.ResponseChecksumValidation;
  import software.amazon.awssdk.services.s3.S3Client;
  
  public class RequiredChecksums {
      public static void main(String[] args) {
          S3Client s3 = S3Client.builder()
                  .requestChecksumCalculation(RequestChecksumCalculation.WHEN_REQUIRED)
                  .responseChecksumValidation(ResponseChecksumValidation.WHEN_REQUIRED)
                  .build();
  
          // Use s3.
      }
  }
  ```

### Use the `LegacyMd5Plugin` for simplified MD5 compatibility
<a name="S3-checksum-legacy-md5"></a>

With the release of the default CRC32 checksum behavior in version 2.30.0, the SDK stopped calculating MD5 checksums for operations that require a checksum.

If you need legacy MD5 checksum behavior for S3 operations, you can use the `LegacyMd5Plugin`, which was released with version 2.31.32 of the SDK.

The `LegacyMd5Plugin` is particularly useful when you need to maintain compatibility with applications that depend on the legacy MD5 checksum behavior, especially when working with third-party S3-compatible storage providers like those used with S3A filesystem connectors (Apache Spark, Iceberg).

To use the `LegacyMd5Plugin`, add it to your S3 client builder:

```
// For synchronous S3 client.
S3Client s3Client = S3Client.builder()
                           .addPlugin(LegacyMd5Plugin.create())
                           .build();

// For asynchronous S3 client.
S3AsyncClient asyncClient = S3AsyncClient.builder()
                                       .addPlugin(LegacyMd5Plugin.create())
                                       .build();
```

If you want MD5 checksums added to the operations that require checksums, but want to skip the SDK default checksums for operations where they are optional, set the client builder options `requestChecksumCalculation` and `responseChecksumValidation` to `WHEN_REQUIRED`. With this configuration, the SDK adds default checksums only to operations that require them:

```
// Use the `LegacyMd5Plugin` with `requestChecksumCalculation` and `responseChecksumValidation` set to WHEN_REQUIRED.
S3AsyncClient asyncClient = S3AsyncClient.builder()
                                       .addPlugin(LegacyMd5Plugin.create())
                                       .requestChecksumCalculation(RequestChecksumCalculation.WHEN_REQUIRED)
                                       .responseChecksumValidation(ResponseChecksumValidation.WHEN_REQUIRED)
                                       .build();
```

This configuration is particularly useful when working with third-party S3-compatible storage systems that may not fully support the newer checksum algorithms but still require MD5 checksums for certain operations.
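
For reference, the legacy value that the plugin supplies is the Base64-encoded MD5 digest of the request payload, the same form used by the classic `Content-MD5` header. You can compute one with the JDK alone (the class name here is ours, for illustration):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class Md5Demo {
    // Base64-encoded MD5 digest of the payload, as carried in Content-MD5.
    public static String contentMd5(byte[] payload) {
        try {
            return Base64.getEncoder().encodeToString(
                    MessageDigest.getInstance("MD5").digest(payload));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always available.
        }
    }
}
```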

# Use a performant S3 client: AWS CRT-based S3 client
<a name="crt-based-s3-client"></a>

The AWS CRT-based S3 client—built on top of the [AWS Common Runtime (CRT)](https://docs.aws.amazon.com/sdkref/latest/guide/common-runtime.html)—is an alternative S3 asynchronous client. It transfers objects to and from Amazon Simple Storage Service (Amazon S3) with enhanced performance and reliability by automatically using Amazon S3's [multipart upload API](https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html) and [byte-range fetches](https://docs.aws.amazon.com/AmazonS3/latest/userguide/optimizing-performance-guidelines.html#optimizing-performance-guidelines-get-range). 

The AWS CRT-based S3 client improves transfer reliability in case there is a network failure. Reliability is improved by retrying individual failed parts of a file transfer without restarting the transfer from the beginning.

In addition, the AWS CRT-based S3 client offers enhanced connection pooling and Domain Name System (DNS) load balancing, which also improves throughput.

You can use the AWS CRT-based S3 client in place of the SDK's standard S3 asynchronous client and take advantage of its improved throughput right away.

**Important**  
The AWS CRT-based S3 client does not currently support [SDK metrics collection](metrics.md) at the client level nor at the request level.

**AWS CRT-based components in the SDK**

The AWS CRT-based *S3* client, described in this topic, and the AWS CRT-based *HTTP* client are different components in the SDK. 

The **AWS CRT-based S3 client** is an implementation of the [S3AsyncClient](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3AsyncClient.html) interface and is used for working with the Amazon S3 service. It is an alternative to the Java-based implementation of the `S3AsyncClient` interface and offers several benefits.

The [AWS CRT-based HTTP client](http-configuration-crt.md) is an implementation of the [SdkAsyncHttpClient](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/http/async/SdkAsyncHttpClient.html) interface and is used for general HTTP communication. It is an alternative to the Netty implementation of the `SdkAsyncHttpClient` interface and offers several advantages.

Although both components use libraries from the [AWS Common Runtime](https://docs.aws.amazon.com/sdkref/latest/guide/common-runtime.html), the AWS CRT-based S3 client uses the [aws-c-s3 library](https://github.com/awslabs/aws-c-s3) and supports the [S3 multipart upload API](https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html) features. Since the AWS CRT-based HTTP client is meant for general purpose use, it does not support the S3 multipart upload API features.

## Add dependencies to use the AWS CRT-based S3 client
<a name="crt-based-s3-client-depend"></a>

To use the AWS CRT-based S3 client, add the following two dependencies to your Maven project file. The example shows the minimum versions to use. Search the Maven central repository for the most recent versions of the [s3](https://central.sonatype.com/artifact/software.amazon.awssdk/s3) and [aws-crt](https://central.sonatype.com/artifact/software.amazon.awssdk.crt/aws-crt) artifacts.

```
<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>s3</artifactId>
  <version>2.27.21</version>
</dependency>
<dependency>
  <groupId>software.amazon.awssdk.crt</groupId>
  <artifactId>aws-crt</artifactId>
  <version>0.30.11</version>
</dependency>
```

## Create an instance of the AWS CRT-based S3 client
<a name="crt-based-s3-client-create"></a>

 Create an instance of the AWS CRT-based S3 client with default settings as shown in the following code snippet.

```
S3AsyncClient s3AsyncClient = S3AsyncClient.crtCreate();
```

To configure the client, use the AWS CRT client builder. You can switch from the standard S3 asynchronous client to the AWS CRT-based client by changing the builder method.

```
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;


S3AsyncClient s3AsyncClient = 
        S3AsyncClient.crtBuilder()
                     .credentialsProvider(DefaultCredentialsProvider.create())
                     .region(Region.US_WEST_2)
                     .targetThroughputInGbps(20.0)
                     .minimumPartSizeInBytes(8 * 1024 * 1024L)
                     .build();
```

**Note**  
Some of the settings in the standard builder might not be currently supported in the AWS CRT client builder. Get the standard builder by calling `S3AsyncClient#builder()`.

## Use the AWS CRT-based S3 client
<a name="crt-based-s3-client-use"></a>

Use the AWS CRT-based S3 client to call Amazon S3 API operations. The following example demonstrates the [PutObject](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3AsyncClient.html#putObject(java.util.function.Consumer,software.amazon.awssdk.core.async.AsyncRequestBody)) and [GetObject](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3AsyncClient.html#getObject(java.util.function.Consumer,software.amazon.awssdk.core.async.AsyncResponseTransformer)) operations available through the AWS SDK for Java.

```
import java.nio.file.Paths;
import software.amazon.awssdk.core.async.AsyncRequestBody;
import software.amazon.awssdk.core.async.AsyncResponseTransformer;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;
import software.amazon.awssdk.services.s3.model.PutObjectResponse;


S3AsyncClient s3Client = S3AsyncClient.crtCreate();

// Upload a local file to Amazon S3.
PutObjectResponse putObjectResponse = 
      s3Client.putObject(req -> req.bucket(<BUCKET_NAME>)
                                   .key(<KEY_NAME>),
                        AsyncRequestBody.fromFile(Paths.get(<FILE_NAME>)))
              .join();

// Download an object from Amazon S3 to a local file.
GetObjectResponse getObjectResponse = 
      s3Client.getObject(req -> req.bucket(<BUCKET_NAME>)
                                   .key(<KEY_NAME>),
                        AsyncResponseTransformer.toFile(Paths.get(<FILE_NAME>)))
              .join();
```

## Uploading streams of unknown size
<a name="crt-stream-unknown-size"></a>

One significant advantage of the AWS CRT-based S3 client is its ability to efficiently handle input streams of unknown size. This is particularly useful when you need to upload data from a source whose total size cannot be determined in advance.

```
public PutObjectResponse crtClient_stream_unknown_size(String bucketName, String key, InputStream inputStream) {

    S3AsyncClient s3AsyncClient = S3AsyncClient.crtCreate();
    ExecutorService executor = Executors.newSingleThreadExecutor();
    AsyncRequestBody body = AsyncRequestBody.fromInputStream(inputStream, null, executor);  // 'null' indicates that the
                                                                                            // content length is unknown.
    CompletableFuture<PutObjectResponse> responseFuture =
            s3AsyncClient.putObject(r -> r.bucket(bucketName).key(key), body)
                    .exceptionally(e -> {
                        if (e != null){
                            logger.error(e.getMessage(), e);
                        }
                        return null;
                    });

    PutObjectResponse response = responseFuture.join(); // Wait for the response.
    executor.shutdown();
    return response;
}
```

This capability helps avoid common issues with traditional uploads where an incorrect content length specification can lead to truncated objects or failed uploads.

## Configuration limitations
<a name="crt-based-s3-client-limitations"></a>

The AWS CRT-based S3 client and Java-based S3 async client [provide comparable features](examples-s3.md#s3-clients), with the AWS CRT-based S3 client offering a performance edge. However, the AWS CRT-based S3 client lacks configuration settings that the Java-based S3 async client has. These settings include:
+ *Client-level configuration:* API call attempt timeout, compression execution interceptors, metric publishers, custom execution attributes, custom advanced options, custom scheduled executor service, custom headers
+ *Request-level configuration:* custom signers, API call attempt timeout

For a full listing of the configuration differences, see the API reference.



# Configure the Java-based S3 async client to use parallel transfers
<a name="s3-async-client-multipart"></a>

Since version 2.27.5, the standard Java-based S3 async client supports automatic parallel transfers (multipart uploads and downloads). You configure support for parallel transfers when you create the Java-based S3 async client. 

This section shows how to enable parallel transfers and how to customize the configuration.

## Create an instance of `S3AsyncClient`
<a name="s3-async-client-multipart-create"></a>

When you create an `S3AsyncClient` instance without calling any of the `multipart*` methods on the [builder](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3AsyncClientBuilder.html), parallel transfers are not enabled. Each of the following statements creates a Java-based S3 async client without support for multipart uploads and downloads.

### Create *without* multipart support
<a name="s3-async-client-mp-off"></a>

**Example**  

```
import software.amazon.awssdk.auth.credentials.ProcessCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;


S3AsyncClient s3Client = S3AsyncClient.create();

S3AsyncClient s3Client2 = S3AsyncClient.builder().build();

S3AsyncClient s3Client3 = S3AsyncClient.builder()
        .credentialsProvider(ProcessCredentialsProvider.builder().build())
        .region(Region.EU_NORTH_1)
        .build();
```

### Create *with* multipart support
<a name="s3-async-client-mp-on"></a>

To enable parallel transfers with default settings, call the `multipartEnabled` method on the builder and pass in `true`, as shown in the following example.

**Example**  

```
S3AsyncClient s3AsyncClient2 = S3AsyncClient.builder()
        .multipartEnabled(true)
        .build();
```

The default value for both the `thresholdInBytes` and `minimumPartSizeInBytes` settings is 8 MiB.

If you customize the multipart settings, parallel transfers are enabled automatically, as shown in the following example.

**Example**  

```
import software.amazon.awssdk.services.s3.S3AsyncClient;
import static software.amazon.awssdk.transfer.s3.SizeConstant.MB;


S3AsyncClient s3AsyncClient2 = S3AsyncClient.builder()
        .multipartConfiguration(b -> b
                .thresholdInBytes(16 * MB)
                .minimumPartSizeInBytes(10 * MB))
        .build();
```

## Upload streams of unknown size
<a name="java-async-client-stream-unknown-size"></a>

The Java-based S3 asynchronous client with multipart enabled can efficiently handle input streams where the total size is not known in advance:

```
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.awssdk.core.async.AsyncRequestBody;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.PutObjectResponse;
import java.io.InputStream;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;


public PutObjectResponse asyncClient_multipart_stream_unknown_size(String bucketName, String key, InputStream inputStream) {

    S3AsyncClient s3AsyncClient = S3AsyncClient.builder().multipartEnabled(true).build();
    ExecutorService executor = Executors.newSingleThreadExecutor();
    AsyncRequestBody body = AsyncRequestBody.fromInputStream(inputStream, null, executor); // 'null' indicates that the
                                                                                           // content length is unknown.
    CompletableFuture<PutObjectResponse> responseFuture =
            s3AsyncClient.putObject(r -> r.bucket(bucketName).key(key), body)
                    .exceptionally(e -> {                 // The throwable is always non-null here.
                        logger.error(e.getMessage(), e);
                        return null;
                    });

    PutObjectResponse response = responseFuture.join(); // Wait for the response.
    executor.shutdown();
    return response;
}
```

This approach prevents issues that can occur when manually specifying an incorrect content length, such as truncated objects or failed uploads.

# Transfer files and directories with the Amazon S3 Transfer Manager
<a name="transfer-manager"></a>

The Amazon S3 Transfer Manager is an open source, high-level file transfer utility for the AWS SDK for Java 2.x. Use it to transfer files and directories to and from Amazon Simple Storage Service (Amazon S3).

When built on top of the [AWS CRT-based S3 client](crt-based-s3-client.md) or the [standard Java-based S3 async client with multipart enabled](s3-async-client-multipart.md), the S3 Transfer Manager can take advantage of performance improvements such as the [multipart upload API](https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html) and [byte-range fetches](https://docs.aws.amazon.com/AmazonS3/latest/userguide/optimizing-performance-guidelines.html#optimizing-performance-guidelines-get-range). 

With the S3 Transfer Manager, you can also monitor a transfer's progress in real time and pause the transfer for later execution.
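For example, an in-flight file upload can be paused and resumed later. The following sketch assumes an existing `transferManager` instance; the bucket name, key, and file paths are hypothetical placeholders. It uses the `pause` method on `FileUpload`, which returns a `ResumableFileUpload` that you can persist and later pass to `resumeUploadFile`.

```java
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.FileUpload;
import software.amazon.awssdk.transfer.s3.model.ResumableFileUpload;
import software.amazon.awssdk.transfer.s3.model.UploadFileRequest;
import java.nio.file.Paths;

public class PauseResumeSketch {
    public static void pauseAndResume(S3TransferManager transferManager) {
        UploadFileRequest uploadFileRequest = UploadFileRequest.builder()
                .putObjectRequest(b -> b.bucket("amzn-s3-demo-bucket").key("my-key")) // Hypothetical bucket/key.
                .source(Paths.get("/path/to/my-file.txt"))                            // Hypothetical file.
                .build();

        FileUpload fileUpload = transferManager.uploadFile(uploadFileRequest);

        // Pause the in-flight upload; the returned object captures the state
        // needed to pick up where the transfer left off.
        ResumableFileUpload resumableFileUpload = fileUpload.pause();

        // Optionally persist the state, for example across application restarts.
        resumableFileUpload.serializeToFile(Paths.get("/path/to/pause-state.json")); // Hypothetical path.

        // Resume the transfer, reusing already-uploaded parts where possible.
        FileUpload resumedUpload = transferManager.resumeUploadFile(resumableFileUpload);
        resumedUpload.completionFuture().join();
    }
}
```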

## Get started
<a name="transfer-manager-prerequisites"></a>

### Add dependencies to your build file
<a name="transfer-manager-add-dependency"></a>

To use the S3 Transfer Manager with enhanced multipart performance, configure your build file with the necessary dependencies.

------
#### [ Use the AWS CRT-based S3 client ]

```
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>software.amazon.awssdk</groupId>
            <artifactId>bom</artifactId>
            <version>2.27.21</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>s3-transfer-manager</artifactId>
    </dependency>
    <dependency>
        <groupId>software.amazon.awssdk.crt</groupId>
        <artifactId>aws-crt</artifactId>
        <version>0.29.14</version>
    </dependency>
</dependencies>
```

1 [Check for the latest version of the SDK BOM](https://central.sonatype.com/artifact/software.amazon.awssdk/bom). 2 [Check for the latest version of `aws-crt`](https://central.sonatype.com/artifact/software.amazon.awssdk.crt/aws-crt).

------
#### [ Use the Java-based S3 async client ]

```
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>software.amazon.awssdk</groupId>
            <artifactId>bom</artifactId>
            <version>2.27.21</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>s3-transfer-manager</artifactId>
    </dependency>
</dependencies>
```

1 [Check for the latest version of the SDK BOM](https://central.sonatype.com/artifact/software.amazon.awssdk/bom).

------

### Create an instance of the S3 Transfer Manager
<a name="transfer-manager-create"></a>

To enable parallel transfers, you must pass in an AWS CRT-based S3 client or a Java-based S3 async client with multipart enabled. The following examples show how to configure an S3 Transfer Manager with custom settings.

------
#### [ Use the AWS CRT-based S3 client ]

```
        S3AsyncClient s3AsyncClient = S3AsyncClient.crtBuilder()
                .credentialsProvider(DefaultCredentialsProvider.create())
                .region(Region.US_EAST_1)
                .targetThroughputInGbps(20.0)
                .minimumPartSizeInBytes(8 * MB)
                .build();

        S3TransferManager transferManager = S3TransferManager.builder()
                .s3Client(s3AsyncClient)
                .build();
```

------
#### [ Use the Java-based S3 async client ]

If the `aws-crt` dependency is not included in the build file, the S3 Transfer Manager is built on top of the standard Java-based S3 async client used in the SDK for Java 2.x. 

**Custom configuration of S3 client - requires multipart enabled**

```
        S3AsyncClient s3AsyncClient = S3AsyncClient.builder()
                .multipartEnabled(true)
                .credentialsProvider(DefaultCredentialsProvider.create())
                .region(Region.US_EAST_1)
                .build();

        S3TransferManager transferManager = S3TransferManager.builder()
                .s3Client(s3AsyncClient)
                .build();
```

**No configuration of S3 client - multipart support automatically enabled**

```
S3TransferManager transferManager = S3TransferManager.create();
```

------

## Upload a file to an S3 bucket
<a name="transfer-manager-upload"></a>

The following example shows how to upload a file, along with the optional use of a [LoggingTransferListener](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/progress/LoggingTransferListener.html), which logs the progress of the upload.

To upload a file to Amazon S3 using the S3 Transfer Manager, pass an [UploadFileRequest](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/model/UploadFileRequest.html) object to the `S3TransferManager`'s [uploadFile](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/S3TransferManager.html#uploadFile(software.amazon.awssdk.transfer.s3.model.UploadFileRequest)) method.

The [FileUpload](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/model/FileUpload.html) object returned from the `uploadFile` method represents the upload process. After the request finishes, the [CompletedFileUpload](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/model/CompletedFileUpload.html) object contains information about the upload.

```
    public void trackUploadFile(S3TransferManager transferManager, String bucketName,
                             String key, URI filePathURI) {
        UploadFileRequest uploadFileRequest = UploadFileRequest.builder()
                .putObjectRequest(b -> b.bucket(bucketName).key(key))
                .addTransferListener(LoggingTransferListener.create())  // Add listener.
                .source(Paths.get(filePathURI))
                .build();

        FileUpload fileUpload = transferManager.uploadFile(uploadFileRequest);

        fileUpload.completionFuture().join();
        /*
            The SDK provides a LoggingTransferListener implementation of the TransferListener interface.
            You can also implement the interface to provide your own logic.

            Configure log4J2 with settings such as the following.
                <Configuration status="WARN">
                    <Appenders>
                        <Console name="AlignedConsoleAppender" target="SYSTEM_OUT">
                            <PatternLayout pattern="%m%n"/>
                        </Console>
                    </Appenders>

                    <Loggers>
                        <logger name="software.amazon.awssdk.transfer.s3.progress.LoggingTransferListener" level="INFO" additivity="false">
                            <AppenderRef ref="AlignedConsoleAppender"/>
                        </logger>
                    </Loggers>
                </Configuration>

            Log4J2 logs the progress. The following is example output for a 21.3 MB file upload.
                Transfer initiated...
                |                    | 0.0%
                |====                | 21.1%
                |============        | 60.5%
                |====================| 100.0%
                Transfer complete!
        */
    }
```

### Imports
<a name="transfer-manager-upload-imports"></a>

```
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.CompletedFileUpload;
import software.amazon.awssdk.transfer.s3.model.FileUpload;
import software.amazon.awssdk.transfer.s3.model.UploadFileRequest;
import software.amazon.awssdk.transfer.s3.progress.LoggingTransferListener;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.URL;
import java.nio.file.Paths;
import java.util.UUID;
```

## Download a file from an S3 bucket
<a name="transfer-manager-download"></a>

The following example shows how to download a file, along with the optional use of a [LoggingTransferListener](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/progress/LoggingTransferListener.html), which logs the progress of the download.

To download an object from an S3 bucket using the S3 Transfer Manager, build a [DownloadFileRequest](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/model/DownloadFileRequest.html) object and pass it to the [downloadFile](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/S3TransferManager.html#downloadFile(software.amazon.awssdk.transfer.s3.model.DownloadFileRequest)) method.

The [FileDownload](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/model/FileDownload.html) object returned by the `S3TransferManager`'s `downloadFile` method represents the file transfer. After the download completes, the [CompletedFileDownload](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/model/CompletedFileDownload.html) contains access to information about the download.

```
    public void trackDownloadFile(S3TransferManager transferManager, String bucketName,
                             String key, String downloadedFileWithPath) {
        DownloadFileRequest downloadFileRequest = DownloadFileRequest.builder()
                .getObjectRequest(b -> b.bucket(bucketName).key(key))
                .addTransferListener(LoggingTransferListener.create())  // Add listener.
                .destination(Paths.get(downloadedFileWithPath))
                .build();

        FileDownload downloadFile = transferManager.downloadFile(downloadFileRequest);

        CompletedFileDownload downloadResult = downloadFile.completionFuture().join();
        /*
            The SDK provides a LoggingTransferListener implementation of the TransferListener interface.
            You can also implement the interface to provide your own logic.

            Configure log4J2 with settings such as the following.
                <Configuration status="WARN">
                    <Appenders>
                        <Console name="AlignedConsoleAppender" target="SYSTEM_OUT">
                            <PatternLayout pattern="%m%n"/>
                        </Console>
                    </Appenders>

                    <Loggers>
                        <logger name="software.amazon.awssdk.transfer.s3.progress.LoggingTransferListener" level="INFO" additivity="false">
                            <AppenderRef ref="AlignedConsoleAppender"/>
                        </logger>
                    </Loggers>
                </Configuration>

            Log4J2 logs the progress. The following is example output for a 21.3 MB file download.
                Transfer initiated...
                |=======             | 39.4%
                |===============     | 78.8%
                |====================| 100.0%
                Transfer complete!
        */
    }
```

### Imports
<a name="transfer-manager-download-import"></a>

```
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.awssdk.core.exception.SdkException;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.CompletedFileDownload;
import software.amazon.awssdk.transfer.s3.model.DownloadFileRequest;
import software.amazon.awssdk.transfer.s3.model.FileDownload;
import software.amazon.awssdk.transfer.s3.progress.LoggingTransferListener;

import java.io.IOException;
import java.net.URISyntaxException;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.UUID;
```

## Copy an Amazon S3 object to another bucket
<a name="transfer-manager-copy"></a>

The following example shows how to copy an object with the S3 Transfer Manager.

To begin the copy of an object from an S3 bucket to another bucket, create a basic [CopyObjectRequest](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/CopyObjectRequest.html) instance.

Next, wrap the basic `CopyObjectRequest` in a [CopyRequest](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/model/CopyRequest.html) that can be used by the S3 Transfer Manager. 

The `Copy` object returned by the `S3TransferManager`'s `copy` method represents the copy process. After the copy process completes, the [CompletedCopy](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/model/CompletedCopy.html) object contains details about the response.

```
    public String copyObject(S3TransferManager transferManager, String bucketName,
            String key, String destinationBucket, String destinationKey) {
        CopyObjectRequest copyObjectRequest = CopyObjectRequest.builder()
                .sourceBucket(bucketName)
                .sourceKey(key)
                .destinationBucket(destinationBucket)
                .destinationKey(destinationKey)
                .build();

        CopyRequest copyRequest = CopyRequest.builder()
                .copyObjectRequest(copyObjectRequest)
                .build();

        Copy copy = transferManager.copy(copyRequest);

        CompletedCopy completedCopy = copy.completionFuture().join();
        return completedCopy.response().copyObjectResult().eTag();
    }
```

**Note**  
To perform a cross-Region copy with the S3 Transfer Manager, enable `crossRegionAccessEnabled` on the AWS CRT-based S3 client builder as shown in the following snippet.  

```
S3AsyncClient s3AsyncClient = S3AsyncClient.crtBuilder()
                .crossRegionAccessEnabled(true)
                .build();

S3TransferManager transferManager = S3TransferManager.builder()
                .s3Client(s3AsyncClient)
                .build();
```

### Imports
<a name="transfer-manager-copy-import"></a>

```
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.model.CopyObjectRequest;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.CompletedCopy;
import software.amazon.awssdk.transfer.s3.model.Copy;
import software.amazon.awssdk.transfer.s3.model.CopyRequest;

import java.util.UUID;
```

## Upload a local directory to an S3 bucket
<a name="transfer-manager-upload_directory"></a>

The following example demonstrates how you can upload a local directory to S3.

Start by calling the [uploadDirectory](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/S3TransferManager.html#uploadDirectory(software.amazon.awssdk.transfer.s3.model.UploadDirectoryRequest)) method of the `S3TransferManager` instance, passing in an [UploadDirectoryRequest](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/model/UploadDirectoryRequest.html).

The [DirectoryUpload](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/model/DirectoryUpload.html) object represents the upload process, which generates a [CompletedDirectoryUpload](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/model/CompletedDirectoryUpload.html) when the request completes. The `CompletedDirectoryUpload` object contains information about the results of the transfer, including which files failed to transfer.

```
    public Integer uploadDirectory(S3TransferManager transferManager,
            URI sourceDirectory, String bucketName) {
        DirectoryUpload directoryUpload = transferManager.uploadDirectory(UploadDirectoryRequest.builder()
                .source(Paths.get(sourceDirectory))
                .bucket(bucketName)
                .build());

        CompletedDirectoryUpload completedDirectoryUpload = directoryUpload.completionFuture().join();
        completedDirectoryUpload.failedTransfers()
                .forEach(fail -> logger.warn("Object [{}] failed to transfer", fail.toString()));
        return completedDirectoryUpload.failedTransfers().size();
    }
```

### Imports
<a name="transfer-manager-upload_directory-import"></a>

```
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.awssdk.services.s3.model.ObjectIdentifier;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.CompletedDirectoryUpload;
import software.amazon.awssdk.transfer.s3.model.DirectoryUpload;
import software.amazon.awssdk.transfer.s3.model.UploadDirectoryRequest;

import java.net.URI;
import java.net.URISyntaxException;
import java.net.URL;
import java.nio.file.Paths;
import java.util.UUID;
```

## Download S3 bucket objects to a local directory
<a name="transfer-manager-download_directory"></a>

You can download the objects in an S3 bucket to a local directory as shown in the following example.

To download the objects in an S3 bucket to a local directory, begin by calling the [downloadDirectory](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/S3TransferManager.html#downloadDirectory(software.amazon.awssdk.transfer.s3.model.DownloadDirectoryRequest)) method of the Transfer Manager, passing in a [DownloadDirectoryRequest](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/model/DownloadDirectoryRequest.html).

The [DirectoryDownload](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/model/DirectoryDownload.html) object represents the download process, which generates a [CompletedDirectoryDownload](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/transfer/s3/model/CompletedDirectoryDownload.html) when the request completes. The `CompletedDirectoryDownload` object contains information about the results of the transfer, including which files failed to transfer.

```
    public Integer downloadObjectsToDirectory(S3TransferManager transferManager,
            URI destinationPathURI, String bucketName) {
        DirectoryDownload directoryDownload = transferManager.downloadDirectory(DownloadDirectoryRequest.builder()
                .destination(Paths.get(destinationPathURI))
                .bucket(bucketName)
                .build());
        CompletedDirectoryDownload completedDirectoryDownload = directoryDownload.completionFuture().join();

        completedDirectoryDownload.failedTransfers()
                .forEach(fail -> logger.warn("Object [{}] failed to transfer", fail.toString()));
        return completedDirectoryDownload.failedTransfers().size();
    }
```

### Imports
<a name="transfer-manager-download_directory-import"></a>

```
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.model.ObjectIdentifier;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.CompletedDirectoryDownload;
import software.amazon.awssdk.transfer.s3.model.DirectoryDownload;
import software.amazon.awssdk.transfer.s3.model.DownloadDirectoryRequest;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;
import java.util.stream.Collectors;
```

## See complete examples
<a name="transfer-manager-example-location"></a>

GitHub contains the [complete code](https://github.com/awsdocs/aws-doc-sdk-examples/tree/d73001daea05266eaa9e074ccb71b9383832369a/javav2/example_code/s3/src/main/java/com/example/s3/transfermanager) for all examples on this page.

# Work with S3 Event Notifications
<a name="examples-s3-event-notifications"></a>

To help you monitor activity in your buckets, Amazon S3 can send notifications when certain events happen. The Amazon S3 User Guide provides information on the [notifications that a bucket can send out](https://docs.aws.amazon.com/AmazonS3/latest/userguide/EventNotifications.html#notification-how-to-overview). 

You can set up a bucket to send events to four possible destinations using the SDK for Java: 
+ Amazon Simple Notification Service topics
+ Amazon Simple Queue Service queues
+ AWS Lambda functions
+ Amazon EventBridge

When you set up a bucket to send events to EventBridge, you can configure an EventBridge rule to fan out the same event to multiple destinations. When you configure your bucket to send directly to one of the first three destinations, you can specify only one destination type for each event.

In the next section, you'll see how to configure a bucket using the SDK for Java to send S3 Event Notifications in two ways: directly to an Amazon SQS queue and to EventBridge.

The last section shows you how to use the S3 Event Notifications API to work with notifications in an object-oriented way.

## Configure a bucket to send directly to a destination
<a name="s3-event-conf-bucket-direct"></a>

The following example configures a bucket to send notifications when *object created* events or *object tagging* events occur in the bucket.

```
static void processS3Events(String bucketName, String queueArn) {
    // Configure the bucket to send Object Created and Object Tagging notifications to an existing SQS queue.
    s3Client.putBucketNotificationConfiguration(b -> b
            .notificationConfiguration(ncb -> ncb
                    .queueConfigurations(qcb -> qcb
                            .events(Event.S3_OBJECT_CREATED, Event.S3_OBJECT_TAGGING)
                            .queueArn(queueArn)))
                    .bucket(bucketName)
    );
}
```

The code shown above sets up one queue to receive two types of events. Conveniently, the `queueConfigurations` method allows you to set multiple queue destinations if needed. Also, in the `notificationConfiguration` method you can set additional destinations, such as one or more Amazon SNS topics or one or more Lambda functions. The following snippet shows an example with two queues and three types of destinations.

```
s3Client.putBucketNotificationConfiguration(b -> b
                .notificationConfiguration(ncb -> ncb
                        .queueConfigurations(qcb -> qcb
                                .events(Event.S3_OBJECT_CREATED, Event.S3_OBJECT_TAGGING)
                                .queueArn(queueArn), 
                                qcb2 -> qcb2.<...>)
                        .topicConfigurations(tcb -> tcb.<...>)
                        .lambdaFunctionConfigurations(lfcb -> lfcb.<...>))
                        .bucket(bucketName)
        );
```

The Code Examples GitHub repository contains the [complete example](https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/main/java/com/example/s3/ProcessS3EventNotification.java) to send S3 event notifications directly to a queue.

## Configure a bucket to send to EventBridge
<a name="s3-event-conf-bucket-eventbridge"></a>

The following example configures a bucket to send notifications to EventBridge.

```
public static void setBucketNotificationToEventBridge(String bucketName) {
    // Enable the bucket to emit S3 Event Notifications to EventBridge.
    s3Client.putBucketNotificationConfiguration(b -> b
            .bucket(bucketName)
            .notificationConfiguration(b1 -> b1
                    .eventBridgeConfiguration(SdkBuilder::build)));
}
```

When you configure a bucket to send events to EventBridge, you indicate only the EventBridge destination, not the types of events or the ultimate targets that EventBridge dispatches to. You configure the ultimate targets and event types by using the Java SDK's EventBridge client.

The following code shows how to configure EventBridge to fan out *object created* events to a topic and a queue.

```
   public static String configureEventBridge(String topicArn, String queueArn) {
        try {
            // Create an EventBridge rule to route Object Created notifications.
            PutRuleRequest putRuleRequest = PutRuleRequest.builder()
                    .name(RULE_NAME)
                    .eventPattern("""
                            {
                              "source": ["aws.s3"],
                              "detail-type": ["Object Created"],
                              "detail": {
                                "bucket": {
                                  "name": ["%s"]
                                }
                              }
                            }
                            """.formatted(bucketName))
                    .build();

            // Add the rule to the default event bus.
            PutRuleResponse putRuleResponse = eventBridgeClient.putRule(putRuleRequest)
                    .whenComplete((r, t) -> {
                        if (t != null) {
                            logger.error("Error creating event bus rule: " + t.getMessage(), t);
                            throw new RuntimeException(t.getCause().getMessage(), t);
                        }
                        logger.info("Event bus rule creation request sent successfully. ARN is: {}", r.ruleArn());
                    }).join();

            // Add the existing SNS topic and SQS queue as targets to the rule.
            eventBridgeClient.putTargets(b -> b
                    .eventBusName("default")
                    .rule(RULE_NAME)
                    .targets(List.of (
                            Target.builder()
                                    .arn(queueArn)
                                    .id("Queue")
                                    .build(),
                            Target.builder()
                                    .arn(topicArn)
                                    .id("Topic")
                                    .build())
                            )
                    ).join();
            return putRuleResponse.ruleArn();
        } catch (S3Exception e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
        return null;
    }
```

To work with EventBridge in your Java code, add a dependency on the `eventbridge` artifact to your Maven `pom.xml` file.

```
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>eventbridge</artifactId>
</dependency>
```

The Code Examples GitHub repository contains the [complete example](https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/main/java/com/example/s3/PutBucketS3EventNotificationEventBridge.java) to send S3 event notifications to EventBridge and then to a topic and queue.

## Use the S3 Event Notifications API to process events
<a name="s3-event-notification-read"></a>

After a destination receives S3 notification events, you can process them in an object-oriented way by using the S3 Event Notifications API. You can use the S3 Event Notifications API to work with event notifications that are dispatched directly to a target (as shown in the [first example](#s3-event-conf-bucket-direct)), but not with notifications routed through EventBridge. S3 event notifications sent by buckets to EventBridge contain a [different structure](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ev-events.html#ev-events-list) that the S3 Event Notifications API does not currently handle.

### Add dependency
<a name="s3-event-notifications-dep"></a>

The S3 Event Notifications API was released with version 2.25.11 of the SDK for Java 2.x.

To use the S3 Event Notifications API, add the required dependency element to your Maven `pom.xml` as shown in the following snippet.

```
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>software.amazon.awssdk</groupId>
            <artifactId>bom</artifactId>
            <version>2.X.X</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>s3-event-notifications</artifactId>
    </dependency>
</dependencies>
```

1 [Check for the latest version of the SDK BOM](https://central.sonatype.com/artifact/software.amazon.awssdk/bom).

### Use the `S3EventNotification` class
<a name="s3-event-notifications-use"></a>

#### Create an `S3EventNotification` instance from a JSON string
<a name="s3-event-notifications-use-from-json"></a>

To convert a JSON string into an `S3EventNotification` object, use the static methods of the `S3EventNotification` class as shown in the following example.

```
import software.amazon.awssdk.eventnotifications.s3.model.S3EventNotification;
import software.amazon.awssdk.eventnotifications.s3.model.S3EventNotificationRecord;
import software.amazon.awssdk.services.sqs.model.Message;
import java.util.List;

public class S3EventNotificationExample {
    ...
    
    void receiveMessage(Message message) {
       // Message received from SQSClient.
       String sqsEventBody = message.body();
       S3EventNotification s3EventNotification = S3EventNotification.fromJson(sqsEventBody);
    
       // Use getRecords() to access all the records in the notification.                                                                                                       
       List<S3EventNotificationRecord> records = s3EventNotification.getRecords();   
    
        S3EventNotificationRecord record = records.stream().findFirst();
        // Use getters on the record to access individual attributes.
        String awsRegion = record.getAwsRegion();
        String eventName = record.getEventName();
        String eventSource = record.getEventSource();                                                                                                   
    }
}
```

In this example, the `fromJson` method converts the JSON string into an `S3EventNotification` object. Fields that are missing from the JSON string result in `null` values in the corresponding Java object fields, and any extra fields in the JSON are ignored.

Other methods available on an event notification record are described in the API reference for `[S3EventNotificationRecord](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/eventnotifications/s3/model/S3EventNotificationRecord.html)`.
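
Event names are delivered as plain strings such as `ObjectCreated:Put` or `ObjectTagging:Put`, so record routing often branches on the category before the colon. The following stdlib-only sketch shows one way to extract that category; the `eventCategory` helper is illustrative and not part of the SDK.

```
public class EventNameRouting {

    // Returns the category portion of an S3 event name,
    // for example "ObjectCreated" from "ObjectCreated:Put".
    static String eventCategory(String eventName) {
        int colon = eventName.indexOf(':');
        return colon < 0 ? eventName : eventName.substring(0, colon);
    }

    public static void main(String[] args) {
        System.out.println(eventCategory("ObjectCreated:Put"));    // ObjectCreated
        System.out.println(eventCategory("ObjectRemoved:Delete")); // ObjectRemoved
    }
}
```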

#### Convert an `S3EventNotification` instance to a JSON string
<a name="s3-event-notifications-use-to-json"></a>

Use the `toJson` (or `toJsonPretty`) method to convert an `S3EventNotification` object into a JSON string as shown in the following example.

```
import software.amazon.awssdk.eventnotifications.s3.model.S3EventNotification;

public class S3EventNotificationExample {
    ...

    void toJsonString(S3EventNotification event) {

        String json = event.toJson();
        String jsonPretty = event.toJsonPretty();

        System.out.println("JSON: " + json);
        System.out.println("Pretty JSON: " + jsonPretty);
    }
}
```

Fields for `GlacierEventData`, `ReplicationEventData`, `IntelligentTieringEventData`, and `LifecycleEventData` are excluded from the JSON if they are `null`. Other `null` fields will be serialized as `null`.

The following shows example output of the `toJsonPretty` method for an S3 object tagging event.

```
{
  "Records" : [ {
    "eventVersion" : "2.3",
    "eventSource" : "aws:s3",
    "awsRegion" : "us-east-1",
    "eventTime" : "2024-07-19T20:09:18.551Z",
    "eventName" : "ObjectTagging:Put",
    "userIdentity" : {
      "principalId" : "AWS:XXXXXXXXXXX"
    },
    "requestParameters" : {
      "sourceIPAddress" : "XXX.XX.XX.XX"
    },
    "responseElements" : {
      "x-amz-request-id" : "XXXXXXXXXXXX",
      "x-amz-id-2" : "XXXXXXXXXXXXX"
    },
    "s3" : {
      "s3SchemaVersion" : "1.0",
      "configurationId" : "XXXXXXXXXXXXX",
      "bucket" : {
        "name" : "amzn-s3-demo-bucket",
        "ownerIdentity" : {
          "principalId" : "XXXXXXXXXXX"
        },
        "arn" : "arn:aws:s3:::XXXXXXXXXX"
      },
      "object" : {
        "key" : "akey",
        "size" : null,
        "eTag" : "XXXXXXXXXX",
        "versionId" : null,
        "sequencer" : null
      }
    }
  } ]
}
```
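
Note that the `key` value in payloads like the one above is URL encoded; for example, a key such as `red flower.jpg` is delivered as `red+flower.jpg`. Decode the key before using it in a follow-up request such as `GetObject`. The following stdlib-only sketch shows one way to do this; the `decodeKey` helper is illustrative and not part of the SDK.

```
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class S3KeyDecoding {

    // Decodes a URL-encoded S3 object key from an event notification,
    // for example "red+flower.jpg" becomes "red flower.jpg".
    static String decodeKey(String encodedKey) {
        return URLDecoder.decode(encodedKey, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(decodeKey("red+flower.jpg"));    // red flower.jpg
        System.out.println(decodeKey("docs%2Freport.pdf")); // docs/report.pdf
    }
}
```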

A [complete example](https://github.com/awsdocs/aws-doc-sdk-examples/blob/75c3daadf750406156fc87fa30ee499a206b4a36/javav2/example_code/s3/src/main/java/com/example/s3/ProcessS3EventNotification.java#L117) is available in GitHub that shows how to use the API to work with notifications received by an Amazon SQS queue.

## Process S3 events in Lambda: AWS SDK for Java 2.x or the `aws-lambda-java-events` library
<a name="s3-event-notif-processing-options"></a>

Instead of using the SDK for Java 2.x to process Amazon S3 event notifications in a Lambda function, you can use version 3.x of the `[aws-lambda-java-events](https://github.com/aws/aws-lambda-java-libs/tree/main/aws-lambda-java-events)` library. AWS maintains the `aws-lambda-java-events` library independently of the SDK, and it has its own dependency requirements. The `aws-lambda-java-events` library works only with S3 events delivered to Lambda functions, whereas the SDK for Java 2.x also works with S3 events delivered through Amazon SNS and Amazon SQS.

Both approaches model the JSON event notification payload in an object-oriented way with similar APIs. The following table shows the notable differences between using the two approaches.


**SDK for Java 2.x compared with the `aws-lambda-java-events` library**  

|  | AWS SDK for Java | aws-lambda-java-events library | 
| --- | --- | --- | 
| Package naming |  `software.amazon.awssdk.eventnotifications.s3.model.S3EventNotification`  | `com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification` | 
| RequestHandler parameter |  Write your Lambda function's `RequestHandler` implementation to receive a JSON String: <pre>import com.amazonaws.services.lambda.runtime.Context;<br />import com.amazonaws.services.lambda.runtime.RequestHandler;<br />import software.amazon.awssdk.eventnotifications.s3.model.S3EventNotification;<br /><br />public class Handler implements RequestHandler<String, String> {<br /><br />    @Override<br />        public String handleRequest(String jsonS3Event, Context context) {<br />            S3EventNotification s3Event = S3EventNotification<br />                                             .fromJson(jsonS3Event);<br />            // Work with the s3Event object.        <br />            ...<br />    }<br />}</pre>  | Write your Lambda function's RequestHandler implementation to receive an S3Event object:<pre>import com.amazonaws.services.lambda.runtime.Context;<br />import com.amazonaws.services.lambda.runtime.RequestHandler;<br />import com.amazonaws.services.lambda.runtime.events.S3Event;<br /><br />public class Handler implements RequestHandler<S3Event, String> {<br /><br />    @Override<br />        public String handleRequest(S3Event s3event, Context context) {<br />            // Work with the s3Event object.        <br />            ...<br />    }<br />}</pre> | 
| Maven dependencies |  <pre><dependencyManagement><br />    <dependencies><br />        <dependency><br />            <groupId>software.amazon.awssdk</groupId><br />            <artifactId>bom</artifactId><br />            <version>2.X.X</version><br />            <type>pom</type><br />            <scope>import</scope><br />        </dependency><br />    </dependencies><br /></dependencyManagement><br /><dependencies><br />    <dependency><br />        <groupId>software.amazon.awssdk</groupId><br />        <artifactId>s3-event-notifications</artifactId><br />    </dependency><br />    <!-- Add other SDK dependencies that you need. --><br /></dependencies></pre>  |  <pre><dependencyManagement><br />    <dependencies><br />        <dependency><br />            <groupId>software.amazon.awssdk</groupId><br />            <artifactId>bom</artifactId><br />            <version>2.X.X</version><br />            <type>pom</type><br />            <scope>import</scope><br />        </dependency><br />    </dependencies><br /></dependencyManagement><br /><dependencies><br />    <!-- The following two dependencies are for the <br />         aws-lambda-java-events library. --><br />    <dependency><br />        <groupId>com.amazonaws</groupId><br />        <artifactId>aws-lambda-java-core</artifactId><br />        <version>1.2.3</version>     <br />    </dependency><br />    <dependency><br />        <groupId>com.amazonaws</groupId><br />        <artifactId>aws-lambda-java-events</artifactId><br />        <version>3.15.0</version><br />    </dependency><br />    <!-- Add other SDK dependencies that you need. --><br /></dependencies></pre>  | 