

# Amazon S3 examples using the AWS SDK for PHP Version 3
<a name="s3-examples"></a>

Amazon Simple Storage Service (Amazon S3) is a web service that provides highly scalable cloud storage. Amazon S3 provides easy-to-use object storage, with a simple web service interface to store and retrieve any amount of data from anywhere on the web.

All the example code for the AWS SDK for PHP is available [here on GitHub](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/php/example_code).

## Credentials
<a name="examplecredentials"></a>

Before running the example code, configure your AWS credentials, as described in [Authenticating with AWS using AWS SDK for PHP Version 3](credentials.md). Then import the AWS SDK for PHP, as described in [Installing the AWS SDK for PHP Version 3](getting-started_installation.md).

**Topics**
+ [Credentials](#examplecredentials)
+ [Creating and using Amazon S3 buckets](s3-examples-creating-buckets.md)
+ [Managing Amazon S3 bucket access permissions](s3-examples-access-permissions.md)
+ [Configuring Amazon S3 buckets](s3-examples-configuring-a-bucket.md)
+ [Amazon S3 multipart uploads](s3-multipart-upload.md)
+ [Amazon S3 pre-signed URL](s3-presigned-url.md)
+ [Creating S3 pre-signed POSTs](s3-presigned-post.md)
+ [Using an Amazon S3 bucket as a static web host](s3-examples-static-web-host.md)
+ [Working with Amazon S3 bucket policies](s3-examples-bucket-policies.md)
+ [Using S3 access point ARNs](s3-examples-access-point-arn.md)
+ [Use Multi-Region Access Points](s3-multi-region-access-points.md)

# Creating and using Amazon S3 buckets with the AWS SDK for PHP Version 3
<a name="s3-examples-creating-buckets"></a>

The following examples show how to:
+ Return a list of buckets owned by the authenticated sender of the request using [ListBuckets](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#listbuckets).
+ Create a new bucket using [CreateBucket](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#createbucket).
+ Add an object to a bucket using [PutObject](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#putobject).


 **Imports** 

```
require 'vendor/autoload.php';

use Aws\Exception\AwsException;
use Aws\S3\Exception\S3Exception;
use Aws\S3\S3Client;
```

## List buckets
<a name="list-buckets"></a>

Create a PHP file with the following code. First create an Amazon S3 client that specifies the AWS Region and version. Then call the `listBuckets` method, which returns all Amazon S3 buckets owned by the sender of the request as an array of `Bucket` structures.

 **Sample Code** 

```
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01'
]);

// List all S3 buckets.
$buckets = $s3Client->listBuckets();
foreach ($buckets['Buckets'] as $bucket) {
    echo $bucket['Name'] . "\n";
}
```
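The `listBuckets` result supports array-style access, so the bucket list can be processed like any nested PHP array. The following plain-PHP sketch uses a hard-coded array shaped like a `ListBuckets` response instead of a live call, just to show how the names come out:

```
// Hypothetical data shaped like a ListBuckets response; a live call
// returns an Aws\Result that supports the same array access.
$response = [
    'Buckets' => [
        ['Name' => 'logs-bucket', 'CreationDate' => '2023-01-15T00:00:00Z'],
        ['Name' => 'media-bucket', 'CreationDate' => '2023-06-02T00:00:00Z'],
    ],
    'Owner' => ['DisplayName' => 'example-owner'],
];

// Collect just the names, as the echo loop above does.
$names = array_map(
    function ($bucket) {
        return $bucket['Name'];
    },
    $response['Buckets']
);
```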

## Create a bucket
<a name="create-a-bucket"></a>

Create a PHP file with the following code. First create an Amazon S3 client that specifies the AWS Region and version. Then call the `createBucket` method with an array as the parameter. The only required field is the `Bucket` key, whose string value is the name of the bucket to create. However, you can specify the AWS Region with the `CreateBucketConfiguration` field. If successful, this method returns the `Location` of the bucket.

 **Sample Code** 

```
function createBucket($s3Client, $bucketName)
{
    try {
        $result = $s3Client->createBucket([
            'Bucket' => $bucketName,
        ]);
        return 'The bucket\'s location is: ' .
            $result['Location'] . '. ' .
            'The bucket\'s effective URI is: ' .
            $result['@metadata']['effectiveUri'];
    } catch (AwsException $e) {
        return 'Error: ' . $e->getAwsErrorMessage();
    }
}

function createTheBucket()
{
    $s3Client = new S3Client([
        'profile' => 'default',
        'region' => 'us-east-1',
        'version' => '2006-03-01'
    ]);

    echo createBucket($s3Client, 'amzn-s3-demo-bucket');
}

// Uncomment the following line to run this code in an AWS account.
// createTheBucket();
```
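One wrinkle when using the `CreateBucketConfiguration` field mentioned above: `us-east-1` is the default Region, and Amazon S3 expects it to be selected by *omitting* `LocationConstraint` rather than naming it. The helper below is a hypothetical sketch (not an SDK function) of building the parameter array either way:

```
// Hypothetical helper: build the parameter array for createBucket.
// For us-east-1 (the default Region), S3 expects no LocationConstraint.
function buildCreateBucketParams($bucketName, $region)
{
    $params = ['Bucket' => $bucketName];
    if ($region !== 'us-east-1') {
        $params['CreateBucketConfiguration'] = [
            'LocationConstraint' => $region,
        ];
    }
    return $params;
}
```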

## Put an object in a bucket
<a name="put-an-object-in-a-bucket"></a>

To add files to your new bucket, create a PHP file with the following code.

From the command line, run this file, passing the name of the bucket where you want to upload your file, followed by the full path of the file to upload.

 **Sample Code** 

```
$USAGE = "\n" .
    "To run this example, supply the name of an S3 bucket and a file to\n" .
    "upload to it.\n" .
    "\n" .
    "Ex: php PutObject.php <bucketname> <filename>\n";

if (count($argv) <= 2) {
    echo $USAGE;
    exit();
}

$bucket = $argv[1];
$filePath = $argv[2];
$key = basename($argv[2]);

try {
    // Create an S3Client.
    $s3Client = new S3Client([
        'profile' => 'default',
        'region' => 'us-west-2',
        'version' => '2006-03-01'
    ]);
    $result = $s3Client->putObject([
        'Bucket' => $bucket,
        'Key' => $key,
        'SourceFile' => $filePath,
    ]);
} catch (S3Exception $e) {
    echo $e->getMessage() . "\n";
}
```
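Note that the script above derives the object key from the file path with `basename`, and that `$argv[0]` is the script name, so at least three `$argv` elements are needed. A plain-PHP sketch of that argument handling, using a made-up argument list:

```
// Mirror the script's argument handling with a hard-coded $argv.
// $argv[0] is the script name, so a bucket and file need 3 elements.
$argv = ['PutObject.php', 'my-bucket', '/tmp/photos/cat.jpg'];

$enoughArgs = count($argv) > 2;
$bucket = $argv[1];
// The object is stored under the file's base name, regardless of
// the directory it came from.
$key = basename($argv[2]);
```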

# Managing Amazon S3 bucket access permissions with the AWS SDK for PHP Version 3
<a name="s3-examples-access-permissions"></a>

Access control lists (ACLs) are one of the resource-based access policy options you can use to manage access to your buckets and objects. You can use ACLs to grant basic read/write permissions to other AWS accounts. To learn more, see [Managing Access with ACLs](https://docs.aws.amazon.com/AmazonS3/latest/dev/S3_ACLs_UsingACLs.html).

The following example shows how to:
+ Get the access control policy for a bucket using [GetBucketAcl](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#getbucketacl).
+ Set the permissions on a bucket using ACLs, using [PutBucketAcl](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#putbucketacl).


## Get and set an access control list policy
<a name="get-and-set-an-access-control-list-policy"></a>

 **Imports** 

```
require 'vendor/autoload.php';

use Aws\S3\S3Client;  
use Aws\Exception\AwsException;
```

 **Sample Code** 

```
// Create an S3Client
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01'
]);

// Gets the access control policy for a bucket
$bucket = 'amzn-s3-demo-bucket';
try {
    $resp = $s3Client->getBucketAcl([
        'Bucket' => $bucket
    ]);
    echo "Succeeded in retrieving the bucket ACL as follows:\n";
    var_dump($resp);
} catch (AwsException $e) {
    // Output an error message if the call fails.
    echo $e->getMessage();
    echo "\n";
}

// Sets the permissions on a bucket using access control lists (ACL).
$params = [
    'ACL' => 'public-read',
    'AccessControlPolicy' => [
        // Information can be retrieved from `getBucketAcl` response
        'Grants' => [
            [
                'Grantee' => [
                    'DisplayName' => '<string>',
                    'EmailAddress' => '<string>',
                    'ID' => '<string>',
                    'Type' => 'CanonicalUser',
                    'URI' => '<string>',
                ],
                'Permission' => 'FULL_CONTROL',
            ],
            // ...
        ],
        'Owner' => [
            'DisplayName' => '<string>',
            'ID' => '<string>',
        ],
    ],
    'Bucket' => $bucket,
];

try {
    $resp = $s3Client->putBucketAcl($params);
    echo "Succeeded in setting the bucket ACL.\n";
} catch (AwsException $e) {
    // Display error message
    echo $e->getMessage();
    echo "\n";
}
```
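As the comment in the sample notes, the `AccessControlPolicy` passed to `putBucketAcl` can be built from a `getBucketAcl` response. The following sketch works on a hard-coded array shaped like that response (`summarizeGrants` is a hypothetical helper, not an SDK function), showing how grants identify a grantee by `ID` for canonical users and by `URI` for groups:

```
// Hypothetical helper: summarize grants from an array shaped like a
// getBucketAcl response (a live call returns an Aws\Result with the
// same structure).
function summarizeGrants(array $acl)
{
    $summary = [];
    foreach ($acl['Grants'] as $grant) {
        $grantee = $grant['Grantee'];
        // Canonical users are identified by ID; groups by URI.
        $who = $grantee['Type'] === 'Group'
            ? $grantee['URI']
            : $grantee['ID'];
        $summary[] = $who . ': ' . $grant['Permission'];
    }
    return $summary;
}

$acl = [
    'Owner' => ['ID' => 'owner-canonical-id'],
    'Grants' => [
        [
            'Grantee' => ['Type' => 'CanonicalUser', 'ID' => 'owner-canonical-id'],
            'Permission' => 'FULL_CONTROL',
        ],
        [
            'Grantee' => [
                'Type' => 'Group',
                'URI' => 'http://acs.amazonaws.com/groups/global/AllUsers',
            ],
            'Permission' => 'READ',
        ],
    ],
];
```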

# Configuring Amazon S3 buckets with the AWS SDK for PHP Version 3
<a name="s3-examples-configuring-a-bucket"></a>

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support in Amazon S3, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

For more information about using CORS configuration with an Amazon S3 bucket, see [Cross-Origin Resource Sharing (CORS)](https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html).

The following examples show how to:
+ Get the CORS configuration for a bucket using [GetBucketCors](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#getbucketcors).
+ Set the CORS configuration for a bucket using [PutBucketCors](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#putbucketcors).


## Get the CORS configuration
<a name="get-the-cors-configuration"></a>

Create a PHP file with the following code. First create an Amazon S3 client, then call the `getBucketCors` method and specify the bucket whose CORS configuration you want.

The only parameter required is the name of the selected bucket. If the bucket currently has a CORS configuration, that configuration is returned by Amazon S3 as a [CORSRules object](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#shape-corsrule).

 **Imports** 

```
require 'vendor/autoload.php';

use Aws\Exception\AwsException;
use Aws\S3\S3Client;
```

 **Sample Code** 

```
$client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01'
]);

$bucketName = 'amzn-s3-demo-bucket';

try {
    $result = $client->getBucketCors([
        'Bucket' => $bucketName, // REQUIRED
    ]);
    var_dump($result);
} catch (AwsException $e) {
    // Log an error message if the call fails.
    error_log($e->getMessage());
}
```

## Set the CORS configuration
<a name="set-the-cors-configuration"></a>

Create a PHP file with the following code. First create an Amazon S3 client. Then call the `putBucketCors` method, specifying the bucket whose CORS configuration to set and the `CORSConfiguration` as a [CORSRules JSON object](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#shape-corsrule).

 **Imports** 

```
require 'vendor/autoload.php';

use Aws\Exception\AwsException;
use Aws\S3\S3Client;
```

 **Sample Code** 

```
$client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01'
]);

$bucketName = 'amzn-s3-demo-bucket';

try {
    $result = $client->putBucketCors([
        'Bucket' => $bucketName, // REQUIRED
        'CORSConfiguration' => [ // REQUIRED
            'CORSRules' => [ // REQUIRED
                [
                    'AllowedHeaders' => ['Authorization'],
                    'AllowedMethods' => ['POST', 'GET', 'PUT'], // REQUIRED
                    'AllowedOrigins' => ['*'], // REQUIRED
                    'ExposeHeaders' => [],
                    'MaxAgeSeconds' => 3000
                ],
            ],
        ]
    ]);
    var_dump($result);
} catch (AwsException $e) {
    // Log an error message if the call fails.
    error_log($e->getMessage());
}
```
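To see what a rule like the one above permits, the following plain-PHP sketch applies a simplified version of the matching Amazon S3 performs when it evaluates a CORS request. This is a hedged approximation: real evaluation also considers `AllowedHeaders` and wildcard patterns within origins, while this checks only exact origins (or `*`) and methods:

```
// Simplified sketch of CORS rule matching: a request is allowed if
// some rule lists its origin (or '*') and its HTTP method.
function corsAllows(array $corsRules, $origin, $method)
{
    foreach ($corsRules as $rule) {
        $originOk = in_array('*', $rule['AllowedOrigins'], true)
            || in_array($origin, $rule['AllowedOrigins'], true);
        $methodOk = in_array($method, $rule['AllowedMethods'], true);
        if ($originOk && $methodOk) {
            return true;
        }
    }
    return false;
}

// The same rule shape used in the putBucketCors sample above.
$rules = [
    [
        'AllowedOrigins' => ['*'],
        'AllowedMethods' => ['POST', 'GET', 'PUT'],
    ],
];
```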

# Using Amazon S3 multipart uploads with AWS SDK for PHP Version 3
<a name="s3-multipart-upload"></a>

With a single `PutObject` operation, you can upload objects up to 5 GB in size. However, by using the multipart upload methods (for example, `CreateMultipartUpload`, `UploadPart`, `CompleteMultipartUpload`, `AbortMultipartUpload`), you can upload objects from 5 MB to 5 TB in size.
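The size limits above interact with two other Amazon S3 constraints: each part except the last must be at least 5 MiB, and a single upload can have at most 10,000 parts. A plain-PHP sketch of checking a part size against those limits (`partCount` is a hypothetical helper, not an SDK function):

```
// Hypothetical helper: number of parts for a multipart upload at a
// given part size. S3 requires parts of at least 5 MiB (except the
// last) and caps an upload at 10,000 parts.
function partCount($objectSize, $partSize)
{
    if ($partSize < 5 * 1024 * 1024) {
        throw new InvalidArgumentException('Part size below the 5 MiB minimum.');
    }
    $count = (int) ceil($objectSize / $partSize);
    if ($count > 10000) {
        throw new InvalidArgumentException('Too many parts; increase the part size.');
    }
    return $count;
}
```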

The following example shows how to:
+ Upload an object to Amazon S3, using [ObjectUploader](https://docs.aws.amazon.com/aws-sdk-php/v3/api/class-Aws.S3.ObjectUploader.html).
+ Create a multipart upload for an Amazon S3 object using [MultipartUploader](https://docs.aws.amazon.com/aws-sdk-php/v3/api/class-Aws.S3.MultipartUploader.html).
+ Copy objects from one Amazon S3 location to another using [ObjectCopier](https://docs.aws.amazon.com/aws-sdk-php/v3/api/class-Aws.S3.ObjectCopier.html).

All the example code for the AWS SDK for PHP is available [here on GitHub](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/php/example_code).

## Credentials
<a name="examplecredentials"></a>

Before running the example code, configure your AWS credentials, as described in [Authenticating with AWS using AWS SDK for PHP Version 3](credentials.md). Then import the AWS SDK for PHP, as described in [Installing the AWS SDK for PHP Version 3](getting-started_installation.md).

## Object uploader
<a name="object-uploader"></a>

If you’re not sure whether `PutObject` or `MultipartUploader` is best for the task, use `ObjectUploader`. `ObjectUploader` uploads a large file to Amazon S3 using either `PutObject` or `MultipartUploader`, based on the payload size.

```
require 'vendor/autoload.php';

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\ObjectUploader;
use Aws\S3\S3Client;
```

 **Sample Code** 

```
// Create an S3Client.
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-east-2',
    'version' => '2006-03-01'
]);

$bucket = 'your-bucket';
$key = 'my-file.zip';

// Use a stream instead of a file path.
$source = fopen('/path/to/large/file.zip', 'rb');

$uploader = new ObjectUploader(
    $s3Client,
    $bucket,
    $key,
    $source
);

do {
    try {
        $result = $uploader->upload();
        if ($result["@metadata"]["statusCode"] == '200') {
            print('<p>File successfully uploaded to ' . $result["ObjectURL"] . '.</p>');
        }
        print($result);
    } catch (MultipartUploadException $e) {
        // If the SDK chooses a multipart upload, try again if there is an exception.
        // Unlike PutObject calls, multipart upload calls are not automatically retried.
        rewind($source);
        $uploader = new MultipartUploader($s3Client, $source, [
            'state' => $e->getState(),
        ]);
    }
} while (!isset($result));

fclose($source);
```
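The size-based choice `ObjectUploader` makes can be sketched in plain PHP. The 16,777,216-byte default below matches the `mup_threshold` option documented in the Configuration section; `chooseUploadMethod` is a hypothetical name, not an SDK function:

```
// Hypothetical sketch of ObjectUploader's decision: a payload that
// exceeds the threshold goes through a multipart upload, anything
// smaller through a single PutObject call.
function chooseUploadMethod($payloadSize, $mupThreshold = 16777216)
{
    return $payloadSize > $mupThreshold ? 'MultipartUpload' : 'PutObject';
}
```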

### Configuration
<a name="object-uploader-configuration"></a>

The `ObjectUploader` object constructor accepts the following arguments:

**`$client`**  
The `Aws\ClientInterface` object to use for performing the transfers. This should be an instance of `Aws\S3\S3Client`.

**`$bucket`**  
(`string`, *required*) Name of the bucket to which the object is being uploaded.

**`$key`**  
(`string`, *required*) Key to use for the object being uploaded.

**`$body`**  
(`mixed`, *required*) Object data to upload. Can be a `StreamInterface`, a PHP stream resource, or a string of data to upload.

**`$acl`**  
(`string`) Access control list (ACL) to set on the object being uploaded. Objects are private by default.

**`$options`**  
An associative array of configuration options for the multipart upload. The following configuration options are valid:    
**`add_content_md5`**  
(`bool`) Set to true to automatically calculate the MD5 checksum for the upload.  
**`mup_threshold`**  
(`int`, *default*: `int(16777216)`) The size threshold, in bytes. If the file size exceeds this threshold, a multipart upload is used.  
**`before_complete`**  
(`callable`) Callback to invoke before the `CompleteMultipartUpload` operation. The callback should have a function signature similar to: `function (Aws\Command $command) {...}`. See the [CompleteMultipartUpload API reference](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#completemultipartupload) for the parameters that you can add to the `CommandInterface` object.  
**`before_initiate`**  
(`callable`) Callback to invoke before the `CreateMultipartUpload` operation. The callback should have a function signature similar to: `function (Aws\Command $command) {...}`. The SDK invokes this callback if the file size exceeds the `mup_threshold` value. See the [CreateMultipartUpload API reference](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#createmultipartupload) for the parameters that you can add to the `CommandInterface` object.  
**`before_upload`**  
(`callable`) Callback to invoke before any `PutObject` or `UploadPart` operations. The callback should have a function signature similar to: `function (Aws\Command $command) {...}`. The SDK invokes this callback if the file size is less than or equal to the `mup_threshold` value. See the [PutObject API reference](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#putobject) for the parameters that you can apply to the `PutObject` request. For parameters that apply to an `UploadPart` request, see the [UploadPart API reference](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#uploadpart). The SDK ignores any parameter that is not applicable to the operation represented by the `CommandInterface` object.  
**`concurrency`**  
(`int`, *default*: `int(3)`) Maximum number of concurrent `UploadPart` operations allowed during the multipart upload.  
**`part_size`**  
(`int`, *default*: `int(5242880)`) Part size, in bytes, to use when doing a multipart upload. The value must be between 5 MB and 5 GB, inclusive.  
**`state`**  
(`Aws\Multipart\UploadState`) An object that represents the state of the multipart upload and that is used to resume a previous upload. When this option is provided, the `$bucket` and `$key` arguments and the `part_size` option are ignored.  
**`params`**  
An associative array that provides configuration options for each subcommand. For example:  

```
new ObjectUploader($s3Client, $bucket, $key, $body, $acl, ['params' => ['CacheControl' => <some_value>]]);
```

## MultipartUploader
<a name="multipartuploader"></a>

Multipart uploads are designed to improve the upload experience for larger objects. They enable you to upload objects in parts independently, in any order, and in parallel.

Amazon S3 customers are encouraged to use multipart uploads for objects greater than 100 MB.

## MultipartUploader object
<a name="multipartuploader-object"></a>

The SDK has a special `MultipartUploader` object that simplifies the multipart upload process.

 **Imports** 

```
require 'vendor/autoload.php';

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;
```

 **Sample Code** 

```
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01'
]);

// Use multipart upload
$source = '/path/to/large/file.zip';
$uploader = new MultipartUploader($s3Client, $source, [
    'bucket' => 'your-bucket',
    'key' => 'my-file.zip',
]);

try {
    $result = $uploader->upload();
    echo "Upload complete: {$result['ObjectURL']}\n";
} catch (MultipartUploadException $e) {
    echo $e->getMessage() . "\n";
}
```

The uploader creates a generator of part data, based on the provided source and configuration, and attempts to upload all parts. If some part uploads fail, the uploader continues to upload later parts until the entire source data is read. Afterward, the uploader retries uploading the failed parts or throws an exception containing information about the parts that failed to upload.

## Customizing a multipart upload
<a name="customizing-a-multipart-upload"></a>

You can set custom options on the `CreateMultipartUpload`, `UploadPart`, and `CompleteMultipartUpload` operations executed by the multipart uploader via callbacks passed to its constructor.

 **Imports** 

```
require 'vendor/autoload.php';

use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;
```

 **Sample Code** 

```
// Create an S3Client
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01'
]);

// Customizing a multipart upload
$source = '/path/to/large/file.zip';
$uploader = new MultipartUploader($s3Client, $source, [
    'bucket' => 'your-bucket',
    'key' => 'my-file.zip',
    'before_initiate' => function (Command $command) {
        // $command is a CreateMultipartUpload operation
        $command['CacheControl'] = 'max-age=3600';
    },
    'before_upload' => function (Command $command) {
        // $command is an UploadPart operation
        $command['RequestPayer'] = 'requester';
    },
    'before_complete' => function (Command $command) {
        // $command is a CompleteMultipartUpload operation
        $command['RequestPayer'] = 'requester';
    },
]);
```

### Manual garbage collection between part uploads
<a name="manual-garbage-collection-between-part-uploads"></a>

If you are hitting the memory limit with large uploads, this may be due to cyclic references generated by the SDK not yet having been collected by the [PHP garbage collector](https://www.php.net/manual/en/features.gc.php) when your memory limit was hit. Manually invoking the collection algorithm between operations may allow the cycles to be collected before hitting that limit. The following example invokes the collection algorithm using a callback before each part upload. Note that invoking the garbage collector does come with a performance cost, and optimal usage will depend on your use case and environment.

```
$uploader = new MultipartUploader($client, $source, [
   'bucket' => 'your-bucket',
   'key' => 'your-key',
   'before_upload' => function(\Aws\Command $command) {
      gc_collect_cycles();
   }
]);
```

## Recovering from errors
<a name="recovering-from-errors"></a>

When an error occurs during the multipart upload process, a `MultipartUploadException` is thrown. This exception provides access to the `UploadState` object, which contains information about the multipart upload’s progress. The `UploadState` can be used to resume an upload that failed to complete.

 **Imports** 

```
require 'vendor/autoload.php';

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;
```

 **Sample Code** 

```
// Create an S3Client
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01'
]);

$source = '/path/to/large/file.zip';
$uploader = new MultipartUploader($s3Client, $source, [
    'bucket' => 'your-bucket',
    'key' => 'my-file.zip',
]);

//Recover from errors
do {
    try {
        $result = $uploader->upload();
    } catch (MultipartUploadException $e) {
        $uploader = new MultipartUploader($s3Client, $source, [
            'state' => $e->getState(),
        ]);
    }
} while (!isset($result));
```

Resuming an upload from an `UploadState` attempts to upload parts that are not already uploaded. The state object tracks the missing parts, even if they are not consecutive. The uploader reads or seeks through the provided source file to the byte ranges that belong to the parts that still need to be uploaded.
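The seeking described above can be illustrated in plain PHP: given a part size and a part number that still needs uploading, the byte range to read is fixed arithmetic. This is a simplified sketch of the bookkeeping the SDK keeps inside `UploadState` (`partByteRange` is a hypothetical helper):

```
// Hypothetical helper: inclusive byte range covered by a 1-based part
// number, clamped to the object size for the final part.
function partByteRange($partNumber, $partSize, $objectSize)
{
    $start = ($partNumber - 1) * $partSize;
    $end = min($start + $partSize, $objectSize) - 1;
    return [$start, $end];
}

// Ranges for missing parts 2 and 5 of a 23 MiB object with 5 MiB parts.
$partSize = 5 * 1024 * 1024;
$objectSize = 23 * 1024 * 1024;
$ranges = [];
foreach ([2, 5] as $missing) {
    $ranges[$missing] = partByteRange($missing, $partSize, $objectSize);
}
```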

 `UploadState` objects are serializable, so you can also resume an upload in a different process. You can also get the `UploadState` object, even when you’re not handling an exception, by calling `$uploader->getState()`.

**Important**  
Streams passed in as a source to a `MultipartUploader` are not automatically rewound before uploading. If you’re using a stream instead of a file path in a loop similar to the previous example, reset the `$source` variable inside of the `catch` block.

 **Imports** 

```
require 'vendor/autoload.php';

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;
```

 **Sample Code** 

```
// Create an S3Client
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01'
]);

//Using stream instead of file path
$source = fopen('/path/to/large/file.zip', 'rb');
$uploader = new MultipartUploader($s3Client, $source, [
    'bucket' => 'your-bucket',
    'key' => 'my-file.zip',
]);

do {
    try {
        $result = $uploader->upload();
    } catch (MultipartUploadException $e) {
        rewind($source);
        $uploader = new MultipartUploader($s3Client, $source, [
            'state' => $e->getState(),
        ]);
    }
} while (!isset($result));
fclose($source);
```

### Aborting a multipart upload
<a name="aborting-a-multipart-upload"></a>

A multipart upload can be aborted by retrieving the `UploadId` contained in the `UploadState` object and passing it to `abortMultipartUpload`.

```
try {
    $result = $uploader->upload();
} catch (MultipartUploadException $e) {
    // State contains the "Bucket", "Key", and "UploadId"
    $params = $e->getState()->getId();
    $result = $s3Client->abortMultipartUpload($params);
}
```

## Asynchronous multipart uploads
<a name="asynchronous-multipart-uploads"></a>

Calling `upload()` on the `MultipartUploader` is a blocking request. If you are working in an asynchronous context, you can get a [promise](guide_promises.md) for the multipart upload.

```
require 'vendor/autoload.php';

use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;
```

 **Sample Code** 

```
// Create an S3Client
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01'
]);

$source = '/path/to/large/file.zip';
$uploader = new MultipartUploader($s3Client, $source, [
    'bucket' => 'your-bucket',
    'key' => 'my-file.zip',
]);

$promise = $uploader->promise();
```

### Configuration
<a name="asynchronous-multipart-uploads-configuration"></a>

The `MultipartUploader` object constructor accepts the following arguments:

**`$client`**  
The `Aws\ClientInterface` object to use for performing the transfers. This should be an instance of `Aws\S3\S3Client`.

**`$source`**  
The source data being uploaded. This can be a path or URL (for example, `/path/to/file.jpg`), a resource handle (for example, `fopen('/path/to/file.jpg', 'r')`), or an instance of a [PSR-7 stream](https://docs.aws.amazon.com/aws-sdk-php/v3/api/class-Psr.Http.Message.StreamInterface.html).

**`$config`**  
An associative array of configuration options for the multipart upload.  
The following configuration options are valid:    
**`acl`**  
(`string`) Access control list (ACL) to set on the object being uploaded. Objects are private by default.  
**`before_complete`**  
(`callable`) Callback to invoke before the `CompleteMultipartUpload` operation. The callback should have a function signature like `function (Aws\Command $command) {...}`.  
**`before_initiate`**  
(`callable`) Callback to invoke before the `CreateMultipartUpload` operation. The callback should have a function signature like `function (Aws\Command $command) {...}`.  
**`before_upload`**  
(`callable`) Callback to invoke before any `UploadPart` operations. The callback should have a function signature like `function (Aws\Command $command) {...}`.  
**`bucket`**  
(`string`, *required*) Name of the bucket to which the object is being uploaded.  
**`concurrency`**  
(`int`, *default*: `int(5)`) Maximum number of concurrent `UploadPart` operations allowed during the multipart upload.  
**`key`**  
(`string`, *required*) Key to use for the object being uploaded.  
**`part_size`**  
(`int`, *default*: `int(5242880)`) Part size, in bytes, to use when doing a multipart upload. The value must be between 5 MB and 5 GB, inclusive.  
**`state`**  
(`Aws\Multipart\UploadState`) An object that represents the state of the multipart upload and that is used to resume a previous upload. When this option is provided, the `bucket`, `key`, and `part_size` options are ignored.  
**`add_content_md5`**  
(`boolean`) Set to `true` to automatically calculate the MD5 checksum for the upload.  
**`params`**  
An associative array that provides configuration options for each subcommand. For example:  

```
new MultipartUploader($client, $source, ['params' => ['CacheControl' => <some_value>]])
```

## Multipart copies
<a name="multipart-copies"></a>

The AWS SDK for PHP also includes a `MultipartCopy` object that is used in a similar way to the `MultipartUploader`, but is designed for copying objects between 5 GB and 5 TB in size within Amazon S3.

```
require 'vendor/autoload.php';

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartCopy;
use Aws\S3\S3Client;
```

 **Sample Code** 

```
// Create an S3Client
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01'
]);

//Copy objects within S3
$copier = new MultipartCopy($s3Client, '/bucket/key?versionId=foo', [
    'bucket' => 'your-bucket',
    'key' => 'my-file.zip',
]);

try {
    $result = $copier->copy();
    echo "Copy complete: {$result['ObjectURL']}\n";
} catch (MultipartUploadException $e) {
    echo $e->getMessage() . "\n";
}
```

# Amazon S3 pre-signed URL with AWS SDK for PHP Version 3
<a name="s3-presigned-url"></a>

You can authenticate certain types of requests by passing the required information as query-string parameters instead of using the Authorization HTTP header. This is useful for enabling direct third-party browser access to your private Amazon S3 data, without proxying the request. The idea is to construct a “pre-signed” request and encode it as a URL that another user can use. Additionally, you can limit a pre-signed request by specifying an expiration time.
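The query-string parameters mentioned above follow the Signature Version 4 naming scheme (`X-Amz-Algorithm`, `X-Amz-Credential`, `X-Amz-Expires`, `X-Amz-Signature`, and so on). A plain-PHP sketch of reading the expiration window back out of a pre-signed URL; the URL below is a made-up example without a valid signature:

```
// Pull the SigV4 expiration window out of a pre-signed URL.
// X-Amz-Expires holds the validity window in seconds.
function presignedUrlExpiresIn($url)
{
    parse_str(parse_url($url, PHP_URL_QUERY), $query);
    return isset($query['X-Amz-Expires']) ? (int) $query['X-Amz-Expires'] : null;
}

// Made-up example URL; a real one also carries a valid signature.
$url = 'https://amzn-s3-demo-bucket.s3.us-west-2.amazonaws.com/demo-key'
    . '?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=3600&X-Amz-Signature=abc123';
```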

## Create a pre-signed URL for an HTTP GET request
<a name="s3-presigned-url-get"></a>

The following code example shows how to create a pre-signed URL for an HTTP GET request by using the SDK for PHP.

```
<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3Client = new S3Client([
    'region' => 'us-west-2',
]);

// Supply a CommandInterface object and an expires parameter to the `createPresignedRequest` method.
$request = $s3Client->createPresignedRequest(
    $s3Client->getCommand('GetObject', [
        'Bucket' => 'amzn-s3-demo-bucket',
        'Key' => 'demo-key',
    ]),
    '+1 hour'
);

// From the resulting RequestInterface object, you can get the URL.
$presignedUrl = (string) $request->getUri();

echo $presignedUrl;
```

The [API reference for the `createPresignedRequest` method](https://docs.aws.amazon.com/aws-sdk-php/v3/api/class-Aws.S3.S3Client.html#method_createPresignedRequest) provides more details.

Someone else can use the `$presignedUrl` value to retrieve the object within the next hour. When the HTTP GET request is made (using a browser, for example), it appears to the Amazon S3 service that the call is coming from the user who created the pre-signed URL.

## Create a pre-signed URL for an HTTP PUT request
<a name="s3-presigned-url-put"></a>

The following code example shows how to create a pre-signed URL for an HTTP PUT request by using the SDK for PHP.

```
<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3Client = new S3Client([
    'region' => 'us-west-2',
]);

$request = $s3Client->createPresignedRequest(
    $s3Client->getCommand('PutObject', [
        'Bucket' => 'amzn-s3-demo-bucket',
        'Key' => 'demo-key',
    ]),
    '+1 hour'
);

// From the resulting RequestInterface object, you can get the URL.
$presignedUrl = (string) $request->getUri();
```

Someone else can now use the pre-signed URL in an HTTP PUT request to upload a file:

```
use GuzzleHttp\Psr7\Request;
use GuzzleHttp\Psr7\Response;
use GuzzleHttp\Psr7\Stream;

// ...

function uploadWithPresignedUrl($presignedUrl, $filePath, $s3Client): ?Response
{
    // Get the HTTP handler from the S3 client.
    $handler = $s3Client->getHandlerList()->resolve();
    
    // Create a stream from the file.
    $fileStream = new Stream(fopen($filePath, 'r'));
    
    // Create the request.
    $request = new Request(
        'PUT',
        $presignedUrl,
        [
            'Content-Type' => mime_content_type($filePath),
            'Content-Length' => filesize($filePath)
        ],
        $fileStream
    );
    
    // Send the request using the handler.
    try {
        $promise = $handler($request, []);
        $response = $promise->wait();
        return $response;
    } catch (Exception $e) {
        echo "Error uploading file: " . $e->getMessage() . "\n";
        return null;
    }
}
```
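As an alternative sketch that avoids reusing the SDK's handler, PHP's built-in HTTP stream wrapper can issue the PUT request. The helper below is hypothetical; it assumes the pre-signed URL is still valid and that the file fits in memory:

```php
<?php

// Build a stream context that issues an HTTP PUT with the file contents as the body.
function buildPutContext(string $filePath)
{
    return stream_context_create([
        'http' => [
            'method' => 'PUT',
            'header' => 'Content-Type: application/octet-stream',
            'content' => file_get_contents($filePath),
            'ignore_errors' => true, // return the response body even on 4xx/5xx
        ],
    ]);
}

// Usage (requires a live pre-signed URL):
// $body = file_get_contents($presignedUrl, false, buildPutContext('/path/to/file'));
// After the call, $http_response_header holds the status line and response headers.
```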

# Amazon S3 pre-signed POSTs with AWS SDK for PHP Version 3
<a name="s3-presigned-post"></a>

Much like pre-signed URLs, pre-signed POSTs enable you to give write access to a user without giving them AWS credentials. Pre-signed POST forms can be created with the help of an instance of [`Aws\S3\PostObjectV4`](https://docs.aws.amazon.com/aws-sdk-php/v3/api/class-Aws.S3.PostObjectV4.html).

The following examples show how to:
+ Get data for an S3 Object POST upload form using [PostObjectV4](https://docs.aws.amazon.com/aws-sdk-php/v3/api/class-Aws.S3.PostObjectV4.html).

All the example code for the AWS SDK for PHP is available [here on GitHub](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/php/example_code).

## Credentials
<a name="examplecredentials"></a>

**Note**  
`PostObjectV4` does not work with credentials sourced from AWS IAM Identity Center.

Before running the example code, configure your AWS credentials, as described in [Authenticating with AWS using AWS SDK for PHP Version 3](credentials.md). Then import the AWS SDK for PHP, as described in [Installing the AWS SDK for PHP Version 3](getting-started_installation.md).

## Create PostObjectV4
<a name="create-postobjectv4"></a>

To create an instance of `PostObjectV4`, you must provide the following:
+ an instance of `Aws\S3\S3Client`
+ the bucket name
+ an associative array of form input fields
+ an array of policy conditions (see [Policy Construction](https://docs.aws.amazon.com/AmazonS3/latest/dev/HTTPPOSTForms.html) in the Amazon Simple Storage Service User Guide)
+ an expiration time string for the policy (optional; one hour by default)

 **Imports** 

```
require '../vendor/autoload.php';

use Aws\S3\PostObjectV4;
use Aws\S3\S3Client;
```

 **Sample Code** 

```
require '../vendor/autoload.php';

use Aws\S3\PostObjectV4;
use Aws\S3\S3Client;

$client = new S3Client([
    'profile' => 'default',
    'region' => 'us-east-1',
]);
$bucket = 'amzn-s3-demo-bucket10';
$starts_with = 'user/eric/';

// Set defaults for form input fields.
$formInputs = ['acl' => 'public-read'];

// Construct an array of conditions for policy.
$options = [
    ['acl' => 'public-read'],
    ['bucket' => $bucket],
    ['starts-with', '$key', $starts_with],
];

// Set an expiration time (optional).
$expires = '+2 hours';

$postObject = new PostObjectV4(
    $client,
    $bucket,
    $formInputs,
    $options,
    $expires
);

// Get attributes for the HTML form, for example, action, method, enctype.
$formAttributes = $postObject->getFormAttributes();

// Get attributes for the HTML form values.
$formInputs = $postObject->getFormInputs();
?>
<!DOCTYPE html>
<html lang="en">
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
    <title>PHP</title>
</head>
<body>
<form action="<?php echo $formAttributes['action'] ?>" method="<?php echo $formAttributes['method'] ?>"
      enctype="<?php echo $formAttributes['enctype'] ?>">
    <label id="key">
        <input hidden type="text" name="key" value="<?php echo $starts_with ?><?php echo $formInputs['key'] ?>"/>
    </label>
    <h3>$formInputs:</h3>
    acl: <label id="acl">
        <input readonly type="text" name="acl" value="<?php echo $formInputs['acl'] ?>"/>
    </label><br/>
    X-Amz-Credential: <label id="credential">
        <input readonly type="text" name="X-Amz-Credential" value="<?php echo $formInputs['X-Amz-Credential'] ?>"/>
    </label><br/>
    X-Amz-Algorithm: <label id="algorithm">
        <input readonly type="text" name="X-Amz-Algorithm" value="<?php echo $formInputs['X-Amz-Algorithm'] ?>"/>
    </label><br/>
    X-Amz-Date: <label id="date">
        <input readonly type="text" name="X-Amz-Date" value="<?php echo $formInputs['X-Amz-Date'] ?>"/>
    </label><br/><br/><br/>
    Policy: <label id="policy">
        <input readonly type="text" name="Policy" value="<?php echo $formInputs['Policy'] ?>"/>
    </label><br/>
    X-Amz-Signature: <label id="signature">
        <input readonly type="text" name="X-Amz-Signature" value="<?php echo $formInputs['X-Amz-Signature'] ?>"/>
    </label><br/><br/>
    <h3>Choose file:</h3>
    <input type="file" name="file"/> <br/><br/>
    <h3>Upload file:</h3>
    <input type="submit" name="submit" value="Upload to Amazon S3"/>
</form>
</body>
</html>
```

# Using an Amazon S3 bucket as a static web host with AWS SDK for PHP Version 3
<a name="s3-examples-static-web-host"></a>

You can host a static website on Amazon S3. To learn more, see [Hosting a Static Website on Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html).

The following example shows how to:
+ Get the website configuration for a bucket using [GetBucketWebsite](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#getbucketwebsite).
+ Set the website configuration for a bucket using [PutBucketWebsite](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#putbucketwebsite).
+ Remove the website configuration from a bucket using [DeleteBucketWebsite](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#deletebucketwebsite).

All the example code for the AWS SDK for PHP Version 3 is available [here on GitHub](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/php/example_code).

## Credentials
<a name="credentials-s3-examples-static-web-host"></a>

Before running the example code, configure your AWS credentials. See [Credentials for the AWS SDK for PHP Version 3](guide_credentials.md).

## Get, set, and delete the website configuration for a bucket
<a name="get-set-and-delete-the-website-configuration-for-a-bucket"></a>

 **Imports** 

```
require 'vendor/autoload.php';

use Aws\Exception\AwsException;
use Aws\S3\S3Client;
```

 **Sample Code** 

```
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01'
]);

// Retrieving the Bucket Website Configuration
$bucket = 'amzn-s3-demo-bucket';
try {
    $resp = $s3Client->getBucketWebsite([
        'Bucket' => $bucket
    ]);
    echo "Succeeded in retrieving the website configuration for bucket: " . $bucket . "\n";
} catch (AwsException $e) {
    // Output an error message if the call fails.
    echo $e->getMessage();
    echo "\n";
}

// Setting a Bucket Website Configuration
$params = [
    'Bucket' => $bucket,
    'WebsiteConfiguration' => [
        'ErrorDocument' => [
            'Key' => 'foo',
        ],
        'IndexDocument' => [
            'Suffix' => 'bar',
        ],
    ]
];

try {
    $resp = $s3Client->putBucketWebsite($params);
    echo "Succeeded in setting the bucket website configuration.\n";
} catch (AwsException $e) {
    // Display error message
    echo $e->getMessage();
    echo "\n";
}

// Deleting a Bucket Website Configuration
try {
    $resp = $s3Client->deleteBucketWebsite([
        'Bucket' => $bucket
    ]);
    echo "Succeeded in deleting the website configuration for bucket: " . $bucket . "\n";
} catch (AwsException $e) {
    // Output an error message if the call fails.
    echo $e->getMessage();
    echo "\n";
}
```

# Working with Amazon S3 bucket policies with the AWS SDK for PHP Version 3
<a name="s3-examples-bucket-policies"></a>

You can use a bucket policy to grant permission to your Amazon S3 resources. To learn more, see [Using Bucket Policies and User Policies](https://docs.aws.amazon.com/AmazonS3/latest/dev/using-iam-policies.html).

The following example shows how to:
+ Return the policy for a specified bucket using [GetBucketPolicy](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#getbucketpolicy).
+ Replace a policy on a bucket using [PutBucketPolicy](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#putbucketpolicy).
+ Delete a policy from a bucket using [DeleteBucketPolicy](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#deletebucketpolicy).

All the example code for the AWS SDK for PHP is available [here on GitHub](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/php/example_code).

## Credentials
<a name="examplecredentials"></a>

Before running the example code, configure your AWS credentials, as described in [Authenticating with AWS using AWS SDK for PHP Version 3](credentials.md). Then import the AWS SDK for PHP, as described in [Installing the AWS SDK for PHP Version 3](getting-started_installation.md).

## Get, delete, and replace a policy on a bucket
<a name="get-delete-and-replace-a-policy-on-a-bucket"></a>

 **Imports** 

```
require "vendor/autoload.php";

use Aws\Exception\AwsException;
use Aws\S3\S3Client;
```

 **Sample Code** 

```
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01'
]);

$bucket = 'amzn-s3-demo-bucket';

// Get the policy of a specific bucket
try {
    $resp = $s3Client->getBucketPolicy([
        'Bucket' => $bucket
    ]);
    echo "Succeeded in retrieving the bucket policy:\n";
    echo $resp->get('Policy');
    echo "\n";
} catch (AwsException $e) {
    // Display error message
    echo $e->getMessage();
    echo "\n";
}

// Deletes the policy from the bucket
try {
    $resp = $s3Client->deleteBucketPolicy([
        'Bucket' => $bucket
    ]);
    echo "Succeeded in deleting the policy of bucket: " . $bucket . "\n";
} catch (AwsException $e) {
    // Display error message
    echo $e->getMessage();
    echo "\n";
}

// Replaces a policy on the bucket
try {
    $resp = $s3Client->putBucketPolicy([
        'Bucket' => $bucket,
        'Policy' => 'foo policy', // Replace with a valid JSON policy document.
    ]);
    echo "Succeeded in putting a policy on bucket: " . $bucket . "\n";
} catch (AwsException $e) {
    // Display error message
    echo $e->getMessage();
    echo "\n";
}
```
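The `'foo policy'` string in the sample above is only a placeholder; a real policy must be a JSON document. One way to build one is to assemble a PHP array and encode it. The helper and the statement below are illustrative, not a recommended policy:

```php
<?php

// Build a bucket policy document that allows public read of all objects.
// The bucket name passed in is a placeholder for your own bucket.
function publicReadPolicy(string $bucket): string
{
    $policy = [
        'Version' => '2012-10-17',
        'Statement' => [[
            'Sid' => 'PublicRead',
            'Effect' => 'Allow',
            'Principal' => '*',
            'Action' => ['s3:GetObject'],
            'Resource' => ["arn:aws:s3:::{$bucket}/*"],
        ]],
    ];
    return json_encode($policy, JSON_UNESCAPED_SLASHES);
}

// Pass the result as the 'Policy' parameter of putBucketPolicy().
```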

# Using S3 access point ARNs with the AWS SDK for PHP Version 3
<a name="s3-examples-access-point-arn"></a>

Amazon S3 access points are a way to interact with S3 buckets. Access points can have unique policies and configurations applied to them instead of directly to the bucket. The AWS SDK for PHP lets you use an access point ARN in the bucket field of API operations instead of specifying the bucket name explicitly. More details on how S3 access points and ARNs work can be found [here](https://docs.aws.amazon.com/AmazonS3/latest/dev/using-access-points.html). The following examples show how to:
+ Use [GetObject](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#getobject) with an access point ARN to fetch an object from a bucket.
+ Use [PutObject](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#putobject) with an access point ARN to add an object to a bucket.
+ Configure the S3 client to use the ARN region instead of the client region.

All the example code for the AWS SDK for PHP is available [here on GitHub](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/php/example_code).

## Credentials
<a name="examplecredentials"></a>

Before running the example code, configure your AWS credentials, as described in [Authenticating with AWS using AWS SDK for PHP Version 3](credentials.md). Then import the AWS SDK for PHP, as described in [Installing the AWS SDK for PHP Version 3](getting-started_installation.md).

 **Imports** 

```
require 'vendor/autoload.php';

use Aws\S3\S3Client;
```

## Get object
<a name="get-object"></a>

First create an `Aws\S3\S3Client` client that specifies the AWS Region and version. Then call the `getObject` method with your key and an S3 access point ARN in the `Bucket` field, which fetches the object from the bucket associated with that access point.

 **Sample Code** 

```
$s3 = new S3Client([
    'version'     => 'latest',
    'region'      => 'us-west-2',
]);
$result = $s3->getObject([
    'Bucket' => 'arn:aws:s3:us-west-2:123456789012:accesspoint:endpoint-name',
    'Key' => 'MyKey'
]);
```

## Put an object in a bucket
<a name="put-an-object-in-a-bucket"></a>

First create an `Aws\S3\S3Client` client that specifies the AWS Region and version. Then call the `putObject` method with the desired key, the body or source file, and an S3 access point ARN in the `Bucket` field, which puts the object in the bucket associated with that access point.

 **Sample Code** 

```
$s3 = new S3Client([
    'version'     => 'latest',
    'region'      => 'us-west-2',
]);
$result = $s3->putObject([
    'Bucket' => 'arn:aws:s3:us-west-2:123456789012:accesspoint:endpoint-name',
    'Key' => 'MyKey',
    'Body' => 'MyBody'
]);
```
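Because the `Bucket` field now accepts either a bucket name or an ARN, a quick shape check can catch a malformed ARN before a request is made. The helper below is hypothetical and checks only the general format of a regional access point ARN:

```php
<?php

// Return true if the string looks like a regional S3 access point ARN.
// This checks the general shape only; it does not verify that the
// access point actually exists.
function looksLikeAccessPointArn(string $value): bool
{
    return (bool) preg_match(
        '~^arn:aws[^:]*:s3:[a-z0-9-]+:\d{12}:accesspoint[:/][a-zA-Z0-9-]+$~',
        $value
    );
}
```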

## Configure the S3 client to use the ARN region instead of the client region
<a name="configure-the-s3-client-to-use-the-arn-region-instead-of-the-client-region"></a>

When using an S3 access point ARN in an S3 client operation, by default the client will make sure that the ARN region matches the client region, throwing an exception if it does not. This behavior can be changed to accept the ARN region over the client region by setting the `use_arn_region` configuration option to `true`. By default, the option is set to `false`.

 **Sample Code** 

```
$s3 = new S3Client([
    'version'        => 'latest',
    'region'         => 'us-west-2',
    'use_arn_region' => true
]);
```

The client resolves this setting from the following sources, in order of priority:

1. The client option `use_arn_region`, as in the above example.

1. The environment variable `AWS_S3_USE_ARN_REGION` 

```
export AWS_S3_USE_ARN_REGION=true
```

1. The config variable `s3_use_arn_region` in the AWS shared configuration file (by default in `~/.aws/config`).

```
[default]
s3_use_arn_region = true
```
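The precedence above can be sketched as a small resolver. The function below is a hypothetical illustration of the ordering, not how the SDK resolves the setting internally:

```php
<?php

// Resolve use_arn_region: client option first, then the environment
// variable, then the shared config value; false if none is set.
function resolveUseArnRegion(?bool $clientOption, ?string $sharedConfigValue): bool
{
    if ($clientOption !== null) {
        return $clientOption; // 1. The client option wins.
    }
    $env = getenv('AWS_S3_USE_ARN_REGION');
    if ($env !== false && $env !== '') {
        return filter_var($env, FILTER_VALIDATE_BOOLEAN); // 2. The environment variable.
    }
    if ($sharedConfigValue !== null) {
        return filter_var($sharedConfigValue, FILTER_VALIDATE_BOOLEAN); // 3. The config file.
    }
    return false; // Default.
}
```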

# Use Amazon S3 Multi-Region Access Points with the AWS SDK for PHP Version 3
<a name="s3-multi-region-access-points"></a>

[Amazon Simple Storage Service (S3) Multi-Region Access Points](https://docs.aws.amazon.com//AmazonS3/latest/userguide/MultiRegionAccessPoints.html) provide a global endpoint for routing Amazon S3 request traffic between AWS Regions.

You can create Multi-Region Access Points [using the SDK for PHP](https://docs.aws.amazon.com//aws-sdk-php/v3/api/api-s3control-2018-08-20.html#createmultiregionaccesspoint), another AWS SDK, or the [S3 console or AWS CLI](https://docs.aws.amazon.com//AmazonS3/latest/userguide/multi-region-access-point-create-examples.html).

**Important**  
To use Multi-Region Access Points with the SDK for PHP, your PHP environment must have the [AWS Common Runtime (AWS CRT) extension](guide_crt.md) installed.

When you create a Multi-Region Access Point, Amazon S3 generates an Amazon Resource Name (ARN) that has the following format: 

`arn:aws:s3::account-id:accesspoint/MultiRegionAccessPoint_alias`

You can use the generated ARN in place of a bucket name in the [`getObject()`](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#getobject) and [`putObject()`](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html#putobject) methods.

```
<?php
require './vendor/autoload.php';

use Aws\S3\S3Client;

// Assign the Multi-Region Access Point ARN to a variable and use it in place of a bucket name.
$mrap = 'arn:aws:s3::123456789012:accesspoint/mfzwi23gnjvgw.mrap';
$key = 'my-key';

$s3Client = new S3Client([
    'region' => 'us-east-1'
]);

$s3Client->putObject([
    'Bucket' => $mrap,
    'Key' => $key,
    'Body' => 'Hello World!'
]);

$result = $s3Client->getObject([
    'Bucket' => $mrap,
    'Key' => $key
]);

echo $result['Body'] . "\n";

// Clean up.
$result = $s3Client->deleteObject([
    'Bucket' => $mrap,
    'Key' => $key
]);

$s3Client->waitUntil('ObjectNotExists', ['Bucket' => $mrap, 'Key' => $key]);

echo "Object deleted\n";
```