Using Amazon S3 multipart uploads with AWS SDK for PHP Version 3
With a single PutObject operation, you can upload objects up to 5 GB in size. However, by using the multipart upload methods (for example, CreateMultipartUpload, UploadPart, CompleteMultipartUpload, AbortMultipartUpload), you can upload objects from 5 MB to 5 TB in size.
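In byte terms, the limits quoted above work out as follows. This is a sketch for orientation only; the constant names are illustrative, not SDK definitions.

```php
<?php
// Sketch: the S3 size limits quoted above, expressed in bytes.
// Constant names are illustrative, not SDK definitions.
const PUT_OBJECT_MAX_BYTES = 5 * 1024 ** 3;        // 5 GB: single PutObject ceiling
const MULTIPART_MIN_BYTES = 5 * 1024 ** 2;         // 5 MB: lower bound quoted for multipart uploads
const MULTIPART_MAX_OBJECT_BYTES = 5 * 1024 ** 4;  // 5 TB: multipart object ceiling

// A 7 GB object exceeds the PutObject limit but fits within multipart limits:
$size = 7 * 1024 ** 3;
var_dump($size > PUT_OBJECT_MAX_BYTES && $size <= MULTIPART_MAX_OBJECT_BYTES); // bool(true)
```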
The following example shows how to:
- Upload an object to Amazon S3 using ObjectUploader.
- Create a multipart upload for an Amazon S3 object using MultipartUploader.
- Copy objects from one Amazon S3 location to another using MultipartCopy.
All the example code for the AWS SDK for PHP is available on GitHub.
Credentials
Before running the example code, configure your AWS credentials, as described in Credentials. Then import the AWS SDK for PHP, as described in Basic usage.
Object uploader
If you’re not sure whether PutObject or MultipartUploader is best for the task, use ObjectUploader. ObjectUploader uploads a large file to Amazon S3 using either PutObject or MultipartUploader, based on the payload size.
Imports

require 'vendor/autoload.php';

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\ObjectUploader;
use Aws\S3\S3Client;
Sample Code
// Create an S3Client.
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-east-2',
    'version' => '2006-03-01'
]);

$bucket = 'your-bucket';
$key = 'my-file.zip';

// Use a stream instead of a file path.
$source = fopen('/path/to/large/file.zip', 'rb');

$uploader = new ObjectUploader(
    $s3Client,
    $bucket,
    $key,
    $source
);

do {
    try {
        $result = $uploader->upload();
        if ($result["@metadata"]["statusCode"] == '200') {
            print('<p>File successfully uploaded to ' . $result["ObjectURL"] . '.</p>');
        }
        print($result);
    // If the SDK chooses a multipart upload, try again if there is an exception.
    // Unlike PutObject calls, multipart upload calls are not automatically retried.
    } catch (MultipartUploadException $e) {
        rewind($source);
        $uploader = new MultipartUploader($s3Client, $source, [
            'state' => $e->getState(),
        ]);
    }
} while (!isset($result));

fclose($source);
Configuration
The ObjectUploader object constructor accepts the following arguments:

$client
    The Aws\ClientInterface object to use for performing the transfers. This should be an instance of Aws\S3\S3Client.

$bucket
    (string, required) Name of the bucket to which the object is being uploaded.

$key
    (string, required) Key to use for the object being uploaded.

$body
    (mixed, required) Object data to upload. Can be a StreamInterface, a PHP stream resource, or a string of data to upload.

$acl
    (string) Access control list (ACL) to set on the object being uploaded. Objects are private by default.

$options
    An associative array of configuration options for the multipart upload. The following configuration options are valid:

    add_content_md5
        (bool) Set to true to automatically calculate the MD5 checksum for the upload.

    mup_threshold
        (int, default: int(16777216)) The number of bytes for the file size. If the file size exceeds this limit, a multipart upload is used.

    before_complete
        (callable) Callback to invoke before the CompleteMultipartUpload operation. The callback should have a function signature similar to: function (Aws\Command $command) {...}. See the CompleteMultipartUpload API reference for the parameters that you can add to the CommandInterface object.

    before_initiate
        (callable) Callback to invoke before the CreateMultipartUpload operation. The callback should have a function signature similar to: function (Aws\Command $command) {...}. The SDK invokes this callback if the file size exceeds the mup_threshold value. See the CreateMultipartUpload API reference for the parameters that you can add to the CommandInterface object.

    before_upload
        (callable) Callback to invoke before any PutObject or UploadPart operations. The callback should have a function signature similar to: function (Aws\Command $command) {...}. The SDK invokes this callback if the file size is less than or equal to the mup_threshold value. See the PutObject API reference for the parameters that you can apply to the PutObject request. For parameters that apply to an UploadPart request, see the UploadPart API reference. The SDK ignores any parameter that is not applicable to the operation represented by the CommandInterface object.

    concurrency
        (int, default: int(3)) Maximum number of concurrent UploadPart operations allowed during the multipart upload.

    part_size
        (int, default: int(5242880)) Part size, in bytes, to use when doing a multipart upload. The value must be between 5 MB and 5 GB, inclusive.

    state
        (Aws\Multipart\UploadState) An object that represents the state of the multipart upload and that is used to resume a previous upload. When this option is provided, the $bucket and $key arguments and the part_size option are ignored.

    params
        An associative array that provides configuration options for each subcommand. For example:

        new ObjectUploader($client, $bucket, $key, $body, $acl, ['params' => ['CacheControl' => <some_value>]])
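As a rough illustration of the mup_threshold option described above, the following sketch shows how the upload path can be reasoned about from the payload size. This is not the SDK's internal code; the helper name is illustrative.

```php
<?php
// Sketch (not the SDK's internal code): how the default mup_threshold of
// 16777216 bytes (16 MB) determines which operation ObjectUploader uses.
// chooseUploadMethod is an illustrative helper, not an SDK API.
function chooseUploadMethod(int $sizeInBytes, int $mupThreshold = 16777216): string
{
    // At or below the threshold, a single PutObject request is used;
    // above it, the upload is split into a multipart upload.
    return $sizeInBytes <= $mupThreshold ? 'PutObject' : 'MultipartUpload';
}

echo chooseUploadMethod(5 * 1024 * 1024) . "\n";   // a 5 MB payload: PutObject
echo chooseUploadMethod(100 * 1024 * 1024) . "\n"; // a 100 MB payload: MultipartUpload
```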
MultipartUploader
Multipart uploads are designed to improve the upload experience for larger objects. They enable you to upload objects in parts independently, in any order, and in parallel.
Amazon S3 customers are encouraged to use multipart uploads for objects greater than 100 MB.
MultipartUploader object
The SDK has a special MultipartUploader object that simplifies the multipart upload process.
Imports
require 'vendor/autoload.php';

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;
Sample Code
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01'
]);

// Use multipart upload
$source = '/path/to/large/file.zip';
$uploader = new MultipartUploader($s3Client, $source, [
    'bucket' => 'your-bucket',
    'key' => 'my-file.zip',
]);

try {
    $result = $uploader->upload();
    echo "Upload complete: {$result['ObjectURL']}\n";
} catch (MultipartUploadException $e) {
    echo $e->getMessage() . "\n";
}
The uploader creates a generator of part data, based on the provided source and configuration, and attempts to upload all parts. If some part uploads fail, the uploader continues to upload later parts until the entire source data is read. Afterwards, the uploader retries the failed parts, or throws an exception containing information about the parts that failed to upload.
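Conceptually, the part generator described above can be pictured as yielding byte ranges that cover the source. The following is a simplified sketch, not the SDK's actual implementation; the function name is illustrative.

```php
<?php
// Simplified sketch of a part-data generator: yields [part number, offset,
// length] tuples covering the source. Not the SDK's actual implementation.
function generateParts(int $totalBytes, int $partSize): \Generator
{
    $partNumber = 1;
    for ($offset = 0; $offset < $totalBytes; $offset += $partSize) {
        // Every part is $partSize bytes except possibly the last one.
        yield [$partNumber++, $offset, min($partSize, $totalBytes - $offset)];
    }
}

// A 12 MB source with the default 5 MB part size produces three parts:
foreach (generateParts(12 * 1024 * 1024, 5 * 1024 * 1024) as [$n, $offset, $len]) {
    echo "part $n: offset $offset, length $len\n";
}
```

Because each part carries its own offset and length, failed parts can be retried independently of the others, which is what makes resuming from an UploadState possible.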
Customizing a multipart upload
You can set custom options on the CreateMultipartUpload, UploadPart, and CompleteMultipartUpload operations executed by the multipart uploader via callbacks passed to its constructor.
Imports
require 'vendor/autoload.php';

use Aws\Command;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;
Sample Code
// Create an S3Client
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01'
]);

// Customizing a multipart upload
$source = '/path/to/large/file.zip';
$uploader = new MultipartUploader($s3Client, $source, [
    'bucket' => 'your-bucket',
    'key' => 'my-file.zip',
    'before_initiate' => function (Command $command) {
        // $command is a CreateMultipartUpload operation
        $command['CacheControl'] = 'max-age=3600';
    },
    'before_upload' => function (Command $command) {
        // $command is an UploadPart operation
        $command['RequestPayer'] = 'requester';
    },
    'before_complete' => function (Command $command) {
        // $command is a CompleteMultipartUpload operation
        $command['RequestPayer'] = 'requester';
    },
]);
Manual garbage collection between part uploads
If you are hitting the memory limit with large uploads, this may be due to cyclic references generated by the SDK that have not yet been collected by the PHP garbage collector. Invoking gc_collect_cycles() between part uploads allows those cycles to be collected before the limit is reached:

$uploader = new MultipartUploader($client, $source, [
    'bucket' => 'your-bucket',
    'key' => 'your-key',
    'before_upload' => function (\Aws\Command $command) {
        gc_collect_cycles();
    }
]);
Recovering from errors
When an error occurs during the multipart upload process, a MultipartUploadException is thrown. This exception provides access to the UploadState object, which contains information about the multipart upload’s progress. The UploadState can be used to resume an upload that failed to complete.
Imports
require 'vendor/autoload.php'; use Aws\Exception\MultipartUploadException; use Aws\S3\MultipartUploader; use Aws\S3\S3Client;
Sample Code
// Create an S3Client
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01'
]);

$source = '/path/to/large/file.zip';
$uploader = new MultipartUploader($s3Client, $source, [
    'bucket' => 'your-bucket',
    'key' => 'my-file.zip',
]);

// Recover from errors
do {
    try {
        $result = $uploader->upload();
    } catch (MultipartUploadException $e) {
        $uploader = new MultipartUploader($s3Client, $source, [
            'state' => $e->getState(),
        ]);
    }
} while (!isset($result));

// Abort a multipart upload if it failed
try {
    $result = $uploader->upload();
} catch (MultipartUploadException $e) {
    // State contains the "Bucket", "Key", and "UploadId"
    $params = $e->getState()->getId();
    $result = $s3Client->abortMultipartUpload($params);
}
Resuming an upload from an UploadState attempts to upload parts that are not already uploaded. The state object tracks the missing parts, even if they are not consecutive. The uploader reads or seeks through the provided source file to the byte ranges that belong to the parts that still need to be uploaded.
UploadState objects are serializable, so you can also resume an upload in a different process. You can also get the UploadState object, even when you’re not handling an exception, by calling $uploader->getState().
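Because the state is serializable, one way to hand it to another process is to write it to disk. The following sketch uses PHP's native serialize()/unserialize(); the helper names and file path are illustrative, not SDK APIs.

```php
<?php
// Sketch: persisting an upload state so another process can resume it.
// saveUploadState/loadUploadState are illustrative helpers, not SDK APIs.
function saveUploadState(object $state, string $path): void
{
    // serialize() works for any serializable object, including the
    // Aws\Multipart\UploadState obtained from getState().
    file_put_contents($path, serialize($state));
}

function loadUploadState(string $path): object
{
    // The resuming process passes the restored object as the 'state'
    // option of a new MultipartUploader.
    return unserialize(file_get_contents($path));
}
```

A resuming process would then construct the uploader as new MultipartUploader($s3Client, $source, ['state' => loadUploadState($path)]).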
Important
Streams passed in as a source to a MultipartUploader are not automatically rewound before uploading. If you’re using a stream instead of a file path in a loop similar to the previous example, reset the $source variable inside of the catch block.
Imports
require 'vendor/autoload.php'; use Aws\Exception\MultipartUploadException; use Aws\S3\MultipartUploader; use Aws\S3\S3Client;
Sample Code
// Create an S3Client
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01'
]);

// Using a stream instead of a file path
$source = fopen('/path/to/large/file.zip', 'rb');

$uploader = new MultipartUploader($s3Client, $source, [
    'bucket' => 'your-bucket',
    'key' => 'my-file.zip',
]);

do {
    try {
        $result = $uploader->upload();
    } catch (MultipartUploadException $e) {
        rewind($source);
        $uploader = new MultipartUploader($s3Client, $source, [
            'state' => $e->getState(),
        ]);
    }
} while (!isset($result));

fclose($source);
Aborting a multipart upload
A multipart upload can be aborted by retrieving the UploadId contained in the UploadState object and passing it to abortMultipartUpload.
try {
    $result = $uploader->upload();
} catch (MultipartUploadException $e) {
    // State contains the "Bucket", "Key", and "UploadId"
    $params = $e->getState()->getId();
    $result = $s3Client->abortMultipartUpload($params);
}
Asynchronous multipart uploads
Calling upload() on the MultipartUploader is a blocking request. If you are working in an asynchronous context, you can get a promise for the multipart upload.
Imports

require 'vendor/autoload.php';

use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;
Sample Code
// Create an S3Client
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01'
]);

$source = '/path/to/large/file.zip';
$uploader = new MultipartUploader($s3Client, $source, [
    'bucket' => 'your-bucket',
    'key' => 'my-file.zip',
]);

$promise = $uploader->promise();
Configuration
The MultipartUploader object constructor accepts the following arguments:

$client
    The Aws\ClientInterface object to use for performing the transfers. This should be an instance of Aws\S3\S3Client.

$source
    The source data being uploaded. This can be a path or URL (for example, /path/to/file.jpg), a resource handle (for example, fopen('/path/to/file.jpg', 'r')), or an instance of a PSR-7 stream.

$config
    An associative array of configuration options for the multipart upload. The following configuration options are valid:

    acl
        (string) Access control list (ACL) to set on the object being uploaded. Objects are private by default.

    add_content_md5
        (bool) Set to true to automatically calculate the MD5 checksum for the upload.

    before_complete
        (callable) Callback to invoke before the CompleteMultipartUpload operation. The callback should have a function signature like function (Aws\Command $command) {...}.

    before_initiate
        (callable) Callback to invoke before the CreateMultipartUpload operation. The callback should have a function signature like function (Aws\Command $command) {...}.

    before_upload
        (callable) Callback to invoke before any UploadPart operations. The callback should have a function signature like function (Aws\Command $command) {...}.

    bucket
        (string, required) Name of the bucket to which the object is being uploaded.

    concurrency
        (int, default: int(5)) Maximum number of concurrent UploadPart operations allowed during the multipart upload.

    key
        (string, required) Key to use for the object being uploaded.

    params
        An associative array that provides configuration options for each subcommand. For example:

        new MultipartUploader($client, $source, ['params' => ['CacheControl' => <some_value>]])

    part_size
        (int, default: int(5242880)) Part size, in bytes, to use when doing a multipart upload. This must be between 5 MB and 5 GB, inclusive.

    state
        (Aws\Multipart\UploadState) An object that represents the state of the multipart upload and that is used to resume a previous upload. When this option is provided, the bucket, key, and part_size options are ignored.
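As a rough illustration of how the part_size option interacts with object size, the number of UploadPart requests for an object can be estimated as follows. This is a sketch; the helper name is illustrative, not an SDK API.

```php
<?php
// Sketch: estimating how many UploadPart requests a multipart upload will
// make for a given object size. estimatePartCount is an illustrative
// helper, not an SDK API.
function estimatePartCount(int $objectSizeBytes, int $partSizeBytes = 5242880): int
{
    // Each part is at most part_size bytes; the final part may be smaller.
    return (int) ceil($objectSizeBytes / $partSizeBytes);
}

// A 100 MB object with the default 5 MB part size:
echo estimatePartCount(100 * 1024 * 1024) . "\n"; // 20
```

Because S3 limits a multipart upload to 10,000 parts, a larger part_size is needed for very large objects; this estimate shows when the default would exceed that limit.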
Multipart copies
The AWS SDK for PHP also includes a MultipartCopy object that is used in a similar way to the MultipartUploader, but is designed for copying objects between 5 GB and 5 TB in size within Amazon S3.
Imports

require 'vendor/autoload.php';

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartCopy;
use Aws\S3\S3Client;
Sample Code
// Create an S3Client
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01'
]);

// Copy objects within S3
$copier = new MultipartCopy($s3Client, '/bucket/key?versionId=foo', [
    'bucket' => 'your-bucket',
    'key' => 'my-file.zip',
]);

try {
    $result = $copier->copy();
    echo "Copy complete: {$result['ObjectURL']}\n";
} catch (MultipartUploadException $e) {
    echo $e->getMessage() . "\n";
}