Creates an Amazon S3 object using the multipart upload APIs. It is analogous to create_object().
While each individual part of a multipart upload can hold up to 5 GB of data, this method limits the
part size to a maximum of 500 MB. The combined size of all parts cannot exceed 5 TB of data. When an
object is stored in Amazon S3, the data is streamed to multiple storage servers in multiple data
centers. This ensures that the data remains available in the event of internal network or hardware failure.
Amazon S3 charges for storage as well as for requests to the service. Smaller part sizes (and more
requests) allow for faster failures and better upload reliability. Larger part sizes (and fewer
requests) cost slightly less but have lower upload reliability.
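To make the trade-off concrete, here is a minimal standalone sketch (not part of the SDK; the 2000 MiB object size is a made-up example) showing how the part size determines the number of upload_part requests:

```php
<?php
// Define a mebibyte
define('MB', 1024 * 1024);

// A hypothetical 2000 MiB object
$object_size = 2000 * MB;

// Compare the request counts for the minimum, default, and maximum part sizes
foreach (array(5 * MB, 50 * MB, 500 * MB) as $part_size)
{
    $parts = (int) ceil($object_size / $part_size);
    printf("Part size %3d MB => %d upload_part requests\n", $part_size / MB, $parts);
}
// Prints:
// Part size   5 MB => 400 upload_part requests
// Part size  50 MB => 40 upload_part requests
// Part size 500 MB => 4 upload_part requests
```

More parts mean each retry re-sends less data, which is why smaller parts improve reliability at the cost of more billable requests.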
In certain cases with large objects, it's possible for this method to attempt to open more file system
connections than allowed by the OS. In this case, either increase the number of connections allowed or
increase the value of the partSize parameter to use a larger part size.
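As a hedged sketch of the second workaround (the bucket and file names below are placeholders), raising partSize from the 50 MB default toward the 500 MB maximum cuts the number of parts, and therefore the number of open handles and concurrent cURL connections, by up to an order of magnitude:

```php
<?php
// Define a mebibyte
define('MB', 1024 * 1024);

// Instantiate the class
$s3 = new AmazonS3();

// Hypothetical upload of a very large file using the maximum part size.
// The method clamps partSize to the 5 MB - 500 MB range, so any larger
// value would be reduced to 500 MB anyway.
$response = $s3->create_mpu_object('my-bucket', 'large-backup.tar', array(
    'fileUpload' => '/path/to/large-backup.tar',
    'partSize'   => 500 * MB,
));
```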
Access
public
Parameters
Parameter | Type | Required | Description
$bucket | string | Required | The name of the bucket to use.
$filename | string | Required | The file name for the object.
$opt | array | Optional | An associative array of parameters that can have the following keys:
fileUpload - string|resource - Required - The URL/path for the file to upload, or an open resource.
acl - string - Optional - The ACL settings for the specified object. [Allowed values: AmazonS3::ACL_PRIVATE, AmazonS3::ACL_PUBLIC, AmazonS3::ACL_OPEN, AmazonS3::ACL_AUTH_READ, AmazonS3::ACL_OWNER_READ, AmazonS3::ACL_OWNER_FULL_CONTROL]. The default value is ACL_PRIVATE.
contentType - string - Optional - The type of content that is being sent in the body. The default value is application/octet-stream.
headers - array - Optional - Standard HTTP headers to send along in the request. Accepts an associative array of key-value pairs.
length - integer - Optional - The size of the object in bytes. For more information, see RFC 2616, section 14.13. The value can also be passed to the header option as Content-Length.
limit - integer - Optional - The maximum number of concurrent uploads done by cURL. Gets passed to CFBatchRequest.
meta - array - Optional - An associative array of key-value pairs. Any header starting with x-amz-meta- is considered user metadata. It will be stored with the object and returned when you retrieve the object. The total size of the HTTP request, not including the body, must be less than 4 KB.
partSize - integer - Optional - The size of an individual part. The size may not be smaller than 5 MB or larger than 500 MB. The default value is 50 MB.
redirectTo - string - Optional - The URI to send an HTTP 301 redirect to when accessing this object. The value must be prefixed with /, http://, or https://.
seekTo - integer - Optional - The starting position in bytes for the first piece of the file/stream to upload.
storage - string - Optional - Whether to use Standard or Reduced Redundancy storage. [Allowed values: AmazonS3::STORAGE_STANDARD, AmazonS3::STORAGE_REDUCED]. The default value is STORAGE_STANDARD.
uploadId - string - Optional - An upload ID identifying an existing multipart upload to use. If this option is not set, one will be created automatically.
curlopts - array - Optional - A set of values to pass directly into curl_setopt(), where the key is a pre-defined CURLOPT_* constant.
returnCurlHandle - boolean - Optional - A private toggle specifying that the cURL handle be returned rather than actually completing the request. This toggle is useful for manually managed batch requests.
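The fileUpload, seekTo, and length options combine when a stream resource is passed instead of a path. Here is a hedged sketch (the path and sizes are placeholders): the method reads starting at seekTo, treats length as the total bytes to upload, and internally forces limit to 1 because only one reader can consume a single resource:

```php
<?php
// Define a mebibyte
define('MB', 1024 * 1024);

// Instantiate the class
$s3 = new AmazonS3();

// Open the source file as a stream resource (placeholder path)
$fp = fopen('/path/to/archive.bin', 'rb');

// Upload 1 GiB of the stream, starting 100 MiB in, as 100 MB parts
$response = $s3->create_mpu_object('my-bucket', 'archive.bin', array(
    'fileUpload' => $fp,        // open resource instead of a URL/path
    'seekTo'     => 100 * MB,   // starting position in bytes
    'length'     => 1024 * MB,  // size of the upload in bytes
    'partSize'   => 100 * MB,
));

fclose($fp);
```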
Returns
A CFResponse object containing a parsed HTTP response.
Examples
Create a new object using the multipart upload APIs.
// Define a mebibyte
define('MB', 1024 * 1024);
// Instantiate the class
$s3 = new AmazonS3();
$bucket = 'my-bucket' . strtolower($s3->key);
// Create a new multipart object. Set several optional parameters as well.
$response = $s3->create_mpu_object($bucket, 'tv_episode.webm', array(
    'fileUpload'  => 'tv_episode.webm',
    'partSize'    => 40*MB,
    'contentType' => 'video/webm',
    'acl'         => AmazonS3::ACL_PRIVATE,
    'storage'     => AmazonS3::STORAGE_STANDARD,
    'meta'        => array(
        'resolution' => '720p',
        'rating'     => 'US TV-14',
        'runtime'    => '42:42'
    )
));
// Success?
var_dump($response->isOK());
Result:
bool(true)
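The uploadId option can also resume an interrupted upload against the same bucket and key. A hedged sketch follows; the upload ID is a placeholder, and in practice it would come from a prior initiate_multipart_upload() response or from list_multipart_uploads():

```php
<?php
// Instantiate the class
$s3 = new AmazonS3();

// Resume an existing multipart upload. Passing `uploadId` skips the
// initiate step and queues parts directly against that upload.
$response = $s3->create_mpu_object('my-bucket', 'tv_episode.webm', array(
    'fileUpload' => 'tv_episode.webm',
    'uploadId'   => 'EXAMPLE-UPLOAD-ID', // placeholder value
));

// Success?
var_dump($response->isOK());
```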
Source
Method defined in services/s3.class.php
public function create_mpu_object($bucket, $filename, $opt = null)
{
    if ($this->use_batch_flow)
    {
        throw new S3_Exception(__FUNCTION__ . '() cannot be batch requested');
    }

    if (!$opt) $opt = array();

    // Handle content length. Can also be passed as an HTTP header.
    if (isset($opt['length']))
    {
        $opt['headers']['Content-Length'] = $opt['length'];
        unset($opt['length']);
    }

    // URI to redirect to. Can also be passed as an HTTP header.
    if (isset($opt['redirectTo']))
    {
        $opt['headers']['x-amz-website-redirect-location'] = $opt['redirectTo'];
        unset($opt['redirectTo']);
    }

    if (!isset($opt['fileUpload']))
    {
        throw new S3_Exception('The `fileUpload` option is required in ' . __FUNCTION__ . '().');
    }
    elseif (is_resource($opt['fileUpload']))
    {
        $opt['limit'] = 1; // We can only read from this one resource.
        $upload_position = isset($opt['seekTo']) ? (integer) $opt['seekTo'] : ftell($opt['fileUpload']);
        $upload_filesize = isset($opt['headers']['Content-Length']) ? (integer) $opt['headers']['Content-Length'] : null;

        if (!isset($upload_filesize) && $upload_position !== false)
        {
            $stats = fstat($opt['fileUpload']);

            if ($stats && $stats['size'] >= 0)
            {
                $upload_filesize = $stats['size'] - $upload_position;
            }
        }
    }
    else
    {
        $upload_position = isset($opt['seekTo']) ? (integer) $opt['seekTo'] : 0;

        if (isset($opt['headers']['Content-Length']))
        {
            $upload_filesize = (integer) $opt['headers']['Content-Length'];
        }
        else
        {
            $upload_filesize = filesize($opt['fileUpload']);

            if ($upload_filesize !== false)
            {
                $upload_filesize -= $upload_position;
            }
        }
    }

    if ($upload_position === false || !isset($upload_filesize) || $upload_filesize === false || $upload_filesize < 0)
    {
        throw new S3_Exception('The size of `fileUpload` cannot be determined in ' . __FUNCTION__ . '().');
    }

    // Handle part size
    if (isset($opt['partSize']))
    {
        // If less than 5 MB...
        if ((integer) $opt['partSize'] < 5242880)
        {
            $opt['partSize'] = 5242880; // 5 MB
        }
        // If more than 500 MB...
        elseif ((integer) $opt['partSize'] > 524288000)
        {
            $opt['partSize'] = 524288000; // 500 MB
        }
    }
    else
    {
        $opt['partSize'] = 52428800; // 50 MB
    }

    // If the upload size is smaller than the part size, fall back to create_object().
    if ($upload_filesize < $opt['partSize'] && !isset($opt['uploadId']))
    {
        return $this->create_object($bucket, $filename, $opt);
    }

    // Initiate the multipart upload
    if (isset($opt['uploadId']))
    {
        $upload_id = $opt['uploadId'];
    }
    else
    {
        // Compose options for initiate_multipart_upload().
        $_opt = array();
        foreach (array('contentType', 'acl', 'storage', 'headers', 'meta') as $param)
        {
            if (isset($opt[$param]))
            {
                $_opt[$param] = $opt[$param];
            }
        }

        $upload = $this->initiate_multipart_upload($bucket, $filename, $_opt);
        if (!$upload->isOK())
        {
            return $upload;
        }

        // Fetch the UploadId
        $upload_id = (string) $upload->body->UploadId;
    }

    // Get the list of pieces
    $pieces = $this->get_multipart_counts($upload_filesize, (integer) $opt['partSize']);

    // Queue batch requests
    $batch = new CFBatchRequest(isset($opt['limit']) ? (integer) $opt['limit'] : null);
    foreach ($pieces as $i => $piece)
    {
        $this->batch($batch)->upload_part($bucket, $filename, $upload_id, array(
            'expect'     => '100-continue',
            'fileUpload' => $opt['fileUpload'],
            'partNumber' => ($i + 1),
            'seekTo'     => $upload_position + (integer) $piece['seekTo'],
            'length'     => (integer) $piece['length'],
        ));
    }

    // Send batch requests
    $batch_responses = $this->batch($batch)->send();
    if (!$batch_responses->areOK())
    {
        return $batch_responses;
    }

    // Compose completion XML
    $parts = array();
    foreach ($batch_responses as $i => $response)
    {
        $parts[] = array('PartNumber' => ($i + 1), 'ETag' => $response->header['etag']);
    }

    return $this->complete_multipart_upload($bucket, $filename, $upload_id, $parts);
}