Amazon S3 stream wrapper with AWS SDK for PHP Version 3
The Amazon S3 stream wrapper enables you to store and retrieve data from Amazon S3 using built-in PHP functions, such as file_get_contents, fopen, copy, rename, unlink, mkdir, and rmdir.
You need to register the Amazon S3 stream wrapper to use it.
$client = new Aws\S3\S3Client([/** options **/]);

// Register the stream wrapper from an S3Client object
$client->registerStreamWrapper();
This enables you to access buckets and objects stored in Amazon S3 using the s3:// protocol. The Amazon S3 stream wrapper accepts strings that contain a bucket name followed by a forward slash and an optional object key or prefix: s3://<bucket>[/<key-or-prefix>].
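For example, the following are valid stream wrapper paths (the bucket and key names are placeholders):

// A bucket
$bucket = 's3://amzn-s3-demo-bucket';

// An object within the bucket
$object = 's3://amzn-s3-demo-bucket/path/to/key.txt';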
Note
The stream wrapper is designed for working with objects and buckets on which you have at least read permission. This means that your user should have permission to execute ListBucket on any bucket and GetObject on any object with which the user needs to interact. For use cases where you don’t have this permission level, we recommend that you use Amazon S3 client operations directly.
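For example, a direct GetObject call through the client needs only GetObject permission on the object in question. A minimal sketch (the bucket and key names are placeholders):

$client = new Aws\S3\S3Client([/** options **/]);

// Download a single object without the stream wrapper
$result = $client->getObject([
    'Bucket' => 'amzn-s3-demo-bucket',
    'Key'    => 'key',
]);
echo $result['Body'];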
Download data
You can grab the contents of an object by using file_get_contents. However, be careful with this function; it loads the entire contents of the object into memory.
// Download the body of the "key" object in the "bucket" bucket
$data = file_get_contents('s3://bucket/key');
Use fopen() when working with larger files or if you need to stream data from Amazon S3.
// Open a stream in read-only mode
if ($stream = fopen('s3://bucket/key', 'r')) {
    // While the stream is still open
    while (!feof($stream)) {
        // Read 1,024 bytes from the stream
        echo fread($stream, 1024);
    }
    // Be sure to close the stream resource when you're done with it
    fclose($stream);
}
Open seekable streams
Streams opened in “r” mode only allow data to be read from the stream, and are not seekable by default. This is so that data can be downloaded from Amazon S3 in a truly streaming manner, where previously read bytes do not need to be buffered into memory. If you need a stream to be seekable, you can pass seekable into the stream context options.
$context = stream_context_create([
    's3' => ['seekable' => true]
]);

if ($stream = fopen('s3://bucket/key', 'r', false, $context)) {
    // Read bytes from the stream
    fread($stream, 1024);
    // Seek back to the beginning of the stream
    fseek($stream, 0);
    // Read the same bytes that were previously read
    fread($stream, 1024);
    fclose($stream);
}
Opening seekable streams enables you to seek to bytes that were previously read. You can’t skip ahead to bytes that have not yet been read from the remote server. To allow previously read data to be recalled, data is buffered in a PHP temp stream using a stream decorator. When the amount of cached data exceeds 2 MB, the data in the temp stream transfers from memory to disk. Keep this in mind when downloading large files from Amazon S3 using the seekable stream context setting.
Upload data
You can upload data to Amazon S3 using file_put_contents().
file_put_contents('s3://bucket/key', 'Hello!');
You can upload larger files by streaming data using fopen() and a “w”, “x”, or “a” stream access mode. The Amazon S3 stream wrapper does not support simultaneous read and write streams (e.g., “r+”, “w+”). This is because the HTTP protocol doesn’t allow simultaneous reading and writing.
$stream = fopen('s3://bucket/key', 'w');
fwrite($stream, 'Hello!');
fclose($stream);
Note
Amazon S3 requires a Content-Length header to be specified before the payload of a request is sent. Therefore, the data to be uploaded in a PutObject operation is internally buffered using a PHP temp stream until the stream is flushed or closed.
Note
File write errors are returned only when a call to fflush is made. These errors are not returned when an unflushed fclose is called. The return value for fclose will be true if it closes the stream, regardless of any errors in response to its internal fflush. These errors are also not returned when calling file_put_contents because of how PHP implements it.
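Given this behavior, you can call fflush explicitly and check its return value to detect a failed upload before closing the stream. A short sketch:

$stream = fopen('s3://bucket/key', 'w');
fwrite($stream, 'Hello!');

// fflush returns false if the buffered upload fails;
// a bare fclose would silently discard the error
if (fflush($stream) === false) {
    // Handle the failed upload here
}
fclose($stream);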
fopen modes
PHP’s fopen() function requires that a $mode option is specified. The mode option specifies whether data can be read or written to a stream, and whether the file must exist when opening a stream.
The Amazon S3 stream wrapper supports the following modes for streams that target Amazon S3 objects.
r | A read-only stream where the object must already exist.
w | A write-only stream. If the object already exists, it is overwritten.
a | A write-only stream. If the object already exists, it is downloaded to a temporary stream, and any writes to the stream are appended to any previously uploaded data.
x | A write-only stream. An error is raised if the object already exists.
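For example, appending with “a” mode looks like the following sketch. Because the existing object is downloaded first, appending can be expensive for large objects.

// Open an append stream; the existing object body is downloaded
// to a temporary stream so new writes land after it
$stream = fopen('s3://bucket/key', 'a');
fwrite($stream, "\nAppended line");

// The combined data is uploaded when the stream is flushed or closed
fclose($stream);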
Other object functions
Stream wrappers allow many different built-in PHP functions to work with a custom system such as Amazon S3. Here are some of the functions that the Amazon S3 stream wrapper enables you to use with objects stored in Amazon S3.
unlink() | Delete an object from a bucket. You can pass in any options available to the DeleteObject operation to modify how the object is deleted.
filesize() | Get the size of an object.
is_file() | Checks if a URL is a file.
file_exists() | Checks if an object exists.
filetype() | Checks if a URL maps to a file or bucket (dir).
file() | Load the contents of an object into an array of lines. You can pass in any options available to the GetObject operation to modify how the file is downloaded.
filemtime() | Get the last modified date of an object.
rename() | Rename an object by copying the object then deleting the original. You can pass in options available to the CopyObject and DeleteObject operations to modify how the object is copied and deleted.
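The following sketch combines several of these functions (the bucket and key are placeholders):

$path = 's3://bucket/key';

if (file_exists($path) && is_file($path)) {
    echo 'Size: ' . filesize($path) . " bytes\n";
    echo 'Last modified: ' . date('c', filemtime($path)) . "\n";

    // Delete the object from the bucket
    unlink($path);
}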
Note
Although copy generally works with the Amazon S3 stream wrapper, some errors might not be properly reported due to the internals of the copy function in PHP. We recommend that you use an instance of Aws\S3\ObjectCopier instead.
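A minimal sketch of using Aws\S3\ObjectCopier follows; the source and destination arrays and the ACL argument reflect the class’s constructor in recent SDK versions, so check the API reference for your version.

use Aws\S3\ObjectCopier;

// $client is an Aws\S3\S3Client instance
$copier = new ObjectCopier(
    $client,
    ['Bucket' => 'source-bucket', 'Key' => 'source-key'],
    ['Bucket' => 'target-bucket', 'Key' => 'target-key'],
    'private' // ACL applied to the copied object
);
$copier->copy();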
Work with buckets and folders
Use mkdir() to work with buckets
You can create and browse Amazon S3 buckets similarly to how PHP allows you to create and traverse directories on your file system.
Here’s an example that creates a bucket.
mkdir('s3://amzn-s3-demo-bucket');
Note
In April 2023, Amazon S3 automatically enabled S3 Block Public Access and disabled access control lists for all newly created buckets. This change also affects how the StreamWrapper's mkdir function works with permissions and ACLs. More information is available in this What's New with AWS article.
You can pass in stream context options to the mkdir() method to modify how the bucket is created using the parameters available to the CreateBucket operation.
// Create a bucket in the EU (Ireland) Region
mkdir('s3://amzn-s3-demo-bucket', 0500, true, stream_context_create([
    's3' => ['LocationConstraint' => 'eu-west-1']
]));
You can delete buckets using the rmdir() function.
// Delete a bucket
rmdir('s3://amzn-s3-demo-bucket');
Note
A bucket can only be deleted if it is empty.
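If the bucket still contains objects, you can empty it first, for example with the SDK’s Aws\S3\BatchDelete helper. A sketch (this deletes every object in the bucket, so use it with care):

use Aws\S3\BatchDelete;

// Delete all objects in the bucket, then delete the bucket itself
BatchDelete::fromListObjects($client, ['Bucket' => 'amzn-s3-demo-bucket'])->delete();
rmdir('s3://amzn-s3-demo-bucket');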
Use mkdir() to work with folders
After you create a bucket, you can use mkdir() to create objects that function as folders, as in a file system.
The following code snippet adds a folder object named 'my-folder' to the existing bucket named 'amzn-s3-demo-bucket'. Use the forward slash (/) character to separate a folder object name from the bucket name and any additional folder name.
mkdir('s3://amzn-s3-demo-bucket/my-folder');
The previous note about permission changes after April 2023 also comes into play when you create folder objects.
Use the rmdir() function to delete an empty folder object, as shown in the following snippet.
rmdir('s3://amzn-s3-demo-bucket/my-folder');
List the contents of a bucket
You can use the opendir(), readdir(), rewinddir(), and closedir() PHP functions with the Amazon S3 stream wrapper to traverse the contents of a bucket. You can pass in parameters available to the ListObjects operation as custom stream context options to the opendir() function to modify how objects are listed.
$dir = "s3://bucket/"; if (is_dir($dir) && ($dh = opendir($dir))) { while (($file = readdir($dh)) !== false) { echo "filename: {$file} : filetype: " . filetype($dir . $file) . "\n"; } closedir($dh); }
You can recursively list each object and prefix in a bucket using PHP’s RecursiveDirectoryIterator.
$dir = 's3://bucket';
$iterator = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($dir));

foreach ($iterator as $file) {
    echo $file->getType() . ': ' . $file . "\n";
}
Another way to list the contents of a bucket recursively that incurs fewer HTTP requests is to use the Aws\recursive_dir_iterator($path, $context = null) function.
<?php

require 'vendor/autoload.php';

$iter = Aws\recursive_dir_iterator('s3://bucket/key');
foreach ($iter as $filename) {
    echo $filename . "\n";
}
Stream context options
You can customize the client used by the stream wrapper, or the cache used to cache previously loaded information about buckets and keys, by passing in custom stream context options.
The stream wrapper supports the following stream context options on every operation.
- client - The Aws\AwsClientInterface object to use to execute commands.
- cache - An instance of Aws\CacheInterface to use to cache previously obtained file stats. By default, the stream wrapper uses an in-memory LRU cache.
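For example, to use a specific client and cache for a single stream (a sketch; Aws\LruArrayCache is one CacheInterface implementation that ships with the SDK):

$context = stream_context_create([
    's3' => [
        'client' => $client,                 // execute commands with this client
        'cache'  => new Aws\LruArrayCache(), // cache file stats in memory
    ],
]);

$stream = fopen('s3://bucket/key', 'r', false, $context);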