

# Organizing, listing, and working with your objects
<a name="organizing-objects"></a>

In Amazon S3, you can use prefixes to organize your storage. A prefix is a logical grouping of the objects in a bucket. The prefix value is similar to a directory name that enables you to store similar data under the same directory in a bucket. When you programmatically upload objects, you can use prefixes to organize your data.

In the Amazon S3 console, prefixes are called folders. You can view all your objects and folders in the S3 console by navigating to a bucket. You can also view information about each object, including object properties.

For more information about listing and organizing your data in Amazon S3, see the following topics.

**Topics**
+ [Organizing objects using prefixes](using-prefixes.md)
+ [Listing object keys programmatically](ListingKeysUsingAPIs.md)
+ [Organizing objects in the Amazon S3 console by using folders](using-folders.md)
+ [Viewing object properties in the Amazon S3 console](view-object-properties.md)
+ [Categorizing your objects using tags](object-tagging.md)

# Organizing objects using prefixes
<a name="using-prefixes"></a>

You can use prefixes to organize the data that you store in Amazon S3 buckets. A prefix is a string of characters at the beginning of the object key name. A prefix can be any length, subject to the maximum length of the object key name (1,024 bytes). You can think of prefixes as a way to organize your data in a similar way to directories. However, prefixes are not directories.

Searching by prefix limits the results to only those keys that begin with the specified prefix. The delimiter causes a list operation to roll up all the keys that share a common prefix into a single summary list result. 

The purpose of the prefix and delimiter parameters is to help you organize and then browse your keys hierarchically. To do this, first pick a delimiter for your bucket, such as slash (/), that doesn't occur in any of your anticipated key names. You can use any character as a delimiter; there is nothing unique about the slash (/) character, but it is a very common choice. Next, construct your key names by concatenating all containing levels of the hierarchy, separating each level with the delimiter. 

For example, if you were storing information about cities, you might naturally organize them by continent, then by country, then by province or state. Because these names don't usually contain punctuation, you might use slash (/) as the delimiter. The following examples use a slash (/) delimiter.
+ Europe/France/Nouvelle-Aquitaine/Bordeaux
+ North America/Canada/Quebec/Montreal
+ North America/USA/Washington/Bellevue
+ North America/USA/Washington/Seattle

If you stored data for every city in the world in this manner, it would become awkward to manage a flat key namespace. By using `Prefix` and `Delimiter` with the list operation, you can use the hierarchy that you've created to list your data. For example, to list all the states in USA, set `Delimiter='/'` and `Prefix='North America/USA/'`. To list all the provinces in Canada for which you have data, set `Delimiter='/'` and `Prefix='North America/Canada/'`.
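
The `Prefix` and `Delimiter` rollup described above can be sketched in pure Python. This is an illustration of the listing semantics only, not a real S3 call; with boto3, you would instead pass `Prefix` and `Delimiter` to `list_objects_v2`.

```python
# Pure-Python sketch of how Prefix and Delimiter roll up keys in a list
# request. Illustrative only; a real listing would use an S3 client.

def list_keys(keys, prefix="", delimiter=""):
    """Return (contents, common_prefixes) the way an S3 list operation would."""
    contents, common_prefixes = [], []
    for key in sorted(keys):  # results come back in UTF-8 binary order
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter and delimiter in rest:
            # Keys sharing a prefix up to the first delimiter roll up into one entry.
            rolled = prefix + rest.split(delimiter)[0] + delimiter
            if rolled not in common_prefixes:
                common_prefixes.append(rolled)
        else:
            contents.append(key)
    return contents, common_prefixes

keys = [
    "Europe/France/Nouvelle-Aquitaine/Bordeaux",
    "North America/Canada/Quebec/Montreal",
    "North America/USA/Washington/Bellevue",
    "North America/USA/Washington/Seattle",
]
# List the "states" under North America/USA/:
print(list_keys(keys, prefix="North America/USA/", delimiter="/"))
# → ([], ['North America/USA/Washington/'])
```

Both Bellevue and Seattle keys roll up into the single common prefix `North America/USA/Washington/`, so you see one "state" rather than every city key under it.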

For more information about delimiters, prefixes, and nested folders, see [Difference between prefixes and nested folders](https://repost.aws/knowledge-center/s3-prefix-nested-folders-difference).

## Listing objects using prefixes and delimiters
<a name="prefixes-list-example"></a>

If you issue a list request with a delimiter, you can browse your hierarchy at only one level, skipping over and summarizing the (possibly millions of) keys nested at deeper levels. For example, assume that you have a bucket (*amzn-s3-demo-bucket*) with the following keys:

`sample.jpg` 

`photos/2006/January/sample.jpg` 

`photos/2006/February/sample2.jpg` 

`photos/2006/February/sample3.jpg` 

`photos/2006/February/sample4.jpg` 

The sample bucket has only the `sample.jpg` object at the root level. To list only the root level objects in the bucket, you send a GET request on the bucket with the slash (`/`) delimiter character. In response, Amazon S3 returns the `sample.jpg` object key because it does not contain the `/` delimiter character. All other keys contain the delimiter character. Amazon S3 groups these keys and returns a single `CommonPrefixes` element with the prefix value `photos/`, which is a substring from the beginning of these keys to the first occurrence of the specified delimiter.

**Example**  

```
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>amzn-s3-demo-bucket</Name>
  <Prefix></Prefix>
  <Marker></Marker>
  <MaxKeys>1000</MaxKeys>
  <Delimiter>/</Delimiter>
  <IsTruncated>false</IsTruncated>
  <Contents>
    <Key>sample.jpg</Key>
    <LastModified>2011-07-24T19:39:30.000Z</LastModified>
    <ETag>&quot;d1a7fb5eab1c16cb4f7cf341cf188c3d&quot;</ETag>
    <Size>6</Size>
    <Owner>
      <ID>75cc57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID>
    </Owner>
    <StorageClass>STANDARD</StorageClass>
  </Contents>
  <CommonPrefixes>
    <Prefix>photos/</Prefix>
  </CommonPrefixes>
</ListBucketResult>
```

For more information about listing object keys programmatically, see [Listing object keys programmatically](ListingKeysUsingAPIs.md).

# Listing object keys programmatically
<a name="ListingKeysUsingAPIs"></a>

In Amazon S3, keys can be listed by prefix. You can choose a common prefix for the names of related keys and mark these keys with a special character that delimits hierarchy. You can then use the list operation to select and browse keys hierarchically. This is similar to how files are stored in directories within a file system. 

Amazon S3 exposes a list operation that lets you enumerate the keys contained in a bucket. Keys are selected for listing by bucket and prefix. For example, consider a bucket named "`dictionary`" that contains a key for every English word. You might make a call to list all the keys in that bucket that start with the letter "q". List results are always returned in UTF-8 binary order. 

 Both the SOAP and REST list operations return an XML document that contains the names of matching keys and information about the object identified by each key. 

**Note**  
 SOAP APIs for Amazon S3 are not available for new customers, and are approaching End of Life (EOL) on August 31, 2025. We recommend that you use either the REST API or the AWS SDKs. 

Groups of keys that share a prefix terminated by a special delimiter can be rolled up by that common prefix for the purposes of listing. This enables applications to organize and browse their keys hierarchically, much like how you would organize your files into directories in a file system. 

For example, to extend the dictionary bucket to contain more than just English words, you might form keys by prefixing each word with its language and a delimiter, such as "`French/logical`". Using this naming scheme and the hierarchical listing feature, you could retrieve a list of only French words. You could also browse the top-level list of available languages without having to iterate through all the lexicographically intervening keys. For more information about this aspect of listing, see [Organizing objects using prefixes](using-prefixes.md). 

**REST API**  
If your application requires it, you can send REST requests directly. You can send a GET request to return some or all of the objects in a bucket or you can use selection criteria to return a subset of the objects in a bucket. For more information, see [GET Bucket (List Objects) Version 2](https://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html) in the *Amazon Simple Storage Service API Reference*.

**List implementation efficiency**  
List performance is not substantially affected by the total number of keys in your bucket. It's also not affected by the presence or absence of the `prefix`, `marker`, `maxkeys`, or `delimiter` arguments. 

**Iterating through multipage results**  
Because buckets can contain a virtually unlimited number of keys, the complete results of a list query can be extremely large. To manage large result sets, the Amazon S3 API supports pagination to split them into multiple responses. Each list keys response returns a page of up to 1,000 keys, along with an indicator that specifies whether the response is truncated. You send a series of list keys requests until you have received all the keys. AWS SDK wrapper libraries provide the same pagination. 
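
The request loop can be sketched as follows. This is a local simulation of the truncation contract using assumed field names (`Contents`, `IsTruncated`, `NextMarker`), not a real S3 call; in practice, a boto3 paginator such as `get_paginator("list_objects_v2")` handles this loop for you.

```python
# Sketch of the list-keys pagination loop described above, simulated locally.

PAGE_SIZE = 1000  # S3 returns at most 1,000 keys per response

def list_page(all_keys, start_after=None, max_keys=PAGE_SIZE):
    """Return one page plus a continuation marker, mimicking a list response."""
    keys = sorted(all_keys)
    if start_after is not None:
        keys = [k for k in keys if k > start_after]
    page = keys[:max_keys]
    is_truncated = len(keys) > max_keys
    return {"Contents": page, "IsTruncated": is_truncated,
            "NextMarker": page[-1] if is_truncated else None}

def list_all(all_keys, max_keys=PAGE_SIZE):
    """Issue list requests until IsTruncated is false, collecting every key."""
    collected, marker = [], None
    while True:
        resp = list_page(all_keys, start_after=marker, max_keys=max_keys)
        collected.extend(resp["Contents"])
        if not resp["IsTruncated"]:
            return collected
        marker = resp["NextMarker"]

demo = [f"logs/{i:04d}.txt" for i in range(2500)]
print(len(list_all(demo)))  # → 2500, fetched in three pages of up to 1,000 keys
```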

## Examples
<a name="ListingKeysUsingAPIs_examples"></a>

To list all of the objects in your bucket, you must have the `s3:ListBucket` permission.

------
#### [ CLI ]

**list-objects**  
The following example uses the `list-objects` command to display the names of all the objects in the specified bucket:  

```
aws s3api list-objects --bucket text-content --query 'Contents[].{Key: Key, Size: Size}'
```
The example uses the `--query` argument to filter the output of `list-objects` down to the key value and size for each object.  
For more information about objects, see [Working with objects in Amazon S3](uploading-downloading-objects.md).  
+  For API details, see [ListObjects](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/list-objects.html) in *AWS CLI Command Reference*. 

**ls**  
The following example lists all objects and prefixes in a bucket by using the `ls` command.  
To use this example command, replace **amzn-s3-demo-bucket** with the name of your bucket.  

```
$ aws s3 ls s3://amzn-s3-demo-bucket
```
+  For more information about the high-level command `ls`, see [List buckets and objects](https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html#using-s3-commands-listing-buckets) in *AWS Command Line Interface User Guide*. 

------
#### [ PowerShell ]

**Tools for PowerShell V4**  
**Example 1: This command retrieves the information about all of the items in the bucket "amzn-s3-demo-bucket".**  

```
Get-S3Object -BucketName amzn-s3-demo-bucket
```
**Example 2: This command retrieves the information about the item "sample.txt" from the bucket "amzn-s3-demo-bucket".**  

```
Get-S3Object -BucketName amzn-s3-demo-bucket -Key sample.txt
```
**Example 3: This command retrieves the information about all items with the prefix "sample" from the bucket "amzn-s3-demo-bucket".**  

```
Get-S3Object -BucketName amzn-s3-demo-bucket -KeyPrefix sample
```
+  For API details, see [ListObjects](https://docs.aws.amazon.com/powershell/v4/reference) in *AWS Tools for PowerShell Cmdlet Reference (V4)*. 

**Tools for PowerShell V5**  
**Example 1: This command retrieves the information about all of the items in the bucket "amzn-s3-demo-bucket".**  

```
Get-S3Object -BucketName amzn-s3-demo-bucket
```
**Example 2: This command retrieves the information about the item "sample.txt" from the bucket "amzn-s3-demo-bucket".**  

```
Get-S3Object -BucketName amzn-s3-demo-bucket -Key sample.txt
```
**Example 3: This command retrieves the information about all items with the prefix "sample" from the bucket "amzn-s3-demo-bucket".**  

```
Get-S3Object -BucketName amzn-s3-demo-bucket -KeyPrefix sample
```
+  For API details, see [ListObjects](https://docs.aws.amazon.com/powershell/v5/reference) in *AWS Tools for PowerShell Cmdlet Reference (V5)*. 

------

# Organizing objects in the Amazon S3 console by using folders
<a name="using-folders"></a>

In Amazon S3 general purpose buckets, objects are the primary resources, and objects are stored in buckets. Amazon S3 general purpose buckets have a flat structure instead of a hierarchy like you would see in a file system. However, for the sake of organizational simplicity, the Amazon S3 console supports the *folder* concept as a means of grouping objects. The console does this by using a shared name *prefix* for the grouped objects. In other words, the grouped objects have names that begin with a common string. This common string, or shared prefix, is the folder name. Object names are also referred to as *key names*.

For example, you can create a folder in a general purpose bucket in the console named `photos` and store an object named `myphoto.jpg` in it. The object is then stored with the key name `photos/myphoto.jpg`, where `photos/` is the prefix.

Here are two more examples: 
+ If you have three objects in your general purpose bucket—`logs/date1.txt`, `logs/date2.txt`, and `logs/date3.txt`—the console will show a folder named `logs`. If you open the folder in the console, you will see three objects: `date1.txt`, `date2.txt`, and `date3.txt`.
+ If you have an object named `photos/2017/example.jpg`, the console shows you a folder named `photos` that contains the folder `2017`. The folder `2017` contains the object `example.jpg`.

You can have folders within folders, but not buckets within buckets. You can upload and copy objects directly into a folder. Folders can be created, deleted, and made public, but they can't be renamed. Objects can be copied from one folder to another. 

**Important**  
When you create a folder in the Amazon S3 console, S3 creates a 0-byte object. This object's key is set to the folder name that you provided plus a trailing forward slash (`/`) character. For example, if you create a folder named `photos` in your bucket, the Amazon S3 console creates a 0-byte object with the key `photos/`. The console creates this object to support the idea of folders.   
Also, any pre-existing object that's named with a trailing forward slash character (`/`) appears as a folder in the Amazon S3 console. For example, an object with the key name `examplekeyname/` appears as a folder in the Amazon S3 console and not as an object. Otherwise, it behaves like any other object and can be viewed and manipulated through the AWS Command Line Interface (AWS CLI), AWS SDKs, or REST API. Additionally, you can't upload an object that has a key name with a trailing forward slash character (`/`) by using the Amazon S3 console. However, you can upload such objects by using the AWS CLI, AWS SDKs, or REST API.   
Moreover, the Amazon S3 console doesn't display the content and metadata for folder objects like it does for other objects. When you use the console to copy an object named with a trailing forward slash character (`/`), a new folder is created in the destination location, but the object's data and metadata aren't copied. Also, a forward slash (`/`) in object key names might require special handling. For more information, see [Naming Amazon S3 objects](object-keys.md).
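
The folder behavior above can be sketched with a small pure-Python function. This is an illustration of how a flat key namespace maps to the console's root view, not the console's actual implementation.

```python
# Sketch: keys with a slash render as folders at the root level of the
# console, including the 0-byte trailing-slash "folder marker" objects;
# everything else renders as an object.

def console_root_view(keys):
    """Return (folders, objects) as the console's root view would show them."""
    folders, objects = set(), []
    for key in keys:
        if "/" in key:
            folders.add(key.split("/")[0] + "/")  # group by first slash
        else:
            objects.append(key)
    return sorted(folders), sorted(objects)

keys = ["examplekeyname/", "photos/2017/example.jpg", "sample.jpg"]
print(console_root_view(keys))
# → (['examplekeyname/', 'photos/'], ['sample.jpg'])
```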

To create folders in directory buckets, upload a folder. For more information, see [Uploading objects to a directory bucket](directory-buckets-objects-upload.md).

**Topics**
+ [Creating a folder](#create-folder)
+ [Making folders public](#public-folders)
+ [Calculating folder size](#calculate-folder)
+ [Deleting folders](#delete-folders)

## Creating a folder
<a name="create-folder"></a>

This section describes how to use the Amazon S3 console to create a folder.

**Important**  
If your bucket policy prevents uploading objects to this bucket without tags, metadata, or access control list (ACL) grantees, you can't create a folder by using the following procedure. Instead, upload an empty folder and specify the required tags, metadata, or ACL grantees in the upload configuration.

**To create a folder**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to create a folder in.

1. On the **Objects** tab, choose **Create folder**.

1. Enter a name for the folder (for example, **favorite-pics**). 
**Note**  
Folder names are subject to certain limitations and guidelines, and are considered part of an object's object key name, which is limited to 1,024 bytes. For more information, see [Naming Amazon S3 objects](object-keys.md).

1. (Optional) If your bucket policy requires objects to be encrypted with a specific encryption key, under **Server-side encryption**, you must choose **Specify an encryption key** and specify the same encryption key when you create a folder. Otherwise, folder creation will fail.

1. Choose **Create folder**.

## Making folders public
<a name="public-folders"></a>

We recommend blocking all public access to your Amazon S3 folders and buckets unless you specifically require a public folder or bucket. When you make a folder public, anyone on the internet can view all the objects that are grouped in that folder. 

In the Amazon S3 console, you can make a folder public. You can also make a folder public by creating a bucket policy that grants access to objects by prefix. For more information, see [Identity and Access Management for Amazon S3](security-iam.md). 

**Warning**  
After you make a folder public in the Amazon S3 console, you can't make it private again. Instead, you must set permissions on each individual object in the public folder so that the objects have no public access. For more information, see [Configuring ACLs](managing-acls.md).


## Calculating folder size
<a name="calculate-folder"></a>

This section describes how to use the Amazon S3 console to calculate a folder's size.

**To calculate a folder's size**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the **General purpose buckets** list, choose the name of the bucket in which your folder is stored.

1. In the **Objects** list, select the checkbox next to the name of the folder.

1. Choose **Actions**, and then choose **Calculate total size**.

**Note**  
When you navigate away from the page, the folder information (including the total size) is no longer available. If you want to see it again, you must recalculate the total size. 

**Important**  
When you use the **Calculate total size** action on specified objects or folders within your bucket, Amazon S3 calculates the total number of objects and the total storage size. However, incomplete or in-progress multipart uploads and previous or noncurrent versions aren't included in the total number of objects or the total size. This action calculates only the total number of objects and the total size for the current or newest version of each object that's stored in the bucket.  
For example, if there are two versions of an object in your bucket, then the storage calculator in Amazon S3 counts them as only one object. As a result, the total number of objects that's calculated in the Amazon S3 console can differ from the **Object Count** metric shown in S3 Storage Lens and from the number reported by the Amazon CloudWatch metric, `NumberOfObjects`. Likewise, the total storage size can also differ from the **Total Storage** metric shown in S3 Storage Lens and from the `BucketSizeBytes` metric shown in CloudWatch.
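
A minimal sketch of this counting rule, using hypothetical `(key, version_timestamp, size)` tuples: only the newest version of each key contributes to the totals.

```python
# Sketch of the "Calculate total size" semantics described above: count
# only the current (newest) version of each key. Data is illustrative.

def calculate_total_size(versions):
    """Return (object_count, total_size) over current versions only."""
    newest = {}
    for key, ts, size in versions:
        if key not in newest or ts > newest[key][0]:
            newest[key] = (ts, size)
    return len(newest), sum(size for _, size in newest.values())

versions = [
    ("report.pdf", 1, 100),  # noncurrent version: not counted
    ("report.pdf", 2, 150),  # current version: counted
    ("logo.png", 1, 40),
]
print(calculate_total_size(versions))  # → (2, 190)
```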

## Deleting folders
<a name="delete-folders"></a>

This section explains how to use the Amazon S3 console to delete folders from an S3 bucket. 

For information about Amazon S3 features and pricing, see [Amazon S3](https://aws.amazon.com/s3/).



**To delete folders from an S3 bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the **General purpose buckets** list, choose the name of the bucket that you want to delete folders from.

1. In the **Objects** list, select the checkboxes next to the folders and objects that you want to delete.

1. Choose **Delete**.

1. On the **Delete objects** page, verify that the names of the folders and objects that you selected for deletion are listed under **Specified objects**.

1. In the **Delete objects** box, enter **delete**, and choose **Delete objects**.

**Warning**  
This action deletes all specified objects. When deleting folders, wait for the delete action to finish before adding new objects to the folder. Otherwise, new objects might be deleted as well.

# Viewing object properties in the Amazon S3 console
<a name="view-object-properties"></a>

You can use the Amazon S3 console to view the properties of an object, including storage class, encryption settings, tags, and metadata.

**To view the properties of an object**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets** or **Directory buckets**.

1. In the bucket list, choose the name of the bucket that contains the object.

1. In the **Objects** list, choose the name of the object you want to view properties for.

   The **Object overview** for your object opens. You can scroll down to view the object properties.

1. On the **Object overview** page, you can view or configure the following properties for the object.
**Note**  
If you change the **Storage Class**, **Encryption**, or **Metadata** properties, a new object is created to replace the old one. If S3 Versioning is enabled, a new version of the object is created, and the existing object becomes an older version. The role that changes the property also becomes the owner of the new object (or object version).
If you change the **Storage Class**, **Encryption**, or **Metadata** properties for an object that has user-defined tags, you must have the `s3:GetObjectTagging` permission. You must also have this permission if you're changing these properties for an object that doesn't have user-defined tags but is over 16 MB in size.  
If the destination bucket policy denies the `s3:GetObjectTagging` action, these properties for the object will be updated, but the user-defined tags will be removed from the object, and you will receive an error. 

   1. **Storage class** – Each object in Amazon S3 has a storage class associated with it. The storage class that you choose to use depends on how frequently you access the object. The default storage class for S3 objects in general purpose buckets is STANDARD. The default storage class for S3 objects in directory buckets is S3 Express One Zone. You choose which storage class to use when you upload an object. For more information about storage classes, see [Understanding and managing Amazon S3 storage classes](storage-class-intro.md).

      To change the storage class after you upload an object to a general purpose bucket, choose **Storage class**. Choose the storage class that you want, and then choose **Save**.
**Note**  
The storage class of objects in a directory bucket can't be changed.

   1. **Server-side encryption settings** – You can use server-side encryption to encrypt your S3 objects. For more information, see [Specifying server-side encryption with AWS KMS (SSE-KMS)](specifying-kms-encryption.md) or [Specifying server-side encryption with Amazon S3 managed keys (SSE-S3)](specifying-s3-encryption.md). 

   1. **Metadata** – Each object in Amazon S3 has a set of name-value pairs that represents its metadata. For information about adding metadata to an S3 object, see [Editing object metadata in the Amazon S3 console](add-object-metadata.md).

   1. **Tags** – You can categorize storage by adding tags to an S3 object in a general purpose bucket. For more information, see [Categorizing your objects using tags](object-tagging.md).

   1. **Object lock legal hold and retention** – You can prevent an object in a general purpose bucket from being deleted. For more information, see [Locking objects with Object Lock](object-lock.md).

# Categorizing your objects using tags
<a name="object-tagging"></a>

Use object tagging to categorize storage. Each tag is a key-value pair.

You can add tags to new objects when you upload them, or you can add them to existing objects. 
+ You can associate up to 10 tags with an object. Tags that are associated with an object must have unique tag keys.
+ A tag key can be up to 128 Unicode characters in length, and tag values can be up to 256 Unicode characters in length. Amazon S3 object tags are internally represented in UTF-16. Note that in UTF-16, characters consume either 1 or 2 character positions.
+ The key and values are case sensitive.
+ For more information about tag restrictions, see [User-defined tag restrictions](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/custom-tags.html#allocation-tag-restrictions) in the *AWS Billing and Cost Management User Guide*. For basic tag restrictions, see [Tag restrictions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#tag-restrictions) in the *Amazon EC2 User Guide*.
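
The limits above can be checked client side before making a request; here is a minimal sketch. A Python `dict` makes tag keys unique by construction, and `len()` counts code points, which is an approximation of S3's character counting.

```python
# Client-side check of the tag-set limits listed above: at most 10 tags,
# keys up to 128 characters, values up to 256 characters.

def validate_tag_set(tags):
    """Raise ValueError if the tag set breaks an object-tagging limit."""
    if len(tags) > 10:
        raise ValueError("an object can have at most 10 tags")
    for key, value in tags.items():
        if len(key) > 128:
            raise ValueError(f"tag key too long: {key!r}")
        if len(value) > 256:
            raise ValueError(f"tag value too long for key {key!r}")

validate_tag_set({"Project": "Blue", "Classification": "PHI"})  # passes silently
```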

**Examples**  
Consider the following tagging examples:

**Example PHI information**  
Suppose that an object contains protected health information (PHI) data. You might tag the object using the following key-value pair.  

```
PHI=True
```
or  

```
Classification=PHI
```

**Example Project files**  
Suppose that you store project files in your S3 bucket. You might tag these objects with a key named `Project` and a value, as shown following.  

```
Project=Blue
```

**Example Multiple tags**  
You can add multiple tags to an object, as shown following.  

```
Project=x
Classification=confidential
```

**Key name prefixes and tags**  
Object key name prefixes also enable you to categorize storage. However, prefix-based categorization is one-dimensional. Consider the following object key names:

```
photos/photo1.jpg
project/projectx/document.pdf
project/projecty/document2.pdf
```

These key names have the prefixes `photos/`, `project/projectx/`, and `project/projecty/`. These prefixes enable one-dimensional categorization. That is, everything under a prefix is one category. For example, the prefix `project/projectx` identifies all documents related to project x.

With tagging, you now have another dimension. If you want `photo1.jpg` to be in the project x category as well, you can tag the object accordingly.

**Additional benefits**  
In addition to data classification, tagging offers benefits such as the following:
+ Object tags enable fine-grained access control of permissions. For example, you could grant a user permissions to read-only objects with specific tags.
+ Object tags enable fine-grained object lifecycle management in which you can specify a tag-based filter, in addition to a key name prefix, in a lifecycle rule.
+ When using Amazon S3 analytics, you can configure filters to group objects together for analysis by object tags, by key name prefix, or by both prefix and tags.
+ You can also customize Amazon CloudWatch metrics to display information by specific tag filters. The following sections provide details.

**Important**  
It is acceptable to use tags to label objects containing confidential data, such as personally identifiable information (PII) or protected health information (PHI). However, the tags themselves shouldn't contain any confidential information. 

**Adding object tag sets to multiple Amazon S3 objects with a single request**  
To add object tag sets to more than one Amazon S3 object with a single request, you can use S3 Batch Operations. You provide S3 Batch Operations with a list of objects to operate on. S3 Batch Operations calls the respective API operation to perform the specified operation. A single Batch Operations job can perform the specified operation on billions of objects containing exabytes of data. 

The S3 Batch Operations feature tracks progress, sends notifications, and stores a detailed completion report of all actions, providing a fully managed, auditable, serverless experience. You can use S3 Batch Operations through the Amazon S3 console, AWS CLI, AWS SDKs, or REST API. For more information, see [S3 Batch Operations basics](batch-ops.md#batch-ops-basics).

For more information about object tags, see [Managing object tags](tagging-managing.md).

## API operations related to object tagging
<a name="tagging-apis"></a>

Amazon S3 supports the following API operations that are specifically for object tagging:

**Object API operations**
+  [PUT Object tagging](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUTtagging.html) – Replaces tags on an object. You specify tags in the request body. There are two distinct scenarios of object tag management using this API.
  + Object has no tags – You can use this API to add a set of tags to an object (the object has no prior tags).
  + Object has a set of existing tags – To modify the existing tag set, you must first retrieve the existing tag set, modify it on the client side, and then use this API to replace the tag set.
**Note**  
 If you send this request with an empty tag set, Amazon S3 deletes the existing tag set on the object. If you use this method, you will be charged for a Tier 1 Request (PUT). For more information, see [Amazon S3 Pricing](https://d0.awsstatic.com/whitepapers/aws_pricing_overview.pdf).  
The [DELETE Object tagging](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectDELETEtagging.html) request is preferred because it achieves the same result without incurring charges. 
+  [GET Object tagging](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGETtagging.html) – Returns the tag set associated with an object. Amazon S3 returns object tags in the response body.
+ [DELETE Object tagging](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectDELETEtagging.html) – Deletes the tag set associated with an object. 
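
The read-modify-write pattern above can be sketched as a pure client-side merge. The S3 calls themselves (for example, boto3's `get_object_tagging` and `put_object_tagging`) are omitted; the `{"Key": ..., "Value": ...}` tag-set shape follows boto3's convention.

```python
# Sketch of the tag-set merge step in the read-modify-write pattern:
# fetch the existing tag set, merge changes locally, then replace the
# whole set with PUT Object tagging.

def merge_tag_set(existing, updates):
    """Merge updates (a dict) into an existing tag set (Key/Value dicts)."""
    merged = {t["Key"]: t["Value"] for t in existing}
    merged.update(updates)
    return [{"Key": k, "Value": v} for k, v in merged.items()]

existing = [{"Key": "Project", "Value": "Blue"}]
print(merge_tag_set(existing, {"Classification": "PHI"}))
# → [{'Key': 'Project', 'Value': 'Blue'}, {'Key': 'Classification', 'Value': 'PHI'}]
```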

**Other API operations that support tagging**
+  [PUT Object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html) and [Initiate Multipart Upload](https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadInitiate.html) – You can specify tags when you create objects. You specify tags using the `x-amz-tagging` request header. 
+  [GET Object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html) – Instead of returning the tag set, Amazon S3 returns the object tag count in the `x-amz-tag-count` header (only if the requester has permission to read tags) because the header response size is limited to 8 KB. If you want to view the tags, you make another request for the [GET Object tagging](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGETtagging.html) API operation.
+ [POST Object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html) – You can specify tags in your POST request. 

  As long as the tags in your request don't exceed the 8 KB HTTP request header size limit, you can use the `PUT Object` API to create objects with tags. If the tags that you specify exceed the header size limit, you can use this POST method, in which you include the tags in the body. 

+ [PUT Object - Copy](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html) – You can specify the `x-amz-tagging-directive` in your request to direct Amazon S3 to either copy the tags (the default behavior) or replace them with a new set of tags provided in the request. 

Note the following:
+ S3 Object Tagging is strongly consistent. For more information, see [Amazon S3 data consistency model](Welcome.md#ConsistencyModel). 

## Additional configurations
<a name="tagging-other-configs"></a>

This section explains how object tagging relates to other configurations.

### Object tagging and lifecycle management
<a name="tagging-and-lifecycle"></a>

In bucket lifecycle configuration, you can specify a filter to select a subset of objects to which the rule applies. You can specify a filter based on the key name prefixes, object tags, or both. 

Suppose that you store photos (raw and the finished format) in your Amazon S3 bucket. You might tag these objects as shown following. 

```
phototype=raw
or
phototype=finished
```

You might consider archiving the raw photos to S3 Glacier sometime after they are created. You can configure a lifecycle rule with a filter that identifies the subset of objects that have the key name prefix (`photos/`) and a specific tag (`phototype=raw`). 
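Such a combined filter can be expressed in a lifecycle configuration along the following lines. This is only a sketch; the rule ID and the transition period are assumptions, not values from this guide.

```
{
  "Rules": [
    {
      "ID": "ArchiveRawPhotos",
      "Status": "Enabled",
      "Filter": {
        "And": {
          "Prefix": "photos/",
          "Tags": [
            { "Key": "phototype", "Value": "raw" }
          ]
        }
      },
      "Transitions": [
        { "Days": 365, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```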

For more information, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md). 

### Object tagging and replication
<a name="tagging-and-replication"></a>

If you have configured replication on your bucket, Amazon S3 replicates tags, provided that you grant Amazon S3 permission to read the tags. For more information, see [Setting up live replication overview](replication-how-setup.md).

### Object tagging and event notifications
<a name="tagging-and-event-notifications"></a>

You can set up an Amazon S3 event notification to receive a notification when a tag is added to or removed from an object. The `s3:ObjectTagging:Put` event type notifies you when a tag is added to an object or when an existing tag is updated. The `s3:ObjectTagging:Delete` event type notifies you when a tag is removed from an object. For more information, see [Enabling event notifications](https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-enable-disable-notification-intro.html).
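For example, a notification configuration that routes both tagging event types to an SNS topic might look like the following sketch (the topic ARN is a placeholder):

```
{
  "TopicConfigurations": [
    {
      "TopicArn": "arn:aws:sns:us-west-2:111122223333:s3-tagging-events",
      "Events": [
        "s3:ObjectTagging:Put",
        "s3:ObjectTagging:Delete"
      ]
    }
  ]
}
```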

For more information about object tagging, see the following topics:

**Topics**
+ [API operations related to object tagging](#tagging-apis)
+ [Additional configurations](#tagging-other-configs)
+ [Tagging and access control policies](tagging-and-policies.md)
+ [Managing object tags](tagging-managing.md)

# Tagging and access control policies
<a name="tagging-and-policies"></a>

You can also use permissions policies (bucket and user policies) to manage permissions related to object tagging. For policy actions, see the following topics: 
+  [Object operations](security_iam_service-with-iam.md#using-with-s3-actions-related-to-objects) 
+  [Bucket operations](security_iam_service-with-iam.md#using-with-s3-actions-related-to-buckets)

Object tags enable fine-grained access control for managing permissions. You can grant conditional permissions based on object tags. Amazon S3 supports the following condition keys that you can use to grant conditional permissions based on object tags:
+ `s3:ExistingObjectTag/<tag-key>` – Use this condition key to verify that an existing object tag has the specific tag key and value. 
**Note**  
When granting permissions for the `PUT Object` and `DELETE Object` operations, this condition key is not supported. That is, you cannot create a policy to grant or deny a user permissions to delete or overwrite an object based on its existing tags. 
+ `s3:RequestObjectTagKeys` – Use this condition key to restrict the tag keys that you want to allow on objects. This is useful when adding tags to objects by using the `PutObjectTagging`, `PutObject`, and `POST Object` requests.
+ `s3:RequestObjectTag/<tag-key>` – Use this condition key to restrict the tag keys and values that you want to allow on objects. This is useful when adding tags to objects by using the `PutObjectTagging`, `PutObject`, and `POST Object` requests.

For a complete list of Amazon S3 service-specific condition keys, see [Bucket policy examples using condition keys](amazon-s3-policy-keys.md). The following permissions policies illustrate how object tagging enables fine-grained permissions management.

**Example 1: Allow a user to read only the objects that have a specific tag and key value**  
The following permissions policy limits a user to only reading objects that have the `environment: production` tag key and value. This policy uses the `s3:ExistingObjectTag` condition key to specify the tag key and value.    

```
{
  "Version": "2012-10-17",
  "Statement": [
  {
    "Principal": {
      "AWS": [
        "arn:aws:iam::111122223333:role/JohnDoe"
      ]
    },
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:GetObjectVersion"],
    "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
    "Condition": {
      "StringEquals": 
        {"s3:ExistingObjectTag/environment": "production"}
    }
  }
  ]
}
```

**Example 2: Restrict which object tag keys users can add**  
The following permissions policy grants a user permissions to perform the `s3:PutObjectTagging` action, which allows the user to add tags to an existing object. The condition uses the `s3:RequestObjectTagKeys` condition key to specify the allowed tag keys, such as `Owner` or `CreationDate`. For more information, see [Creating a condition that tests multiple key values](https://docs.aws.amazon.com//IAM/latest/UserGuide/reference_policies_multi-value-conditions.html) in the *IAM User Guide*.  
The policy ensures that every tag key specified in the request is an authorized tag key. The `ForAnyValue` qualifier in the condition ensures that at least one of the specified keys must be present in the request.    

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Principal": {
        "AWS": [
          "arn:aws:iam::111122223333:role/JohnDoe"
        ]
      },
      "Effect": "Allow",
      "Action": [
        "s3:PutObjectTagging"
      ],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket/*"
      ],
      "Condition": {
        "ForAnyValue:StringEquals": {
          "s3:RequestObjectTagKeys": [
            "Owner",
            "CreationDate"
          ]
        }
      }
    }
  ]
}
```

**Example 3: Require a specific tag key and value when allowing users to add object tags**  
The following example policy grants a user permission to perform the `s3:PutObjectTagging` action, which allows a user to add tags to an existing object. The condition requires the user to include a specific tag key (such as `Project`) with the value set to `X`.    

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Principal": {
        "AWS": [
          "arn:aws:iam::111122223333:user/JohnDoe"
        ]
      },
      "Effect": "Allow",
      "Action": [
        "s3:PutObjectTagging"
      ],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket/*"
      ],
      "Condition": {
        "StringEquals": {
          "s3:RequestObjectTag/Project": "X"
        }
      }
    }
  ]
}
```



# Managing object tags
<a name="tagging-managing"></a>

This section explains how you can manage object tags using the AWS SDKs for Java and .NET or the Amazon S3 console.

Object tagging gives you a way to categorize storage in general purpose buckets. Each tag is a key-value pair that adheres to the following rules:
+ You can associate up to 10 tags with an object. Tags that are associated with an object must have unique tag keys.
+ A tag key can be up to 128 Unicode characters in length, and a tag value can be up to 256 Unicode characters in length. Amazon S3 object tags are internally represented in UTF-16. Note that in UTF-16, a character consumes either one or two character positions (code units).
+ Keys and values are case sensitive. 
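As a sketch of how these limits can be checked before making a request, the following Python helpers count lengths in UTF-16 code units as described above. The helper names and error messages are hypothetical, not part of any AWS SDK.

```python
# Hypothetical helpers that check a tag set against the documented limits.
def utf16_length(s):
    # Number of UTF-16 code units; characters outside the Basic
    # Multilingual Plane occupy two code units (a surrogate pair).
    return len(s.encode("utf-16-le")) // 2

def validate_tag_set(tags):
    # tags: dict mapping tag key -> tag value (dict keys are unique by construction)
    if len(tags) > 10:
        raise ValueError("an object can have at most 10 tags")
    for key, value in tags.items():
        if utf16_length(key) > 128:
            raise ValueError("tag key too long: %r" % key)
        if utf16_length(value) > 256:
            raise ValueError("tag value too long for key: %r" % key)

validate_tag_set({"Owner": "JohnDoe", "CreationDate": "2024-06-01"})
print(utf16_length("🙂"))  # 2: a non-BMP character uses two UTF-16 code units
```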

For more information about object tags, see [Categorizing your objects using tags](object-tagging.md). For more information about tag restrictions, see [User-Defined Tag Restrictions](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/allocation-tag-restrictions.html) in the *AWS Billing and Cost Management User Guide*. 

## Using the S3 console
<a name="add-object-tags"></a>

**To add tags to an object**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the bucket list, choose the name of the bucket that contains the object.

1. Select the check box to the left of the names of the objects you want to change.

1. In the **Actions** menu, choose **Edit tags**.

1. Review the objects listed, and choose **Add tags**.

1. Each object tag is a key-value pair. Enter a **Key** and a **Value**. To add another tag, choose **Add Tag**. 

   You can enter up to 10 tags for an object.

1. Choose **Save changes**.

   Amazon S3 adds the tags to the specified objects.

For more information, see [Viewing object properties in the Amazon S3 console](view-object-properties.md) and [Uploading objects](upload-objects.md) in this guide. 

## Using the AWS SDKs
<a name="tagging-manage-sdk"></a>

------
#### [ Java ]

To manage object tags using the AWS SDK for Java, you can set tags for a new object and retrieve or replace tags for an existing object. For more information about object tagging, see [Categorizing your objects using tags](object-tagging.md).

Upload an object to a bucket and set tags by using an `S3Client`. For examples, see [Upload an object to a bucket](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_PutObject_section.html) in the *Amazon S3 API Reference*.

------
#### [ .NET ]

The following example shows how to use the AWS SDK for .NET to set the tags for a new object and retrieve or replace the tags for an existing object. For more information about object tagging, see [Categorizing your objects using tags](object-tagging.md). 

For information about setting up and running the code examples, see [Getting Started with the AWS SDK for .NET](https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/net-dg-setup.html) in the *AWS SDK for .NET Developer Guide*. 

```
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    public class ObjectTagsTest
    {
        private const string bucketName = "*** bucket name ***";
        private const string keyName = "*** key name for the new object ***";
        private const string filePath = @"*** file path ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 client;

        public static void Main()
        {
            client = new AmazonS3Client(bucketRegion);
            PutObjectWithTagsTestAsync().Wait();
        }

        static async Task PutObjectWithTagsTestAsync()
        {
            try
            {
                // 1. Put an object with tags.
                var putRequest = new PutObjectRequest
                {
                    BucketName = bucketName,
                    Key = keyName,
                    FilePath = filePath,
                    TagSet = new List<Tag>{
                        new Tag { Key = "Keyx1", Value = "Value1"},
                        new Tag { Key = "Keyx2", Value = "Value2" }
                    }
                };

                PutObjectResponse response = await client.PutObjectAsync(putRequest);
                // 2. Retrieve the object's tags.
                GetObjectTaggingRequest getTagsRequest = new GetObjectTaggingRequest
                {
                    BucketName = bucketName,
                    Key = keyName
                };

                GetObjectTaggingResponse objectTags = await client.GetObjectTaggingAsync(getTagsRequest);
                for (int i = 0; i < objectTags.Tagging.Count; i++)
                    Console.WriteLine("Key: {0}, Value: {1}", objectTags.Tagging[i].Key, objectTags.Tagging[i].Value);


                // 3. Replace the tagset.

                Tagging newTagSet = new Tagging();
                newTagSet.TagSet = new List<Tag>{
                    new Tag { Key = "Key3", Value = "Value3"},
                    new Tag { Key = "Key4", Value = "Value4" }
                };


                PutObjectTaggingRequest putObjTagsRequest = new PutObjectTaggingRequest()
                {
                    BucketName = bucketName,
                    Key = keyName,
                    Tagging = newTagSet
                };
                PutObjectTaggingResponse response2 = await client.PutObjectTaggingAsync(putObjTagsRequest);

                // 4. Retrieve the object's tags.
                GetObjectTaggingRequest getTagsRequest2 = new GetObjectTaggingRequest();
                getTagsRequest2.BucketName = bucketName;
                getTagsRequest2.Key = keyName;
                GetObjectTaggingResponse objectTags2 = await client.GetObjectTaggingAsync(getTagsRequest2);
                for (int i = 0; i < objectTags2.Tagging.Count; i++)
                    Console.WriteLine("Key: {0}, Value: {1}", objectTags2.Tagging[i].Key, objectTags2.Tagging[i].Value);

            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine(
                        "Error encountered ***. Message:'{0}' when writing an object"
                        , e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine(
                    "Encountered an error. Message:'{0}' when writing an object"
                    , e.Message);
            }
        }
    }
}
```

------