

# Working with object metadata
<a name="UsingMetadata"></a>

There are two kinds of object metadata in Amazon S3: *system-defined metadata* and *user-defined metadata*. System-defined metadata includes metadata such as the object's creation date, size, and storage class. User-defined metadata is metadata that you can choose to set at the time that you upload an object. This user-defined metadata is a set of name-value pairs. For more information, see [System-defined object metadata](#SysMetadata) and [User-defined object metadata](#UserMetadata). 

When you create an object, you specify the *object key* (or *key name*), which uniquely identifies the object in an Amazon S3 bucket. For more information, see [Naming Amazon S3 objects](object-keys.md). You can also set [user-defined metadata](#UserMetadata) in Amazon S3 at the time that you upload the object. 

After you upload the object, you can't modify this user-defined metadata. The only way to modify this metadata is to make a copy of the object and set the metadata. For more information about editing metadata by using the Amazon S3 console, see [Editing object metadata in the Amazon S3 console](add-object-metadata.md). 

**Query your metadata and accelerate data discovery with S3 Metadata**  
To easily find, store, and query metadata for your S3 objects, you can use S3 Metadata. With S3 Metadata, you can quickly prepare data for use in business analytics, content retrieval, artificial intelligence and machine learning (AI/ML) model training, and more. 

S3 Metadata accelerates data discovery by automatically capturing metadata for the objects in your general purpose buckets and storing it in read-only, fully managed Apache Iceberg tables that you can query. These read-only tables are called *metadata tables*. As objects are added to, updated, and removed from your general purpose buckets, S3 Metadata automatically refreshes the corresponding metadata tables to reflect the latest changes.

By default, S3 Metadata provides [system-defined object metadata](#SysMetadata), such as an object's creation time and storage class, and custom metadata, such as tags and [user-defined metadata](#UserMetadata) that was included during object upload. S3 Metadata also provides event metadata, such as when an object is updated or deleted, and the AWS account that made the request.

Metadata tables are stored in S3 table buckets, which provide storage that's optimized for tabular data. To query your metadata, you can integrate your table bucket with AWS analytics services, such as Amazon Athena, Amazon Redshift, and Amazon QuickSight. 

For more information about S3 Metadata, see [Discovering your data with S3 Metadata tables](metadata-tables-overview.md).

## System-defined object metadata
<a name="SysMetadata"></a>

For each object stored in a bucket, Amazon S3 maintains a set of system metadata. Amazon S3 processes this system metadata as needed. For example, Amazon S3 maintains object-creation date and size metadata, using this information as part of object management. 

There are two categories of system metadata: 
+ **System controlled** – Metadata such as the object-creation date is system controlled, which means that only Amazon S3 can modify the date value. 
+ **User controlled** – Other system metadata, such as the storage class configured for the object and whether the object has server-side encryption enabled, are examples of system metadata whose values you control. If your bucket is configured as a website, you might sometimes want to redirect a page request to another page or to an external URL. In this case, the webpage is an object in your bucket, and Amazon S3 stores the page-redirect value as system metadata, which you can control. 

  When you create objects, you can configure the values of these system metadata items or update the values when you need to. For more information about storage classes, see [Understanding and managing Amazon S3 storage classes](storage-class-intro.md). 

  When you use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), AWS KMS encrypts only the object data. The checksum and the specified algorithm are stored as part of the object's metadata. If server-side encryption is requested for the object, the checksum is stored in encrypted form. For more information about server-side encryption, see [Protecting data with encryption](UsingEncryption.md). 

**Note**  
The `PUT` request header is limited to 8 KB in size. Within the `PUT` request header, the system-defined metadata is limited to 2 KB in size. The size of system-defined metadata is measured by taking the sum of the number of bytes in the US-ASCII encoding of each key and value. 

The following table provides a list of system-defined metadata and whether you can update it.


| Name | Description | Can user modify the value? | 
| --- | --- | --- | 
| Date | The current date and time. | No | 
| Cache-Control | A general header field used to specify caching policies. | Yes | 
| Content-Disposition | Object presentational information. | Yes | 
| Content-Encoding | The content encodings (such as compression) that have been applied to the object's data. | Yes | 
| Content-Length | The object size in bytes. | No | 
| Content-Type | The object type. | Yes | 
| Last-Modified |  The object creation date or the last modified date, whichever is the latest. For multipart uploads, the object creation date is the date of initiation of the multipart upload.  | No | 
| ETag | An entity tag (ETag) that represents a specific version of an object. For objects that are not uploaded as a multipart upload and are either unencrypted or encrypted by server-side encryption with Amazon S3 managed keys (SSE-S3), the ETag is an MD5 digest of the data. | No | 
| x-amz-server-side-encryption | A header that indicates whether server-side encryption is enabled for the object, and whether that encryption is using the AWS Key Management Service (AWS KMS) keys (SSE-KMS) or using Amazon S3 managed encryption keys (SSE-S3). For more information, see [Protecting data with server-side encryption](serv-side-encryption.md).  | Yes | 
| x-amz-checksum-crc64nvme, x-amz-checksum-crc32, x-amz-checksum-crc32c, x-amz-checksum-sha1, x-amz-checksum-sha256 | Headers that contain the checksum or digest of the object. At most, one of these headers will be set at a time, depending on the checksum algorithm that you instruct Amazon S3 to use. For more information about choosing the checksum algorithm, see [Checking object integrity in Amazon S3](checking-object-integrity.md). | No | 
| x-amz-checksum-type | The checksum type, which determines how part-level checksums are combined to create an object-level checksum for multipart objects.  | Yes | 
| x-amz-version-id | The object version. When you enable versioning on a bucket, Amazon S3 assigns a version ID to objects added to the bucket. For more information, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md). | No | 
| x-amz-delete-marker | A Boolean marker that indicates whether the object is a delete marker. This marker is used only in buckets that have versioning enabled. | No | 
| x-amz-storage-class | The storage class used for storing the object. For more information, see [Understanding and managing Amazon S3 storage classes](storage-class-intro.md). | Yes | 
| x-amz-website-redirect-location |  A header that redirects requests for the associated object to another object in the same bucket or to an external URL. For more information, see [(Optional) Configuring a webpage redirect](how-to-page-redirect.md). | Yes | 
| x-amz-server-side-encryption-aws-kms-key-id | A header that indicates the ID of the symmetric encryption AWS KMS key that was used to encrypt the object. This header is used only when the x-amz-server-side-encryption header is present and has the value of aws:kms. | Yes | 
| x-amz-server-side-encryption-customer-algorithm | A header that indicates whether server-side encryption with customer-provided encryption keys (SSE-C) is enabled. For more information, see [Using server-side encryption with customer-provided keys (SSE-C)](ServerSideEncryptionCustomerKeys.md).  | Yes | 
| x-amz-tagging | The tag-set for the object. The tag-set must be encoded as URL Query parameters. | Yes | 
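Several values in the preceding table can be reproduced locally. The following Python sketch (standard library only) shows how the checksum headers and a simple ETag are derived; it covers only the single-part, unencrypted or SSE-S3 case described in the table, and the big-endian CRC32 encoding is stated as an assumption in the comments:

```python
import base64
import hashlib
import zlib

def checksum_headers(data: bytes) -> dict:
    """Compute checksum values in the form S3 reports them:
    base64-encoded binary digests. (Assumption: CRC32 is encoded
    as a 4-byte big-endian value before base64 encoding.)"""
    return {
        "x-amz-checksum-sha256": base64.b64encode(
            hashlib.sha256(data).digest()).decode("ascii"),
        "x-amz-checksum-sha1": base64.b64encode(
            hashlib.sha1(data).digest()).decode("ascii"),
        "x-amz-checksum-crc32": base64.b64encode(
            zlib.crc32(data).to_bytes(4, "big")).decode("ascii"),
    }

def simple_etag(data: bytes) -> str:
    """ETag for a non-multipart object that is unencrypted or
    SSE-S3-encrypted: the quoted hex MD5 digest of the data."""
    return '"' + hashlib.md5(data).hexdigest() + '"'
```

Multipart uploads and other encryption types produce different ETag values, so don't rely on this form beyond the cases named in the table.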

## User-defined object metadata
<a name="UserMetadata"></a>

When uploading an object, you can also assign metadata to the object. You provide this optional information as a name-value (key-value) pair when you send a `PUT` or `POST` request to create the object. When you upload objects using the REST API, the optional user-defined metadata names must begin with `x-amz-meta-` to distinguish them from other HTTP headers. When you retrieve the object using the REST API, this prefix is returned. When you upload objects using the SOAP API, the prefix is not required. When you retrieve the object using the SOAP API, the prefix is removed, regardless of which API you used to upload the object. 

**Note**  
SOAP APIs for Amazon S3 are not available to new customers and reach End of Life (EOL) on August 31, 2025. We recommend that you use either the REST API or the AWS SDKs. 

When metadata is retrieved through the REST API, Amazon S3 combines headers that have the same name (ignoring case) into a comma-delimited list. If some metadata contains unprintable characters, it is not returned. Instead, the `x-amz-missing-meta` header is returned with the number of unprintable metadata entries as its value. The `HeadObject` action retrieves metadata from an object without returning the object itself, which is useful if you're interested only in an object's metadata. To use `HEAD`, you must have `READ` access to the object. For more information, see [HeadObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html) in the *Amazon Simple Storage Service API Reference*.

User-defined metadata is a set of key-value pairs. Amazon S3 stores user-defined metadata keys in lowercase.

Amazon S3 allows arbitrary Unicode characters in your metadata values.

To avoid issues with how these metadata values are presented, we recommend using US-ASCII characters with the REST API and UTF-8 characters with SOAP or browser-based uploads through `POST`.

When you use non-US-ASCII characters in your metadata values, Amazon S3 examines the provided Unicode string for non-US-ASCII characters. The values of such headers are character decoded according to [RFC 2047](https://datatracker.ietf.org/doc/html/rfc2047) before being stored and encoded according to RFC 2047 to make them mail-safe before being returned. If the string contains only US-ASCII characters, it is presented as is.

The following is an example.

```
PUT /Key HTTP/1.1
Host: amzn-s3-demo-bucket.s3.amazonaws.com
x-amz-meta-nonascii: ÄMÄZÕÑ S3

HEAD /Key HTTP/1.1
Host: amzn-s3-demo-bucket.s3.amazonaws.com
x-amz-meta-nonascii: =?UTF-8?B?w4PChE3Dg8KEWsODwpXDg8KRIFMz?=

PUT /Key HTTP/1.1
Host: amzn-s3-demo-bucket.s3.amazonaws.com
x-amz-meta-ascii: AMAZONS3

HEAD /Key HTTP/1.1
Host: amzn-s3-demo-bucket.s3.amazonaws.com
x-amz-meta-ascii: AMAZONS3
```
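The round trip shown in the example above can be approximated with Python's standard `email.header` module, which implements RFC 2047 encoding. This is an illustrative sketch, not Amazon S3's actual implementation, so the exact encoded bytes may differ from what S3 returns:

```python
from email.header import Header, decode_header

def encode_metadata_value(value: str) -> str:
    """Approximate how S3 returns a metadata value: pure US-ASCII
    values pass through unchanged; other values are RFC 2047-encoded."""
    if value.isascii():
        return value
    return Header(value, "utf-8").encode()

def decode_metadata_value(raw: str) -> str:
    """Decode a possibly RFC 2047-encoded header value back to text."""
    return "".join(
        part.decode(charset or "ascii") if isinstance(part, bytes) else part
        for part, charset in decode_header(raw)
    )
```

For example, `encode_metadata_value("AMAZONS3")` is returned unchanged, while a value containing non-US-ASCII characters is wrapped in an `=?utf-8?...?=` encoded word.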

**Note**  
The `PUT` request header is limited to 8 KB in size. Within the `PUT` request header, the user-defined metadata is limited to 2 KB in size. The size of user-defined metadata is measured by taking the sum of the number of bytes in the UTF-8 encoding of each key and value. 
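A quick way to check the 2 KB limit before uploading is to sum the UTF-8 byte lengths yourself. This sketch assumes the limit is measured over the bare key and value pairs; whether the `x-amz-meta-` prefix counts toward the limit isn't stated here, so treat the result as approximate:

```python
USER_METADATA_LIMIT_BYTES = 2 * 1024  # 2 KB for all user-defined metadata

def user_metadata_size(metadata: dict) -> int:
    """Sum of the UTF-8 byte lengths of each key and value, which is
    how S3 measures user-defined metadata size."""
    return sum(
        len(key.encode("utf-8")) + len(value.encode("utf-8"))
        for key, value in metadata.items()
    )

def fits_in_limit(metadata: dict) -> bool:
    return user_metadata_size(metadata) <= USER_METADATA_LIMIT_BYTES
```

Note that multi-byte characters count for more than one byte each, so a value such as `résumé` is 8 bytes, not 6 characters.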

For information about changing the metadata of your object after it has been uploaded by creating a copy of the object, modifying it, and replacing the old object, or creating a new version, see [Editing object metadata in the Amazon S3 console](add-object-metadata.md). 

# Editing object metadata in the Amazon S3 console
<a name="add-object-metadata"></a>

You can use the Amazon S3 console to edit metadata for existing S3 objects by using the **Copy** action. To edit metadata, you copy objects to the same destination and specify the new metadata you want to apply, which replaces the old metadata for the object. Some metadata is set by Amazon S3 when you upload the object. For example, `Content-Length` and `Last-Modified` are system-defined object metadata fields that can't be modified by a user.

You can also set user-defined metadata when you upload the object and replace it as your needs change. For example, you might have a set of objects that you initially store in the `STANDARD` storage class. Over time, you may no longer need this data to be highly available. So, you can change the storage class to `GLACIER` by replacing the value of the `x-amz-storage-class` key from `STANDARD` to `GLACIER`.

**Note**  
Consider the following when you are replacing object metadata in Amazon S3:  
+ You must specify any existing metadata that you want to retain, along with the metadata that you want to add or edit.
+ If your object is less than 5 GB, you can use the **Copy** action in the S3 console to replace object metadata. If your object is greater than 5 GB, you can replace the object metadata when you copy the object with a multipart upload by using the [AWS CLI](mpu-upload-object.md#UsingCLImpUpload) or the [AWS SDKs](CopyingObjectsMPUapi.md). For more information, see [Copying an object using multipart upload](CopyingObjectsMPUapi.md).
+ For a list of additional permissions required to replace metadata, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md). For example policies that grant this permission, see [Identity-based policy examples for Amazon S3](example-policies-s3.md).
+ This action creates a *copy* of the object with updated settings and a new last-modified date. If S3 Versioning is enabled, a new version of the object is created, and the existing object becomes an older version. If S3 Versioning isn't enabled, a new copy of the object replaces the original object. The AWS account associated with the IAM role that changes the property also becomes the owner of the new object (or object version).
+ Editing metadata replaces the values of existing key names.
+ Objects that are encrypted with customer-provided encryption keys (SSE-C) can't be copied by using the console. You must use the AWS CLI, an AWS SDK, or the Amazon S3 REST API.
+ When copying an object by using the Amazon S3 console, you might receive the error message "Copied metadata can't be verified." The console uses headers to retrieve and set metadata for your object. If your network or browser configuration modifies your network requests, unintended metadata (such as modified `Cache-Control` headers) might be written to your copied object. Amazon S3 can't verify this unintended metadata. To address this issue, check your network and browser configuration to make sure that it doesn't modify headers such as `Cache-Control`. For more information, see [The Shared Responsibility Model](https://docs.aws.amazon.com/whitepapers/latest/applying-security-practices-to-network-workload-for-csps/the-shared-responsibility-model.html).

**Warning**  
When replacing metadata for folders, wait for the **Copy** action to finish before adding new objects to the folder. Otherwise, new objects might also be edited.

The following topics describe how to replace metadata for an object by using the **Copy** action in the Amazon S3 console.

## Replacing system-defined metadata
<a name="add-object-metadata-system"></a>

You can replace some system-defined metadata for an S3 object. For a list of system-defined metadata and values that you can modify, see [System-defined object metadata](UsingMetadata.md#SysMetadata).

**To replace system-defined metadata of an object**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets** or **Directory buckets**.

1. In the list of buckets, choose the name of the bucket that contains the objects you want to change.

1. Select the check box for the objects you want to change.

1. On the **Actions** menu, choose **Copy** from the list of options that appears.

1. To specify the destination path, choose **Browse S3**, navigate to the same destination as the source objects, and select the destination check box. Choose **Choose destination** in the lower-right corner. 

   Alternatively, enter the destination path. 

1. If you do *not* have bucket versioning enabled, you see a warning that recommends enabling Bucket Versioning to help protect against unintentionally overwriting or deleting objects. If you want to keep all versions of objects in this bucket, select **Enable Bucket Versioning**. You can also view the default encryption and Object Lock properties in **Destination details**.

1. Under **Additional copy settings**, choose **Specify settings** to specify settings for **Metadata**.

1. Scroll to the **Metadata** section, and then choose **Replace all metadata**.

1. Choose **Add metadata**.

1. For metadata **Type**, select **System-defined**.

1. Specify a unique **Key** and the metadata **Value**.

1. To edit additional metadata, choose **Add metadata**. You can also choose **Remove** to remove a set of type-key-values.

1. Choose **Copy**. Amazon S3 saves your metadata changes.

## Replacing user-defined metadata
<a name="add-object-metadata-user-defined"></a>

You can replace user-defined metadata of an object by combining the metadata prefix, `x-amz-meta-`, and a name you choose to create a custom key. For example, if you add the custom name `alt-name`, the metadata key would be `x-amz-meta-alt-name`. 

User-defined metadata can be as large as 2 KB total. To calculate the total size of user-defined metadata, sum the number of bytes in the UTF-8 encoding for each key and value. Both keys and their values must conform to US-ASCII standards. For more information, see [User-defined object metadata](UsingMetadata.md#UserMetadata).
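Combining the prefix with a custom name can be sketched as follows. The US-ASCII check and proactive lowercasing here are our additions, reflecting the rules stated above (S3 itself lowercases stored keys):

```python
def metadata_header(name: str) -> str:
    """Build the full user-defined metadata header for a custom name.
    S3 stores user-defined metadata keys in lowercase, so we lowercase
    the name up front to match what a HEAD request would return."""
    if not name.isascii():
        raise ValueError("metadata key names must be US-ASCII")
    return "x-amz-meta-" + name.lower()
```

For example, the custom name `alt-name` produces the key `x-amz-meta-alt-name`.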

**To replace user-defined metadata of an object**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation pane, choose **Buckets**, and then choose the **General purpose buckets** or **Directory buckets** tab. Navigate to the Amazon S3 bucket or folder that contains the objects you want to change.

1. Select the check box for the objects you want to change.

1. On the **Actions** menu, choose **Copy** from the list of options that appears.

1. To specify the destination path, choose **Browse S3**, navigate to the same destination as the source objects, and select the destination check box. Choose **Choose destination**. 

   Alternatively, enter the destination path. 

1. If you do *not* have bucket versioning enabled, you see a warning that recommends enabling Bucket Versioning to help protect against unintentionally overwriting or deleting objects. If you want to keep all versions of objects in this bucket, select **Enable Bucket Versioning**. You can also view the default encryption and Object Lock properties in **Destination details**.

1. Under **Additional copy settings**, choose **Specify settings** to specify settings for **Metadata**.

1. Scroll to the **Metadata** section, and then choose **Replace all metadata**.

1. Choose **Add metadata**.

1. For metadata **Type**, choose **User-defined**.

1. Enter a unique custom **Key** following `x-amz-meta-`. Also enter a metadata **Value**.

1. To add additional metadata, choose **Add metadata**. You can also choose **Remove** to remove a set of type-key-values. 

1. Choose **Copy**. Amazon S3 saves your metadata changes.

# Discovering your data with S3 Metadata tables
<a name="metadata-tables-overview"></a>

Amazon S3 Metadata accelerates data discovery by automatically capturing metadata for objects in your general purpose buckets and storing it in read-only, fully managed Apache Iceberg tables that you can query. These read-only tables are called *metadata tables*. As objects are added to, updated, or removed from your general purpose buckets, S3 Metadata automatically refreshes the corresponding metadata tables to reflect the latest changes.

By default, S3 Metadata provides three types of metadata: 
+ [System-defined metadata](UsingMetadata.md#SysMetadata), such as an object's creation time and storage class
+ Custom metadata, such as tags and [user-defined metadata](UsingMetadata.md#UserMetadata) that was included during object upload
+ Event metadata, such as when an object is updated or deleted, and the AWS account that made the request

With S3 Metadata, you can easily find, store, and query metadata for your S3 objects, so that you can quickly prepare data for use in business analytics, content retrieval, artificial intelligence and machine learning (AI/ML) model training, and more. 

For each general purpose bucket, you can create a metadata table configuration that contains two complementary metadata tables:
+ **Journal table** – By default, your metadata table configuration contains a *journal table*, which captures events that occur for the objects in your bucket. The journal table records changes made to your data in near real time, helping you to identify new data uploaded to your bucket, track recently deleted objects, monitor lifecycle transitions, and more. The journal table records new objects and updates to your objects and their metadata (those updates that require either a `PUT` or a `DELETE` operation). 

  The journal table captures metadata only for change events (such as uploads, updates, and deletes) that happen after you create your metadata table configuration. Because this table is queryable, you can audit the changes to your bucket through simple SQL queries. 

  The journal table is required for each metadata table configuration. (In the initial release of S3 Metadata, the journal table was referred to as "the metadata table.")

  For more information about what data is stored in journal tables, see [S3 Metadata journal tables schema](metadata-tables-schema.md).

  To help minimize your storage costs, you can choose to enable journal table record expiration. For more information, see [Expiring journal table records](metadata-tables-expire-journal-table-records.md). 
+ **Live inventory table** – Optionally, you can add a *live inventory table* to your metadata table configuration. The live inventory table provides a simple, queryable inventory of all the objects and their versions in your bucket so that you can determine the latest state of your data. 

  You can use the live inventory table to simplify and speed up business workflows and big data jobs by identifying objects that you want to process for various workloads. For example, you can query the live inventory table to find all objects stored in a particular storage class, all objects with certain tags, all objects that aren't encrypted with server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS), and more. 

  When you enable the live inventory table for your metadata table configuration, the table goes through a process known as *backfilling*, during which Amazon S3 scans your general purpose bucket to retrieve the initial metadata for all objects that exist in the bucket. Depending on the number of objects in your bucket, this process can take minutes (minimum 15 minutes) to hours. When the backfilling process is finished, the status of your live inventory table changes from **Backfilling** to **Active**. After backfilling is completed, updates to your objects are typically reflected in the live inventory table within one hour.

  You're charged for backfilling your inventory table. If your general purpose bucket has more than one billion objects, you're also charged a monthly fee for your live inventory table. For more information, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

  For more information about what data is stored in live inventory tables, see [S3 Metadata live inventory tables schema](metadata-tables-inventory-schema.md).

Your metadata tables are stored in an AWS managed S3 table bucket, which provides storage that's optimized for tabular data. To query your metadata, you can integrate your table bucket with Amazon SageMaker Lakehouse. This integration, which uses the AWS Glue Data Catalog and AWS Lake Formation, allows AWS analytics services to automatically discover and access your table data. 

After your table bucket is integrated with the AWS Glue Data Catalog, you can directly query your metadata tables with AWS analytics services such as Amazon Athena, Amazon EMR, and Amazon Redshift. You can also create interactive dashboards with your query data by using Amazon QuickSight. For more information about integrating your AWS managed S3 table bucket with Amazon SageMaker Lakehouse, see [Integrating Amazon S3 Tables with AWS analytics services](s3-tables-integrating-aws.md).

You can also query your metadata tables with Apache Spark, Apache Trino, and any other application that supports the Apache Iceberg format by using the AWS Glue Iceberg REST endpoint, the Amazon S3 Tables Iceberg REST endpoint, or the Amazon S3 Tables Catalog for Apache Iceberg client catalog. For more information about accessing your metadata tables, see [Accessing table data](s3-tables-access.md).

For S3 Metadata pricing, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/). 

## How metadata tables work
<a name="metadata-tables-how-they-work"></a>

Metadata tables are read-only: they're managed by Amazon S3 and can't be modified by any IAM principal outside of Amazon S3 itself. This restriction helps ensure that the tables correctly reflect the contents of your general purpose bucket. You can, however, delete your metadata tables.

To generate and store object metadata in AWS managed metadata tables, you create a metadata table configuration for your general purpose bucket. Amazon S3 is designed to continuously update the metadata tables to reflect the latest changes to your data as long as the configuration is active on the general purpose bucket. 

Before you create a metadata table configuration, make sure that you have the necessary AWS Identity and Access Management (IAM) permissions to create and manage metadata tables. For more information, see [Setting up permissions for configuring metadata tables](metadata-tables-permissions.md).

**Metadata table storage, organization, and encryption**  
When you create your metadata table configuration, your metadata tables are stored in an AWS managed table bucket. All metadata table configurations in your account and in the same Region are stored in a single AWS managed table bucket. These AWS managed table buckets are named `aws-s3` and have the following Amazon Resource Name (ARN) format: 

`arn:aws:s3tables:region:account_id:bucket/aws-s3`

For example, if your account ID is 123456789012 and your general purpose bucket is in US East (N. Virginia) (`us-east-1`), your AWS managed table bucket is also created in US East (N. Virginia) (`us-east-1`) and has the following ARN:

`arn:aws:s3tables:us-east-1:123456789012:bucket/aws-s3`

By default, AWS managed table buckets are encrypted with server-side encryption using Amazon S3 managed keys (SSE-S3). After you create your first metadata configuration, you can set the default encryption setting for the AWS managed table bucket to use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS). For more information, see [Encryption for AWS managed table buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-aws-managed-buckets.html#aws-managed-buckets-encryption) and [Specifying server-side encryption with AWS KMS keys (SSE-KMS) in table buckets](s3-tables-kms-specify.md).

Within your AWS managed table bucket, the metadata tables for your configuration are typically stored in a namespace with the following naming format: 

`b_general-purpose-bucket-name`

**Note**  
If your general purpose bucket name contains any periods, the periods are converted to underscores (`_`) in the namespace name. 
If your general purpose bucket was created before March 1, 2018, its name might contain uppercase letters and underscores, and it might also be up to 255 characters long. If your bucket name has these characteristics, your metadata table namespace will have a different format. The general purpose bucket name will be prefixed with `b_`, truncated to 63 characters, converted to all lowercase, and suffixed with a hash. 
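For illustration, the typical namespace name can be derived as follows. This sketch handles only the common case; it doesn't model the truncated, lowercased, hash-suffixed form used for legacy bucket names, because that hashing scheme isn't documented here:

```python
def metadata_namespace(bucket_name: str) -> str:
    """Derive the typical metadata-table namespace for a general
    purpose bucket: 'b_' plus the bucket name, with periods replaced
    by underscores. Legacy (pre-March 2018) bucket names get a
    truncated, hash-suffixed form that isn't modeled here."""
    return "b_" + bucket_name.replace(".", "_")
```

For example, a bucket named `amzn-s3-demo-bucket` maps to the namespace `b_amzn-s3-demo-bucket`.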

Metadata tables have the following Amazon Resource Name (ARN) format, which includes the table ID of the metadata table: 

`arn:aws:s3tables:region-code:account-id:bucket/aws-s3/table/table-id`

For example, a metadata table in the US East (N. Virginia) Region would have an ARN like the following:

`arn:aws:s3tables:us-east-1:111122223333:bucket/aws-s3/table/a12bc345-67d8-912e-3456-7f89123g4h56`

Journal tables have the name `journal`, and live inventory tables have the name `inventory`.

When you create your metadata table configuration, you can choose to encrypt your AWS managed metadata tables with server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS). If you choose to use SSE-KMS, you must provide a customer managed KMS key in the same Region as your general purpose bucket. You can set the encryption type for your tables only during table creation. After an AWS managed table is created, you can't change its encryption setting. To specify SSE-KMS for your metadata tables, you must have certain permissions. For more information, see [Permissions for SSE-KMS](metadata-tables-permissions.md#metadata-kms-permissions).

The encryption setting for a metadata table takes precedence over the default bucket-level encryption setting. If you don't specify encryption for a table, it will inherit the default encryption setting from the bucket.

AWS managed table buckets don't count toward your S3 Tables quotas. For more information about working with AWS managed table buckets and AWS managed tables, see [Working with AWS managed table buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-aws-managed-buckets.html).

To monitor updates to your metadata table configuration, you can use AWS CloudTrail. For more information, see [Amazon S3 bucket-level actions that are tracked by CloudTrail logging](cloudtrail-logging-s3-info.md#cloudtrail-bucket-level-tracking). 

**Metadata table maintenance and record expiration**  
To keep your metadata tables performing at their best, Amazon S3 performs periodic maintenance activities on your tables, such as compaction and unreferenced file removal. These maintenance activities help to both minimize the cost of storing your metadata tables and optimize query performance. This table maintenance happens automatically, requiring no opt-in or ongoing management by you.

**Note**  
You can't control the expiration of journal table or inventory table snapshots. For each table, Amazon S3 stores a minimum of 1 snapshot for a maximum of 24 hours.
To help minimize your costs, you can configure journal table record expiration. By default, journal table records don't expire, and journal table records must be retained for a minimum of 7 days. For more information, see [Expiring journal table records](metadata-tables-expire-journal-table-records.md).

**Topics**
+ [How metadata tables work](#metadata-tables-how-they-work)
+ [Metadata table limitations and restrictions](metadata-tables-restrictions.md)
+ [S3 Metadata journal tables schema](metadata-tables-schema.md)
+ [S3 Metadata live inventory tables schema](metadata-tables-inventory-schema.md)
+ [Configuring metadata tables](metadata-tables-configuring.md)
+ [Querying metadata tables](metadata-tables-querying.md)
+ [Troubleshooting S3 Metadata](metadata-tables-troubleshooting.md)

# Metadata table limitations and restrictions
<a name="metadata-tables-restrictions"></a>

Amazon S3 Metadata has the following limitations and restrictions: 
+ S3 Metadata is currently available only in certain Regions. For more information, see [S3 Metadata AWS Regions](#metadata-tables-regions).
+ S3 Metadata supports all storage classes that general purpose buckets support. For the S3 Intelligent-Tiering storage class, the specific access tier isn't shown in the metadata table.
+ When you create a metadata table configuration, your metadata tables are stored in an AWS managed table bucket. You can't store your configuration in a customer-managed table bucket.
+ S3 Metadata isn't supported for directory buckets, table buckets, or vector buckets. You can create metadata table configurations only for general purpose buckets. The journal table captures metadata only for change events (such as uploads, updates, and deletes) that happen after you have created your metadata table configuration.
+ You can't control the expiration of journal table or inventory table snapshots. For each table, Amazon S3 stores a minimum of 1 snapshot for a maximum of 24 hours. 

  To help minimize your costs, you can configure journal table record expiration. By default, journal table records don't expire. If you configure record expiration, records must be retained for a minimum of 7 days. For more information, see [Expiring journal table records](metadata-tables-expire-journal-table-records.md).
+ You can create a metadata table configuration only for an entire general purpose bucket. You can't apply a metadata table configuration at the prefix level.
+ You can't pause and resume updates to metadata tables. However, you can delete your associated metadata configuration for journal or live inventory tables. Deleting your configuration doesn't delete the associated journal or inventory table. To re-create your configuration, you must first delete the old journal or inventory table, and then Amazon S3 can create a new journal or inventory table. When you re-enable the inventory table, Amazon S3 creates a new inventory table, and you're charged again for backfilling the new inventory table.
+ Metadata tables don't contain all the same metadata as is available through S3 Inventory or through the Amazon S3 REST API. For example, the following information isn't available in metadata tables: 
  + S3 Lifecycle expiration eligibility or transition status
  + Object Lock retention period or governance mode
  + Object access control list (ACL) information
  + Replication status
+ When you're using Amazon Athena or Amazon Redshift to query your metadata tables, you must surround your metadata table namespace names with quotation marks (`"`) or backticks (`` ` ``). Otherwise, the query might not work. For examples, see [Example metadata table queries](metadata-tables-example-queries.md).
+ When you're using Apache Spark on Amazon EMR or other third-party engines to query your metadata tables, we recommend that you use the Amazon S3 Tables Iceberg REST endpoint. Your query might not run successfully if you don't use this endpoint. For more information, see [Accessing tables using the Amazon S3 Tables Iceberg REST endpoint](s3-tables-integrating-open-source.md).
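To illustrate the quoting requirement, the following Athena query quotes the catalog and namespace names. This query is for illustration only; the catalog, namespace, and table names are placeholders, so replace them with the values from your own metadata table configuration.

```
-- Quote names that contain special characters (such as the slash in the
-- catalog name). Without the quotation marks, the query might not parse.
SELECT key, size, storage_class
FROM "s3tablescatalog/aws-s3"."example_namespace"."journal"
LIMIT 10;
```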

## S3 Metadata AWS Regions
<a name="metadata-tables-regions"></a>

S3 Metadata is currently available in the following AWS Regions:


|  AWS Region name  |  AWS Region code  | 
| --- | --- | 
|  Africa (Cape Town)  |  `af-south-1`  | 
|  Asia Pacific (Hong Kong)  |  `ap-east-1`  | 
|  Asia Pacific (Jakarta)  |  `ap-southeast-3`  | 
|  Asia Pacific (Melbourne)  |  `ap-southeast-4`  | 
|  Asia Pacific (Mumbai)  |  `ap-south-1`  | 
|  Asia Pacific (Osaka)  |  `ap-northeast-3`  | 
|  Asia Pacific (Seoul)  |  `ap-northeast-2`  | 
|  Asia Pacific (Singapore)  |  `ap-southeast-1`  | 
|  Asia Pacific (Sydney)  |  `ap-southeast-2`  | 
|  Asia Pacific (Tokyo)  |  `ap-northeast-1`  | 
|  Canada (Central)  |  `ca-central-1`  | 
|  Canada West (Calgary)  |  `ca-west-1`  | 
|  Europe (Frankfurt)  |  `eu-central-1`  | 
|  Europe (Zurich)  |  `eu-central-2`  | 
|  Europe (Ireland)  |  `eu-west-1`  | 
|  Europe (London)  |  `eu-west-2`  | 
|  Europe (Milan)  |  `eu-south-1`  | 
|  Europe (Paris)  |  `eu-west-3`  | 
|  Europe (Spain)  |  `eu-south-2`  | 
|  Europe (Stockholm)  |  `eu-north-1`  | 
|  Israel (Tel Aviv)  |  `il-central-1`  | 
|  Middle East (Bahrain)  |  `me-south-1`  | 
|  Middle East (UAE)  |  `me-central-1`  | 
|  South America (São Paulo)  |  `sa-east-1`  | 
|  US East (N. Virginia)  |  `us-east-1`  | 
|  US East (Ohio)  |  `us-east-2`  | 
|  US West (N. California)  |  `us-west-1`  | 
|  US West (Oregon)  |  `us-west-2`  | 
|  China (Beijing)  |  `cn-north-1`  | 
|  China (Ningxia)  |  `cn-northwest-1`  | 

# S3 Metadata journal tables schema
<a name="metadata-tables-schema"></a>

The journal table records changes made to your data in near real time, helping you to identify new data uploaded to your bucket, track recently deleted objects, monitor lifecycle transitions, and more. The journal table records new objects and updates to your objects and their metadata (those updates that require either a `PUT` or a `DELETE` operation). Because this table is queryable, you can audit the changes to your bucket through simple SQL queries. 

You can use the journal table for security, auditing, and compliance use cases to track uploaded, deleted, and changed objects in the bucket. For example, you can query the journal table to answer questions such as: 
+ Which objects were deleted in the past 24 hours by S3 Lifecycle?
+ Which IP addresses did the most recent `PUT` requests come from?
+ Which AWS Key Management Service (AWS KMS) keys were used for `PUT` requests in the past 7 days?
+ Which objects in your bucket were created by Amazon Bedrock in the last five days?
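For example, the following Athena query addresses the first question (objects deleted in the past 24 hours by S3 Lifecycle). This query is for illustration only; the catalog, namespace, and table names are placeholders, so replace them with your own values.

```
-- Delete events from the past 24 hours where the requester is the
-- Amazon S3 service principal (for example, S3 Lifecycle expirations).
SELECT key, version_id, record_timestamp
FROM "s3tablescatalog/aws-s3"."example_namespace"."journal"
WHERE record_type = 'DELETE'
  AND requester = 's3.amazonaws.com'
  AND record_timestamp > (current_timestamp - INTERVAL '1' DAY)
ORDER BY record_timestamp DESC;
```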

Amazon S3 Metadata journal tables contain rows and columns. Each row represents a mutation event that has created, updated, or deleted an object in your general purpose bucket. Most of these events result from user actions, but some of these events result from actions taken by Amazon S3 on your behalf, such as S3 Lifecycle expirations or storage class transitions. 

S3 Metadata journal tables are eventually consistent with the changes that have occurred in your general purpose bucket. In some cases, by the time that S3 Metadata is notified that an object was created or updated, that object might already have been overwritten or deleted in the bucket. In such cases, the object can no longer be retrieved, and some columns might contain a `NULL` value to indicate that the metadata is missing.

The following is an example of a journal table for a general purpose bucket named `amzn-s3-demo-bucket`: 

```
bucket                key                        sequence_number                                                                                          record_type   record_timestamp           version_id   is_delete_marker   size   last_modified_date   e_tag	                           storage_class  is_multipart   encryption_status   is_bucket_key_enabled   kms_key_arn                                                                   checksum_algorithm   object_tags   user_metadata	                                                                                                                 requester      source_ip_address   request_id 
amzn-s3-demo-bucket   Finance/statement1.pdf     80e737d8b4d82f776affffffffffffffff006737d8b4d82f776a00000000000000000000000000000000000000000000000072   CREATE        2024-11-15 23:26:44.899                 FALSE              6223   11/15/2024 23:26     e131b86632dda753aac4018f72192b83    STANDARD	  FALSE          SSE-KMS             FALSE                   arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890df   SSECRC32             {}            {count -> Asia, customs -> false, family -> true, location -> Mary, name -> football, user -> United States}                       111122223333   192.0.2.1           CVK8FWYRW0M9JW65
amzn-s3-demo-bucket   s3-dg.pdf                  80e737d8b4e39f1dbdffffffffffffffff006737d8b4e39f1dbd00000000000000000000000000000000000000000000000072   CREATE        2024-11-15 23:26:44.942                 FALSE              3554   11/15/2024 23:26     9bb49efc2d92c05558ddffbbde8636d5    STANDARD	  FALSE          DSSE-KMS            FALSE                   arn:aws:kms:us-east-1:936810216292:key/0dcebce6-49fd-4cae-b2e2-5512ad281afd   SSESHA1              {}            {}                                                                                                                                 111122223333   192.0.2.1           CVKAQDRAZEG7KXAY
amzn-s3-demo-bucket   Development/Projects.xls   80e737d8b4ed9ac5c6ffffffffffffffff006737d8b4ed9ac5c600000000000000000000000000000000000000000000000072   CREATE        2024-11-15 23:26:44.966                 FALSE              7746   11/15/2024 23:26     729a6863e47fb9955b31bfabce984908    STANDARD	  FALSE          SSE-S3              FALSE                   NULL                                                                          SSECRC32             {}            {count -> Asia, customs -> Canada, family -> Billiards, filter -> true, location -> Europe, name -> Asia, user -> United States}   111122223333   192.0.2.1           CVK7Z6XQTQ90BSRV
```

Journal tables have the following schema:


| Column name | Required? | Data type |   | 
| --- | --- | --- | --- | 
| `bucket` | Yes | String | The general purpose bucket name. For more information, see [General purpose bucket naming rules](bucketnamingrules.md). | 
| `key` | Yes | String | The object key name (or key) that uniquely identifies the object in the bucket. For more information, see [Naming Amazon S3 objects](object-keys.md). | 
| `sequence_number` | Yes | String | The sequence number, which is an ordinal that's included in the records for a given object. To order records of the same bucket and key, you can sort on `sequence_number`. For a given bucket and key, a lexicographically larger `sequence_number` value implies that the record was introduced to the bucket more recently. | 
| `record_type` | Yes | String | The type of this record, one of `CREATE`, `UPDATE_METADATA`, or `DELETE`. `CREATE` records indicate that a new object (or a new version of the object) was written to the bucket. `UPDATE_METADATA` records capture changes to mutable metadata for an existing object, such as the storage class or tags. `DELETE` records indicate that this object (or this version of the object) has been deleted. When versioning is enabled, `DELETE` records represent either a delete marker or a permanent delete. To distinguish between the two, consult the optional `is_delete_marker` column. For more information, see [Deleting object versions from a versioning-enabled bucket](DeletingObjectVersions.md).  A permanent delete carries `NULL` values in all columns except `bucket`, `key`, `sequence_number`, `record_type`, `record_timestamp`, and `version_id` (that is, the columns marked as required).  | 
| `record_timestamp` | Yes | Timestamp NTZ (no time zone) | The timestamp that's associated with this record. | 
| `version_id` | No | String |  The object's version ID. When you enable versioning on a bucket, Amazon S3 assigns a version number to objects that are added to the bucket. For more information, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md). Objects that are stored in your bucket before you set the versioning state have a version ID of null.  | 
| `is_delete_marker` | No | Boolean |  The object's delete marker status. For `DELETE` records that are delete markers, this value is `TRUE`. For permanent deletions, this value is omitted (`NULL`). Other record types (`CREATE` and `UPDATE_METADATA`) have the value `FALSE`. For more information, see [Working with delete markers](DeleteMarker.md).  Rows that are added for delete markers have a `record_type` value of `DELETE`, not `UPDATE_METADATA`. If the delete marker is created as the result of an S3 Lifecycle expiration, the `requester` value is `s3.amazonaws.com`.   | 
| `size` | No | Long | The object size in bytes, not including the size of incomplete multipart uploads or object metadata. If `is_delete_marker` is `TRUE`, the size is `0`. For more information, see [System-defined object metadata](UsingMetadata.md#SysMetadata). | 
| `last_modified_date` | No | Timestamp NTZ (no time zone) | The object creation date or the last modified date, whichever is the latest. For multipart uploads, the object creation date is the date when the multipart upload is initiated. For more information, see [System-defined object metadata](UsingMetadata.md#SysMetadata). | 
| `e_tag` | No | String | The entity tag (ETag), which is a hash of the object. The ETag reflects changes only to the contents of an object, not to its metadata. The ETag can be an MD5 digest of the object data. Whether the ETag is an MD5 digest depends on how the object was created and how it's encrypted. For more information, see [Object](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Object.html) in the *Amazon S3 API Reference*. | 
| `storage_class` | No | String | The storage class that’s used for storing the object. One of `STANDARD`, `REDUCED_REDUNDANCY`, `STANDARD_IA`, `ONEZONE_IA`, `INTELLIGENT_TIERING`, `GLACIER`, `DEEP_ARCHIVE`, or `GLACIER_IR`. For more information, see [Understanding and managing Amazon S3 storage classes](storage-class-intro.md). | 
| `is_multipart` | No | Boolean | The object's upload type. If the object was uploaded as a multipart upload, this value is `TRUE`. Otherwise, it's `FALSE`. For more information, see [Uploading and copying objects using multipart upload in Amazon S3](mpuoverview.md). | 
| `encryption_status` | No | String | The object's server-side encryption status, depending on what kind of encryption key is used: server-side encryption with Amazon S3 managed keys (SSE-S3), server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), dual-layer server-side encryption with AWS KMS keys (DSSE-KMS), or server-side encryption with customer-provided keys (SSE-C). If the object is unencrypted, this value is null. Possible values are `SSE-S3`, `SSE-KMS`, `DSSE-KMS`, `SSE-C`, or null. For more information, see [Protecting data with encryption](UsingEncryption.md). | 
| `is_bucket_key_enabled` | No | Boolean | The object's S3 Bucket Key enablement status. If the object uses an S3 Bucket Key for SSE-KMS, this value is `TRUE`. Otherwise, it's `FALSE`. For more information, see [Configuring an S3 Bucket Key at the object level](configuring-bucket-key-object.md). | 
| `kms_key_arn` | No | String |  The Amazon Resource Name (ARN) for the KMS key with which the object is encrypted, for rows where `encryption_status` is `SSE-KMS` or `DSSE-KMS`. If the object isn't encrypted with SSE-KMS or DSSE-KMS, the value is null. For more information, see [Using server-side encryption with AWS KMS keys (SSE-KMS)](UsingKMSEncryption.md) and [Using dual-layer server-side encryption with AWS KMS keys (DSSE-KMS)](UsingDSSEncryption.md).  If a row represents an object version that no longer existed at the time that a delete or overwrite event was processed, `kms_key_arn` contains a null value, even if the `encryption_status` column value is `SSE-KMS` or `DSSE-KMS`.   | 
| `checksum_algorithm` | No | String | The algorithm that’s used to create the checksum for the object, one of `CRC64NVME`, `CRC32`, `CRC32C`, `SHA1`, or `SHA256`. If no checksum is present, this value is null. For more information, see [Using supported checksum algorithms](checking-object-integrity-upload.md#using-additional-checksums). | 
| `object_tags` | No | Map <String, String> |  The object tags that are associated with the object. Object tags are stored as a map of key-value pairs. If an object has no object tags, an empty map (`{}`) is stored. For more information, see [Categorizing your objects using tags](object-tagging.md).  If the `record_type` value is `DELETE`, the `object_tags` column contains a null value. If the `record_type` value is `CREATE` or `UPDATE_METADATA`, rows that represent object versions that no longer existed at the time that a delete or overwrite event was processed will contain a null value in the `object_tags` column.    | 
| `user_metadata` | No | Map <String, String> |  The user metadata that's associated with the object. User metadata is stored as a map of key-value pairs. If an object has no user metadata, an empty map (`{}`) is stored. For more information, see [User-defined object metadata](UsingMetadata.md#UserMetadata).   If the `record_type` value is `DELETE`, the `user_metadata` column contains a null value. If the `record_type` value is `CREATE` or `UPDATE_METADATA`, rows that represent object versions that no longer existed at the time that a delete or overwrite event was processed will contain a null value in the `user_metadata` column.   | 
| `requester` | No | String | The AWS account ID of the requester or the AWS service principal that made the request. For example, if the requester is S3 Lifecycle, this value is `s3.amazonaws.com`.  | 
| `source_ip_address` | No | String | The source IP address of the request. For records that are generated by a user request, this column contains the source IP address of the request. For actions taken by Amazon S3 or another AWS service on behalf of the user, this column contains a null value. | 
| `request_id` | No | String | The request ID that's associated with the request. | 
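Because a lexicographically larger `sequence_number` value indicates a more recent record for the same bucket and key, you can reduce the journal table to the latest record per object. The following Athena query is for illustration only; the catalog, namespace, and table names are placeholders, so replace them with your own values.

```
-- Keep only the most recent journal record for each object key.
SELECT bucket, key, record_type, record_timestamp
FROM (
  SELECT bucket, key, record_type, record_timestamp,
         row_number() OVER (PARTITION BY bucket, key
                            ORDER BY sequence_number DESC) AS rn
  FROM "s3tablescatalog/aws-s3"."example_namespace"."journal"
) AS latest
WHERE rn = 1;
```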

# S3 Metadata live inventory tables schema
<a name="metadata-tables-inventory-schema"></a>

The live inventory table provides a simple, queryable inventory of all the objects and their versions in your bucket so that you can determine the latest state of your data. Updates to your objects are typically reflected in the inventory table within one hour.

You can use this table to simplify and speed up business workflows and big data jobs by identifying objects that you want to process for various workloads. For example, you can query the inventory table to do the following: 
+ Find all objects stored in the S3 Glacier Deep Archive storage class.
+ Create a distribution of object tags or find objects without tags.
+ Find all objects that aren't encrypted by using server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS). 
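For example, the following Athena query finds objects in the S3 Glacier Deep Archive storage class. This query is for illustration only; the catalog, namespace, and table names are placeholders, so replace them with your own values.

```
-- All objects currently stored in the S3 Glacier Deep Archive storage class.
SELECT key, size, last_modified_date
FROM "s3tablescatalog/aws-s3"."example_namespace"."inventory"
WHERE storage_class = 'DEEP_ARCHIVE';
```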

When you enable the inventory table for your metadata table configuration, the table goes through a process known as *backfilling*, during which Amazon S3 scans your general purpose bucket to retrieve the initial metadata for all objects in the bucket. Depending on the number of objects in your bucket, this process can take anywhere from minutes (a minimum of 15 minutes) to several hours. When the backfilling process is finished, the status of your inventory table changes from **Backfilling** to **Active**. After backfilling is completed, updates to your objects are typically reflected in the inventory table within one hour.

**Note**  
You're charged for backfilling your inventory table. If your general purpose bucket has more than one billion objects, you're also charged a monthly fee for your inventory table. For more information, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

Amazon S3 Metadata inventory tables contain rows and columns. Each row represents the current state of an object in your general purpose bucket.

The following is an example of an inventory table for a general purpose bucket named `amzn-s3-demo-bucket`: 

```
bucket                key                        sequence_number                                                                                          version_id   is_delete_marker   size   last_modified_date   e_tag	                          storage_class   is_multipart   encryption_status   is_bucket_key_enabled   kms_key_arn                                                                   checksum_algorithm   object_tags   user_metadata	                                                                                                                  
amzn-s3-demo-bucket   Finance/statement1.pdf     80e737d8b4d82f776affffffffffffffff006737d8b4d82f776a00000000000000000000000000000000000000000000000072                FALSE              6223   11/15/2024 23:26     e131b86632dda753aac4018f72192b83    STANDARD	  FALSE          SSE-KMS             FALSE                   arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890df   SSECRC32             {}            {count -> Asia, customs -> false, family -> true, location -> Mary, name -> football, user -> United States}                      
amzn-s3-demo-bucket   s3-dg.pdf                  80e737d8b4e39f1dbdffffffffffffffff006737d8b4e39f1dbd00000000000000000000000000000000000000000000000072                FALSE              3554   11/15/2024 23:26     9bb49efc2d92c05558ddffbbde8636d5    STANDARD	  FALSE          DSSE-KMS            FALSE                   arn:aws:kms:us-east-1:936810216292:key/0dcebce6-49fd-4cae-b2e2-5512ad281afd   SSESHA1              {}            {}                                                                                                                                
amzn-s3-demo-bucket   Development/Projects.xls   80e737d8b4ed9ac5c6ffffffffffffffff006737d8b4ed9ac5c600000000000000000000000000000000000000000000000072                FALSE              7746   11/15/2024 23:26     729a6863e47fb9955b31bfabce984908    STANDARD	  FALSE          SSE-S3              FALSE                   NULL                                                                          SSECRC32             {}            {count -> Asia, customs -> Canada, family -> Billiards, filter -> true, location -> Europe, name -> Asia, user -> United States}
```

Inventory tables have the following schema:


| Column name | Required? | Data type |   | 
| --- | --- | --- | --- | 
|  `bucket`  | Yes | String | The general purpose bucket name. For more information, see [General purpose bucket naming rules](bucketnamingrules.md). | 
|  `key`  | Yes | String | The object key name (or key) that uniquely identifies the object in the bucket. For more information, see [Naming Amazon S3 objects](object-keys.md). | 
|  `sequence_number`  | Yes | String |  The sequence number, which is an ordinal that's included in the records for a given object. To order records of the same bucket and key, you can sort on `sequence_number`. For a given bucket and key, a lexicographically larger `sequence_number` value implies that the record was introduced to the bucket more recently.  | 
|  `version_id`  | No | String |  The object's version ID. When you enable versioning on a bucket, Amazon S3 assigns a version number to objects that are added to the bucket. For more information, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md). Objects that are stored in your bucket before you set the versioning state have a version ID of null.  | 
|  `is_delete_marker`  | No | Boolean |  The object's delete marker status. If the object is a delete marker, this value is `TRUE`. Otherwise, it's `FALSE`. For more information, see [Working with delete markers](DeleteMarker.md).  | 
|  `size`  | No | Long |  The object size in bytes, not including the size of incomplete multipart uploads or object metadata. If `is_delete_marker` is `TRUE`, the size is `0`. For more information, see [System-defined object metadata](UsingMetadata.md#SysMetadata).  | 
|  `last_modified_date`  | No | Timestamp NTZ (no time zone) |  The object creation date or the last modified date, whichever is the latest. For multipart uploads, the object creation date is the date when the multipart upload is initiated. For more information, see [System-defined object metadata](UsingMetadata.md#SysMetadata).  | 
|  `e_tag`  | No | String |  The entity tag (ETag), which is a hash of the object. The ETag reflects changes only to the contents of an object, not to its metadata. The ETag can be an MD5 digest of the object data. Whether the ETag is an MD5 digest depends on how the object was created and how it's encrypted. For more information, see [Object](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Object.html) in the *Amazon S3 API Reference*.  | 
|  `storage_class`  | No | String |  The storage class that’s used for storing the object. One of `STANDARD`, `REDUCED_REDUNDANCY`, `STANDARD_IA`, `ONEZONE_IA`, `INTELLIGENT_TIERING`, `GLACIER`, `DEEP_ARCHIVE`, or `GLACIER_IR`. For more information, see [Understanding and managing Amazon S3 storage classes](storage-class-intro.md).  | 
|  `is_multipart`  | No | Boolean |  The object's upload type. If the object was uploaded as a multipart upload, this value is `TRUE`. Otherwise, it's `FALSE`. For more information, see [Uploading and copying objects using multipart upload in Amazon S3](mpuoverview.md).  | 
|  `encryption_status`  | No | String |  The object's server-side encryption status, depending on what kind of encryption key is used: server-side encryption with Amazon S3 managed keys (SSE-S3), server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), dual-layer server-side encryption with AWS KMS keys (DSSE-KMS), or server-side encryption with customer-provided keys (SSE-C). If the object is unencrypted, this value is null. Possible values are `SSE-S3`, `SSE-KMS`, `DSSE-KMS`, `SSE-C`, or null. For more information, see [Protecting data with encryption](UsingEncryption.md).  | 
|  `is_bucket_key_enabled`  | No | Boolean |  The object's S3 Bucket Key enablement status. If the object uses an S3 Bucket Key for SSE-KMS, this value is `TRUE`. Otherwise, it's `FALSE`. For more information, see [Configuring an S3 Bucket Key at the object level](configuring-bucket-key-object.md).  | 
|  `kms_key_arn`  | No | String |  The Amazon Resource Name (ARN) for the KMS key with which the object is encrypted, for rows where `encryption_status` is `SSE-KMS` or `DSSE-KMS`. If the object isn't encrypted with SSE-KMS or DSSE-KMS, the value is null. For more information, see [Using server-side encryption with AWS KMS keys (SSE-KMS)](UsingKMSEncryption.md) and [Using dual-layer server-side encryption with AWS KMS keys (DSSE-KMS)](UsingDSSEncryption.md).  If a row represents an object version that no longer existed at the time that a delete or overwrite event was processed, `kms_key_arn` contains a null value, even if the `encryption_status` column value is `SSE-KMS` or `DSSE-KMS`.   | 
|  `checksum_algorithm`  | No | String |  The algorithm that’s used to create the checksum for the object, one of `CRC64NVME`, `CRC32`, `CRC32C`, `SHA1`, or `SHA256`. If no checksum is present, this value is null. For more information, see [Using supported checksum algorithms](checking-object-integrity-upload.md#using-additional-checksums).  | 
|  `object_tags`  | No | Map <String, String> |  The object tags that are associated with the object. Object tags are stored as a map of key-value pairs. If an object has no object tags, an empty map (`{}`) is stored. For more information, see [Categorizing your objects using tags](object-tagging.md).  If the `record_type` value is `DELETE`, the `object_tags` column contains a null value. If the `record_type` value is `CREATE` or `UPDATE_METADATA`, rows that represent object versions that no longer existed at the time that a delete or overwrite event was processed will contain a null value in the `object_tags` column.    | 
|  `user_metadata`  | No | Map <String, String> |  The user metadata that's associated with the object. User metadata is stored as a map of key-value pairs. If an object has no user metadata, an empty map (`{}`) is stored. For more information, see [User-defined object metadata](UsingMetadata.md#UserMetadata).   If the `record_type` value is `DELETE`, the `user_metadata` column contains a null value. If the `record_type` value is `CREATE` or `UPDATE_METADATA`, rows that represent object versions that no longer existed at the time that a delete or overwrite event was processed will contain a null value in the `user_metadata` column.   | 
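Because `object_tags` is stored as a map, you can expand it with `UNNEST` to build a distribution of object tags. The following Athena query is for illustration only; the catalog, namespace, and table names are placeholders, so replace them with your own values.

```
-- Count objects per tag key-value pair. Objects without tags have an
-- empty map and therefore produce no rows here.
SELECT t.tag_key, t.tag_value, count(*) AS object_count
FROM "s3tablescatalog/aws-s3"."example_namespace"."inventory"
CROSS JOIN UNNEST(object_tags) AS t (tag_key, tag_value)
GROUP BY t.tag_key, t.tag_value
ORDER BY object_count DESC;
```

To instead find objects without any tags, you can filter on `cardinality(object_tags) = 0`.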

# Configuring metadata tables
<a name="metadata-tables-configuring"></a>

Amazon S3 Metadata accelerates data discovery by automatically capturing metadata for the objects in your general purpose buckets and storing it in read-only, fully managed Apache Iceberg tables that you can query. These read-only tables are called *metadata tables*. As objects are added to, updated, and removed from your general purpose buckets, S3 Metadata automatically refreshes the corresponding metadata tables to reflect the latest changes.

With S3 Metadata, you can easily find, store, and query metadata for your S3 objects, so that you can quickly prepare data for use in business analytics, artificial intelligence and machine learning (AI/ML) model training, and more. 

To generate and store object metadata in AWS managed metadata tables, you create a metadata table configuration for your general purpose bucket. Amazon S3 is designed to continuously update the metadata tables to reflect the latest changes to your data as long as the configuration is active on the bucket. Additionally, Amazon S3 continuously optimizes your metadata tables to help reduce storage costs and improve analytics query performance.

To create a metadata table configuration, make sure that you have the necessary AWS Identity and Access Management (IAM) permissions to create and manage metadata tables. 

To monitor updates to your metadata table configuration, you can use AWS CloudTrail. For more information, see [Amazon S3 bucket-level actions that are tracked by CloudTrail logging](cloudtrail-logging-s3-info.md#cloudtrail-bucket-level-tracking).

**Topics**
+ [Setting up permissions for configuring metadata tables](metadata-tables-permissions.md)
+ [Creating metadata table configurations](metadata-tables-create-configuration.md)
+ [Controlling access to metadata tables](metadata-tables-access-control.md)
+ [Expiring journal table records](metadata-tables-expire-journal-table-records.md)
+ [Enabling or disabling live inventory tables](metadata-tables-enable-disable-inventory-tables.md)
+ [Viewing metadata table configurations](metadata-tables-view-configuration.md)
+ [Deleting metadata table configurations](metadata-tables-delete-configuration.md)
+ [Deleting metadata tables](metadata-tables-delete-table.md)

# Setting up permissions for configuring metadata tables
<a name="metadata-tables-permissions"></a>

To create a metadata table configuration, you must have the necessary AWS Identity and Access Management (IAM) permissions to both create and manage your metadata table configuration and to create and manage your metadata tables and the table bucket where your metadata tables are stored. 

To create and manage your metadata table configuration, you must have these permissions: 
+ `s3:CreateBucketMetadataTableConfiguration` – This permission allows you to create a metadata table configuration for your general purpose bucket. To create a metadata table configuration, additional permissions, including S3 Tables permissions, are required, as explained in the following sections. For a summary of the required permissions, see [Bucket operations and permissions](using-with-s3-policy-actions.md#using-with-s3-policy-actions-related-to-buckets). 
+ `s3:GetBucketMetadataTableConfiguration` – This permission allows you to retrieve information about your metadata table configuration.
+ `s3:DeleteBucketMetadataTableConfiguration` – This permission allows you to delete your metadata table configuration.
+ `s3:UpdateBucketMetadataJournalTableConfiguration` – This permission allows you to update your journal table configuration to expire journal table records.
+ `s3:UpdateBucketMetadataInventoryTableConfiguration` – This permission allows you to update your inventory table configuration to enable or disable the inventory table. To update an inventory table configuration, additional permissions, including S3 Tables permissions, are required. For a list of the required permissions, see [Bucket operations and permissions](using-with-s3-policy-actions.md#using-with-s3-policy-actions-related-to-buckets).
**Note**  
The `s3:CreateBucketMetadataTableConfiguration`, `s3:GetBucketMetadataTableConfiguration`, and `s3:DeleteBucketMetadataTableConfiguration` permissions are used for both V1 and V2 S3 Metadata configurations. For V2, the names of the corresponding API operations are `CreateBucketMetadataConfiguration`, `GetBucketMetadataConfiguration`, and `DeleteBucketMetadataConfiguration`.

To create and work with tables and table buckets, you must have certain `s3tables` permissions. At a minimum, to create a metadata table configuration, you must have the following `s3tables` permissions: 
+ `s3tables:CreateTableBucket` – This permission allows you to create an AWS managed table bucket. All metadata table configurations in your account and in the same Region are stored in a single AWS managed table bucket named `aws-s3`. For more information, see [How metadata tables work](metadata-tables-overview.md#metadata-tables-how-they-work) and [Working with AWS managed table buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-aws-managed-buckets.html).
+ `s3tables:CreateNamespace` – This permission allows you to create a namespace in a table bucket. Metadata tables typically use the `b_general_purpose_bucket_name` namespace. For more information about metadata table namespaces, see [How metadata tables work](metadata-tables-overview.md#metadata-tables-how-they-work).
+ `s3tables:CreateTable` – This permission allows you to create your metadata tables.
+ `s3tables:GetTable` – This permission allows you to retrieve information about your metadata tables.
+ `s3tables:PutTablePolicy` – This permission allows you to add or update your metadata table policies.
+ `s3tables:PutTableEncryption` – This permission allows you to set server-side encryption for your metadata tables. Additional permissions are required if you want to encrypt your metadata tables with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS). For more information, see [Permissions for SSE-KMS](#metadata-kms-permissions). 
+ `kms:DescribeKey` – This permission allows you to retrieve information about a KMS key. 
+ `s3tables:PutTableBucketPolicy` – This permission allows you to create or update a new table bucket policy.

For detailed information about all table and table bucket permissions, see [Access management for S3 Tables](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-setting-up.html).

**Important**  
If you also want to integrate your table bucket with AWS analytics services so that you can query your metadata table, you need additional permissions. For more information, see [Integrating Amazon S3 Tables with AWS analytics services](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-integrating-aws.html).

**Permissions for SSE-KMS**  
To encrypt your metadata tables with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), you must have additional permissions. 

1. The user or AWS Identity and Access Management (IAM) role needs the following permissions. You can grant these permissions by using the IAM console: [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

   1. `s3tables:PutTableEncryption` to configure table encryption

   1. `kms:DescribeKey` on the AWS KMS key used

1. On the resource policy for the KMS key, you need the following permissions. You can grant these permissions by using the AWS KMS console: [https://console.aws.amazon.com/kms](https://console.aws.amazon.com/kms).

   1. Grant `kms:GenerateDataKey` permission to `metadata.s3.amazonaws.com` and `maintenance.s3tables.amazonaws.com`.

   1. Grant `kms:Decrypt` permission to `metadata.s3.amazonaws.com` and `maintenance.s3tables.amazonaws.com`.

   1. Grant `kms:DescribeKey` permission to the invoking AWS principal.

In addition to these permissions, make sure that the customer managed KMS key used to encrypt the tables still exists, is active, and is in the same Region as your general purpose bucket.
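As an illustrative sketch, a key policy statement that grants the service principals the permissions from steps a and b might look like the following. The `Sid` value is a placeholder, and the `kms:DescribeKey` grant for the invoking principal from step c would go in a separate statement.

```
{
    "Sid": "AllowS3MetadataServicePrincipals",
    "Effect": "Allow",
    "Principal": {
        "Service": [
            "metadata.s3.amazonaws.com",
            "maintenance.s3tables.amazonaws.com"
        ]
    },
    "Action": [
        "kms:GenerateDataKey",
        "kms:Decrypt"
    ],
    "Resource": "*"
}
```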

**Example policy**  
To create and work with metadata tables and table buckets, you can use the following example policy. In this policy, the general purpose bucket that you're applying the metadata table configuration to is referred to as `amzn-s3-demo-bucket`. To use this policy, replace the `user input placeholders` with your own information. 

When you create your metadata table configuration, your metadata tables are stored in an AWS managed table bucket. All metadata table configurations in your account and in the same Region are stored in a single AWS managed table bucket named `aws-s3`. 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PermissionsToWorkWithMetadataTables",
            "Effect": "Allow",
            "Action": [
                "s3:CreateBucketMetadataTableConfiguration",
                "s3:GetBucketMetadataTableConfiguration",
                "s3:DeleteBucketMetadataTableConfiguration",
                "s3:UpdateBucketMetadataJournalTableConfiguration",
                "s3:UpdateBucketMetadataInventoryTableConfiguration",
                "s3tables:*",
                "kms:DescribeKey"
            ],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket",
                "arn:aws:s3tables:us-east-1:111122223333:bucket/aws-s3",
                "arn:aws:s3tables:us-east-1:111122223333:bucket/aws-s3/table/*"
            ]
        }
    ]
}
```

------

To query metadata tables, you can use the following example policy. If your metadata tables have been encrypted with SSE-KMS, you will need the `kms:Decrypt` permission as shown. To use this policy, replace the `user input placeholders` with your own information.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PermissionsToQueryMetadataTables",
            "Effect": "Allow",
            "Action": [
                "s3tables:GetTable",
                "s3tables:GetTableData",
                "s3tables:GetTableMetadataLocation",
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws:s3tables:us-east-1:111122223333:bucket/aws-s3",
                "arn:aws:s3tables:us-east-1:111122223333:bucket/aws-s3/table/*"
            ]
        }
    ]
}
```

------

# Creating metadata table configurations
<a name="metadata-tables-create-configuration"></a>

To generate and store Amazon S3 Metadata in fully managed Apache Iceberg metadata tables, you create a metadata table configuration for your general purpose bucket. Amazon S3 is designed to continuously update the metadata tables to reflect the latest changes to your data as long as the configuration is active on the bucket. Additionally, Amazon S3 continuously optimizes your metadata tables to help reduce storage costs and improve analytics query performance.

For each general purpose bucket, you can create a metadata table configuration that contains two complementary metadata tables:
+ **Journal table** – By default, your metadata table configuration contains a *journal table*, which captures events that occur for the objects in your bucket. The journal table records changes made to your data in near real time, helping you to identify new data uploaded to your bucket, track recently deleted objects, monitor lifecycle transitions, and more. The journal table records new objects and updates to your objects and their metadata (those updates that require either a `PUT` or a `DELETE` operation). 

  The journal table captures metadata only for change events (such as uploads, updates, and deletes) that happen after you create your metadata table configuration. Because this table is queryable, you can audit the changes to your bucket through simple SQL queries. 

  The journal table is required for each metadata table configuration. (In the initial release of S3 Metadata, the journal table was referred to as "the metadata table.")

  For more information about what data is stored in journal tables, see [S3 Metadata journal tables schema](metadata-tables-schema.md).

  To help minimize your storage costs, you can choose to enable journal table record expiration. For more information, see [Expiring journal table records](metadata-tables-expire-journal-table-records.md). 
+ **Live inventory table** – Optionally, you can add a *live inventory table* to your metadata table configuration. The live inventory table provides a simple, queryable inventory of all the objects and their versions in your bucket so that you can determine the latest state of your data. 

  You can use the live inventory table to simplify and speed up business workflows and big data jobs by identifying objects that you want to process for various workloads. For example, you can query the live inventory table to find all objects stored in a particular storage class, all objects with certain tags, all objects that aren't encrypted with server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS), and more. 

  When you enable the live inventory table for your metadata table configuration, the table goes through a process known as *backfilling*, during which Amazon S3 scans your general purpose bucket to retrieve the initial metadata for all objects that exist in the bucket. Depending on the number of objects in your bucket, this process can take anywhere from 15 minutes to several hours. When the backfilling process is finished, the status of your live inventory table changes from **Backfilling** to **Active**. After backfilling is completed, updates to your objects are typically reflected in the live inventory table within one hour.

  You're charged for backfilling your live inventory table. If your general purpose bucket has more than one billion objects, you're also charged a monthly fee for your live inventory table. For more information, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

  For more information about what data is stored in live inventory tables, see [S3 Metadata live inventory tables schema](metadata-tables-inventory-schema.md).
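For example, after integrating your AWS managed table bucket with AWS analytics services, you could run queries like the following sketches in Amazon Athena. The table and column names here are illustrative — verify the qualified table names for your query engine and the column names against the journal and live inventory table schemas.

```
-- Journal table sketch: objects deleted in the last 7 days
SELECT bucket, key, sequence_number
FROM "b_amzn-s3-demo-bucket"."journal"
WHERE record_type = 'DELETE'
  AND record_timestamp > (current_timestamp - interval '7' day);

-- Live inventory table sketch: all objects in a given storage class
SELECT key, size, storage_class
FROM "b_amzn-s3-demo-bucket"."inventory"
WHERE storage_class = 'GLACIER';
```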

Metadata tables have the following Amazon Resource Name (ARN) format, which includes the table ID of the metadata table: 

`arn:aws:s3tables:region-code:account-id:bucket/aws-s3/table/table-id`

For example, a metadata table in the US East (N. Virginia) Region would have an ARN like the following:

`arn:aws:s3tables:us-east-1:111122223333:bucket/aws-s3/table/a12bc345-67d8-912e-3456-7f89123g4h56`

Journal tables have the name `journal`, and live inventory tables have the name `inventory`.

When you create your metadata table configuration, your metadata tables are stored in an AWS managed table bucket. All metadata table configurations in your account and in the same Region are stored in a single AWS managed table bucket. These AWS managed table buckets are named `aws-s3` and have the following Amazon Resource Name (ARN) format: 

`arn:aws:s3tables:region:account_id:bucket/aws-s3`

For example, if your account ID is 123456789012 and your general purpose bucket is in US East (N. Virginia) (`us-east-1`), your AWS managed table bucket is also created in US East (N. Virginia) (`us-east-1`) and has the following ARN:

`arn:aws:s3tables:us-east-1:123456789012:bucket/aws-s3`

By default, AWS managed table buckets are encrypted with server-side encryption using Amazon S3 managed keys (SSE-S3). After you create your first metadata configuration, you can set the default encryption setting for the AWS managed table bucket to use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS). For more information, see [Encryption for AWS managed table buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-aws-managed-buckets.html#aws-managed-buckets-encryption) and [Specifying server-side encryption with AWS KMS keys (SSE-KMS) in table buckets](s3-tables-kms-specify.md).

Within your AWS managed table bucket, the metadata tables for your configuration are typically stored in a namespace with the following naming format:

`b_general-purpose-bucket-name`

For more information about metadata table namespaces, see [How metadata tables work](metadata-tables-overview.md#metadata-tables-how-they-work).

When you create your metadata table configuration, you can choose to encrypt your AWS managed metadata tables with server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS). If you choose to use SSE-KMS, you must provide a customer managed KMS key in the same Region as your general purpose bucket. You can set the encryption type for your tables only during table creation. After an AWS managed table is created, you can't change its encryption setting. To specify SSE-KMS for your metadata tables, you must have certain permissions. For more information, see [Permissions for SSE-KMS](metadata-tables-permissions.md#metadata-kms-permissions).

The encryption setting for a metadata table takes precedence over the default bucket-level encryption setting. If you don't specify encryption for a table, it will inherit the default encryption setting from the bucket.

AWS managed table buckets don't count toward your S3 Tables quotas. For more information about working with AWS managed table buckets and AWS managed tables, see [Working with AWS managed table buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-aws-managed-buckets.html). 

You can create a metadata table configuration by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the Amazon S3 REST API.

**Note**  
If you created your S3 Metadata configuration before July 15, 2025, we recommend that you delete and re-create your configuration so that you can expire journal table records and create an inventory table. For more information, see [Enabling inventory tables on metadata configurations created before July 15, 2025](#metadata-tables-migration).
If you've deleted your metadata table configuration and want to re-create a configuration for the same general purpose bucket, you must first manually delete the old journal and inventory tables from your AWS managed table bucket. Otherwise, creating the new metadata table configuration fails because those tables already exist. To delete your metadata tables, see [Delete a metadata table](metadata-tables-delete-table.md#delete-metadata-table-procedure).  
Deleting a metadata table configuration deletes only the configuration. The AWS managed table bucket and your metadata tables still exist, even if you delete the metadata table configuration. 

**Prerequisites**  
Before you create a metadata table configuration, make sure that you've met the following prerequisites:
+ Make sure that you have the necessary AWS Identity and Access Management (IAM) permissions to create and manage metadata tables. For more information, see [Setting up permissions for configuring metadata tables](metadata-tables-permissions.md).
+ If you plan to query your metadata tables with Amazon Athena or another AWS query engine, make sure that you integrate your AWS managed table bucket with AWS analytics services. For more information, see [Integrating Amazon S3 Tables with AWS analytics services](s3-tables-integrating-aws.md). 

  If you've already integrated an existing table bucket in this Region, your AWS managed table bucket is also automatically integrated. To determine the integration status for your table buckets in this Region, open the Amazon S3 console, and choose **Table buckets** in the left navigation pane. Under **Integration with AWS analytics services**, check the Region and whether the integration status says **Enabled**.

## Create a metadata table configuration
<a name="create-metadata-config-procedure"></a>

### Using the S3 console
<a name="create-metadata-config-console"></a>

**To create a metadata table configuration**

Before you create a metadata table configuration, make sure that you've reviewed and met the [prerequisites](#metadata-table-config-prereqs) and that you've reviewed [Metadata table limitations and restrictions](metadata-tables-restrictions.md).

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. Choose the general purpose bucket that you want to create a metadata table configuration for. 
**Note**  
Make sure that this general purpose bucket is in an AWS Region where table buckets are available. Table buckets are available only in the US East (N. Virginia), US East (Ohio), and US West (Oregon) Regions.

1. On the bucket's details page, choose the **Metadata** tab. 

1. On the **Metadata** tab, choose **Create metadata configuration**.

1. On the **Create metadata configuration** page, under **Journal table**, you can choose whether to encrypt your table with server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS). By default, journal tables are encrypted with server-side encryption using Amazon S3 managed keys (SSE-S3).

   If you choose to use SSE-KMS, you must provide a customer managed KMS key in the same Region as your general purpose bucket. 
**Important**  
You can set the encryption type for your metadata tables only during table creation. After an AWS managed table is created, you can't change its encryption setting.
   + To encrypt your journal table with SSE-S3 (the default), choose **Don't specify encryption type**. 
   + To encrypt your journal table with SSE-KMS, choose **Specify encryption type**. Under **Encryption type**, choose **Server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS)**. Under **AWS KMS key**, either choose from your existing KMS keys, or enter your KMS key ARN. If you don't already have a KMS key, choose **Enter KMS key ARN**, and then choose **Create a KMS key**. 

     Make sure that you've set up the necessary permissions for SSE-KMS. For more information, see [Permissions for SSE-KMS](metadata-tables-permissions.md#metadata-kms-permissions).

1. (Optional) By default, the records in your journal table don't expire. To help minimize the storage costs for your journal table, choose **Enabled** for **Record expiration**. 

   If you enable journal table record expiration, you can set the number of days to retain your journal table records. To set the **Days after which records expire** value, you can specify any whole number between `7` and `2147483647`. For example, to retain your journal table records for one year, set this value to `365`.

   Records will be expired within 24 to 48 hours after they become eligible for expiration. 
**Important**  
After journal table records expire, they can't be recovered.

   To confirm, select the **Journal table records will expire after the specified number of days** checkbox.

1. (Optional) If you want to add an inventory table to your metadata table configuration, under **Live inventory table**, choose **Enabled** for **Configuration status**.

   You can choose whether to encrypt your table with server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS). By default, inventory tables are encrypted with server-side encryption using Amazon S3 managed keys (SSE-S3).

   If you choose to use SSE-KMS, you must provide a customer managed KMS key in the same Region as your general purpose bucket. 
**Important**  
You can set the encryption type for your metadata tables only during table creation. After an AWS managed table is created, you can't change its encryption setting.
   + To encrypt your inventory table with SSE-S3 (the default), choose **Don't specify encryption type**. 
   + To encrypt your inventory table with SSE-KMS, choose **Specify encryption type**. Under **Encryption type**, choose **Server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS)**. Under **AWS KMS key**, either choose from your existing KMS keys, or enter your KMS key ARN. If you don't already have a KMS key, choose **Enter KMS key ARN**, and then choose **Create a KMS key**.

     Make sure that you've set up the necessary permissions for SSE-KMS. For more information, see [Permissions for SSE-KMS](metadata-tables-permissions.md#metadata-kms-permissions).

1. Choose **Create metadata table configuration**.

If your metadata table configuration was successful, the names and ARNs for your metadata tables are displayed on the **Metadata** tab, along with the name of your AWS managed table bucket and namespace. 

If you chose to enable an inventory table for your metadata table configuration, the table goes through a process known as *backfilling*, during which Amazon S3 scans your general purpose bucket to retrieve the initial metadata for all objects that exist in the bucket. Depending on the number of objects in your bucket, this process can take anywhere from 15 minutes to several hours. When the backfilling process is finished, the status of your inventory table changes from **Backfilling** to **Active**. After backfilling is completed, updates to your objects are typically reflected in the inventory table within one hour.

To monitor updates to your metadata table configuration, you can use AWS CloudTrail. For more information, see [Amazon S3 bucket-level actions that are tracked by CloudTrail logging](cloudtrail-logging-s3-info.md#cloudtrail-bucket-level-tracking).

### Using the AWS CLI
<a name="create-metadata-config-cli"></a>

To run the following commands, you must have the AWS CLI installed and configured. If you don’t have the AWS CLI installed, see [Install or update to the latest version of the AWS CLI](https://docs.aws.amazon.com//cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

Alternatively, you can run AWS CLI commands from the console by using AWS CloudShell. AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console. For more information, see [What is CloudShell?](https://docs.aws.amazon.com//cloudshell/latest/userguide/welcome.html) and [Getting started with AWS CloudShell](https://docs.aws.amazon.com//cloudshell/latest/userguide/getting-started.html) in the *AWS CloudShell User Guide*.

**To create a metadata table configuration by using the AWS CLI**

Before you create a metadata table configuration, make sure that you've reviewed and met the [prerequisites](#metadata-table-config-prereqs) and that you've reviewed [Metadata table limitations and restrictions](metadata-tables-restrictions.md).

To use the following example commands, replace the `user input placeholders` with your own information. 

1. Create a JSON file that contains your metadata table configuration, and save it (for example, `metadata-config.json`). The following is a sample configuration. 

   You must specify whether to enable or disable journal table record expiration. If you choose to enable record expiration, you must also specify the number of days after which your journal table records will expire. To set the `Days` value, you can specify any whole number between `7` and `2147483647`. For example, to retain your journal table records for one year, set this value to `365`.

   You can optionally choose to configure an inventory table. 

   For both journal tables and inventory tables, you can optionally specify an encryption configuration. By default, metadata tables are encrypted with server-side encryption using Amazon S3 managed keys (SSE-S3), which you can specify by setting `SseAlgorithm` to `AES256`.

   To encrypt your metadata tables with server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS), set `SseAlgorithm` to `aws:kms`. You must also set `KmsKeyArn` to the ARN of a customer managed KMS key in the same Region where your general purpose bucket is located.

   ```
   {
     "JournalTableConfiguration": {
       "RecordExpiration": {
         "Expiration": "ENABLED",
         "Days": 10
       },
       "EncryptionConfiguration": {
         "SseAlgorithm": "AES256"
       }
     },
     "InventoryTableConfiguration": {
       "ConfigurationState": "ENABLED",
       "EncryptionConfiguration": {
         "SseAlgorithm": "aws:kms",
         "KmsKeyArn": "arn:aws:kms:us-east-2:account-id:key/key-id"
       }
     }
   }
   ```

1. Use the following command to apply the metadata table configuration to your general purpose bucket (for example, `amzn-s3-demo-bucket`):

   ```
   aws s3api create-bucket-metadata-configuration \
   --bucket amzn-s3-demo-bucket \
   --metadata-configuration file://./metadata-config.json \
   --region us-east-2
   ```

1. To verify that the configuration was created, use the following command:

   ```
   aws s3api get-bucket-metadata-configuration \
   --bucket amzn-s3-demo-bucket \
   --region us-east-2
   ```

To monitor updates to your metadata table configuration, you can use AWS CloudTrail. For more information, see [Amazon S3 bucket-level actions that are tracked by CloudTrail logging](cloudtrail-logging-s3-info.md#cloudtrail-bucket-level-tracking).
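Because an invalid `Days` value or `SseAlgorithm` causes the `create-bucket-metadata-configuration` call to fail, it can be convenient to sanity-check the JSON file locally first. The following is a minimal, illustrative sketch (not an official AWS validator) of the constraints described in step 1:

```
# Illustrative local sanity check for a metadata-config.json file.
# The constraints mirror the ones described above; this is a sketch,
# not an official AWS validator.
import json

VALID_SSE_ALGORITHMS = {"AES256", "aws:kms"}

def validate_metadata_config(config):
    """Return a list of problems found (an empty list means the config looks OK)."""
    problems = []
    journal = config.get("JournalTableConfiguration")
    if journal is None:
        problems.append("JournalTableConfiguration is required")
    else:
        expiration = journal.get("RecordExpiration", {})
        state = expiration.get("Expiration")
        if state not in ("ENABLED", "DISABLED"):
            problems.append("RecordExpiration.Expiration must be ENABLED or DISABLED")
        elif state == "ENABLED":
            days = expiration.get("Days")
            # Whole number between 7 and 2147483647, per the documented range
            if not isinstance(days, int) or not 7 <= days <= 2147483647:
                problems.append("Days must be a whole number between 7 and 2147483647")
    for section in ("JournalTableConfiguration", "InventoryTableConfiguration"):
        encryption = (config.get(section) or {}).get("EncryptionConfiguration")
        if encryption is None:
            continue  # SSE-S3 default applies when no encryption is specified
        algorithm = encryption.get("SseAlgorithm")
        if algorithm not in VALID_SSE_ALGORITHMS:
            problems.append(section + ": SseAlgorithm must be AES256 or aws:kms")
        if algorithm == "aws:kms" and not encryption.get("KmsKeyArn", "").startswith("arn:aws:kms:"):
            problems.append(section + ": aws:kms requires a KmsKeyArn")
    return problems

# Example: validate the file before running the AWS CLI command
# with open("metadata-config.json") as f:
#     print(validate_metadata_config(json.load(f)))
```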

### Using the REST API
<a name="create-metadata-config-rest-api"></a>

You can send REST requests to create a metadata table configuration. For more information, see [https://docs.aws.amazon.com//AmazonS3/latest/API/API_CreateBucketMetadataConfiguration.html](https://docs.aws.amazon.com//AmazonS3/latest/API/API_CreateBucketMetadataConfiguration.html) in the *Amazon S3 API Reference*.

### Using the AWS SDKs
<a name="create-metadata-config-sdk"></a>

You can use the AWS SDKs to create a metadata table configuration in Amazon S3. For information, see the [list of supported SDKs](https://docs.aws.amazon.com//AmazonS3/latest/API/API_CreateBucketMetadataConfiguration.html#API_CreateBucketMetadataConfiguration_SeeAlso) in the *Amazon S3 API Reference*.

## Enabling inventory tables on metadata configurations created before July 15, 2025
<a name="metadata-tables-migration"></a>

If you created your S3 Metadata configuration before July 15, 2025, we recommend that you delete and re-create your configuration so that you can expire journal table records and create an inventory table. Any changes to your general purpose bucket that occur between deleting the old configuration and creating the new one aren't recorded in either of your journal tables.

To migrate from an old metadata configuration to a new configuration, do the following:

1. Delete your existing metadata table configuration. For step-by-step instructions, see [Deleting metadata table configurations](metadata-tables-delete-configuration.md). 

1. Create a new metadata table configuration. For step-by-step instructions, see [Creating metadata table configurations](#metadata-tables-create-configuration).

If you need assistance with migrating your configuration, contact AWS Support. 

After you create your new metadata configuration, you will have two journal tables. If you no longer need the old journal table, you can delete it. For step-by-step instructions, see [Deleting metadata tables](metadata-tables-delete-table.md). If you've retained your old journal table and want to join it with your new one, see [Joining custom metadata with S3 metadata tables](metadata-tables-join-custom-metadata.md) for examples of how to join two tables.

After migration, you can do the following:

1. To view your configuration, you can now use the `GetBucketMetadataConfiguration` API operation. To determine whether your configuration is old or new, you can look at the following attribute of your `GetBucketMetadataConfiguration` API response. An AWS managed bucket type (`"aws"`) indicates a new configuration, and a customer-managed bucket type (`"customer"`) indicates an old configuration.

   ```
   "MetadataTableConfigurationResult": {
       "TableBucketType": "aws" | "customer"
   }
   ```

   For more information, see [Viewing metadata table configurations](metadata-tables-view-configuration.md).
**Note**  
You can use the `GetBucketMetadataConfiguration` and `DeleteBucketMetadataConfiguration` API operations with old or new metadata table configurations. However, if you try to use the `GetBucketMetadataTableConfiguration` and `DeleteBucketMetadataTableConfiguration` API operations with new configurations, you will receive HTTP `405 Method Not Allowed` errors.  
Make sure that you update your processes to use the new API operations (`CreateBucketMetadataConfiguration`, `GetBucketMetadataConfiguration`, and `DeleteBucketMetadataConfiguration`) instead of the old API operations. 

1. If you plan to query your metadata tables with Amazon Athena or another AWS query engine, make sure that you integrate your AWS managed table bucket with AWS analytics services. If you've already integrated an existing table bucket in this Region, your AWS managed table bucket is also automatically integrated. For more information, see [Integrating Amazon S3 Tables with AWS analytics services](s3-tables-integrating-aws.md).
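The `TableBucketType` check in step 1 can be scripted against the parsed API response. A minimal sketch, assuming the response dictionary nests the attribute as shown above:

```
def table_bucket_type(response):
    """Return "aws" (new configuration), "customer" (old configuration), or None."""
    result = response.get("MetadataTableConfigurationResult", {})
    return result.get("TableBucketType")
```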

# Controlling access to metadata tables
<a name="metadata-tables-access-control"></a>

To control access to your Amazon S3 metadata tables, you can use AWS Identity and Access Management (IAM) resource-based policies that are attached to your table bucket and to your metadata tables. In other words, you can control access to your metadata tables at both the table bucket level and the table level. 

For more information about controlling access to your table buckets and tables, see [Access management for S3 Tables](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-setting-up.html).

**Important**  
When you're creating or updating table bucket or table policies, make sure that you don't restrict the Amazon S3 service principals `metadata.s3.amazonaws.com` and `maintenance.s3tables.amazonaws.com` from writing to your table bucket or your metadata tables.   
If Amazon S3 is unable to write to your table bucket or your metadata tables, you must delete your metadata configuration, delete your metadata tables, and then create a new configuration. If your configuration included an inventory table, Amazon S3 creates a new inventory table, and you're charged again for backfilling it.

You can also control access to the rows and columns in your metadata tables through AWS Lake Formation. For more information, see [Managing Lake Formation permissions](https://docs.aws.amazon.com/lake-formation/latest/dg/managing-permissions.html) and [Data filtering and cell-level security in Lake Formation](https://docs.aws.amazon.com/lake-formation/latest/dg/data-filtering.html) in the *AWS Lake Formation Developer Guide*.

# Expiring journal table records
<a name="metadata-tables-expire-journal-table-records"></a>

By default, the records in your journal table don't expire. To help minimize the storage costs for your journal table, you can enable journal table record expiration. 

**Note**  
If you created your S3 Metadata configuration before July 15, 2025, you can't enable journal table record expiration on that configuration. We recommend that you delete and re-create your configuration so that you can expire journal table records and create an inventory table. For more information, see [Enabling inventory tables on metadata configurations created before July 15, 2025](metadata-tables-create-configuration.md#metadata-tables-migration).

If you enable journal table record expiration, you can set the number of days to retain your journal table records. To set this value, specify any whole number between `7` and `2147483647`. For example, to retain your journal table records for one year, set this value to `365`.
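If you're generating this configuration programmatically, you can validate the retention value before calling the API. The following is a minimal Python sketch; `journal_record_expiration` is a hypothetical helper, not part of any AWS SDK, and it builds the same `RecordExpiration` shape that the CLI examples in this section use.

```python
def journal_record_expiration(days=None):
    """Build a RecordExpiration configuration payload.

    Pass days=None to disable expiration; otherwise days must be a
    whole number between 7 and 2147483647 (the 32-bit signed maximum).
    """
    if days is None:
        return {"RecordExpiration": {"Expiration": "DISABLED"}}
    if not isinstance(days, int) or not 7 <= days <= 2_147_483_647:
        raise ValueError("days must be a whole number between 7 and 2147483647")
    return {"RecordExpiration": {"Expiration": "ENABLED", "Days": days}}

# Retain journal records for one year:
print(journal_record_expiration(365))
# {'RecordExpiration': {'Expiration': 'ENABLED', 'Days': 365}}
```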

**Important**  
After journal table records expire, they can't be recovered.

Records are expired within 24 to 48 hours after they become eligible for expiration. Expired journal records are removed from the latest snapshot, and the underlying data and storage for those records are removed through table maintenance operations.

If you've enabled journal table record expiration, you can disable it at any time to stop expiring your journal table records.

You can expire journal table records by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the Amazon S3 REST API.

## How to expire journal table records
<a name="metadata-tables-expire-journal-table-records-procedure"></a>

### Using the S3 console
<a name="metadata-tables-expire-journal-table-records-console"></a>

**To expire journal table records**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. Choose the general purpose bucket that contains the metadata table configuration with the journal table that you want to expire records from. 

1. On the bucket's details page, choose the **Metadata** tab. 

1. On the **Metadata** tab, choose **Edit**, then choose **Edit journal table record expiration**.

1. On the **Edit journal table record expiration** page, choose **Enabled** under **Record expiration**.

1. Set the number of days to retain your journal table records. To set the **Days after which records expire** value, specify any whole number between `7` and `2147483647`. For example, to retain your journal table records for one year, set this value to `365`.
**Important**  
After journal table records expire, they can't be recovered.

1. Under **Journal table records will expire after the specified number of days**, select the checkbox. 

1. Choose **Save changes**. 

If you want to disable journal table record expiration, repeat the preceding steps, but choose **Disabled** instead of **Enabled** in step 6. 

### Using the AWS CLI
<a name="metadata-tables-expire-journal-table-records-cli"></a>

To run the following commands, you must have the AWS CLI installed and configured. If you don't have the AWS CLI installed, see [Install or update to the latest version of the AWS CLI](https://docs.aws.amazon.com//cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

You can also run AWS CLI commands from the console by using AWS CloudShell. AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console. For more information, see [What is CloudShell?](https://docs.aws.amazon.com//cloudshell/latest/userguide/welcome.html) and [Getting started with AWS CloudShell](https://docs.aws.amazon.com//cloudshell/latest/userguide/getting-started.html) in the *AWS CloudShell User Guide*.

**To expire journal table records by using the AWS CLI**

To use the following example commands, replace the `user input placeholders` with your own information. 

1. Create a JSON file that contains your journal table configuration, and save it (for example, `journal-config.json`). The following is a sample configuration. 

   To set the `Days` value, specify any whole number between `7` and `2147483647`. For example, to retain your journal table records for one year, set this value to `365`.

   ```
   {
     "RecordExpiration": {
       "Expiration": "ENABLED",
       "Days": 10
     }
   }
   ```

   To disable journal table record expiration, use the following sample configuration instead. If `Expiration` is set to `DISABLED`, don't specify a `Days` value in the configuration.

   ```
   {
     "RecordExpiration": {
       "Expiration": "DISABLED"
     }
   }
   ```

1. Use the following command to apply the journal table record expiration configuration to your general purpose bucket (for example, `amzn-s3-demo-bucket`):

   ```
   aws s3api update-bucket-metadata-journal-table-configuration \
   --bucket amzn-s3-demo-bucket \
   --journal-table-configuration file://./journal-config.json \
   --region us-east-2
   ```

### Using the REST API
<a name="metadata-tables-expire-journal-table-records-rest-api"></a>

You can send REST requests to expire journal table records. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateBucketMetadataJournalTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateBucketMetadataJournalTableConfiguration.html).

### Using the AWS SDKs
<a name="metadata-tables-expire-journal-table-records-sdk"></a>

You can use the AWS SDKs to expire journal table records in Amazon S3. For information, see the [list of supported SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateBucketMetadataJournalTableConfiguration.html#API_UpdateBucketMetadataJournalTableConfiguration_SeeAlso).

# Enabling or disabling live inventory tables
<a name="metadata-tables-enable-disable-inventory-tables"></a>

By default, your metadata table configuration contains a *journal table*, which records the events that occur for the objects in your bucket. The journal table is required for each metadata table configuration. 

Optionally, you can add a *live inventory table* to your metadata table configuration. The live inventory table provides a simple, queryable inventory of all the objects and their versions in your bucket so that you can determine the latest state of your data.

**Note**  
If you created your S3 Metadata configuration before July 15, 2025, you can't enable an inventory table on that configuration. We recommend that you delete and re-create your configuration so that you can create an inventory table and expire journal table records. For more information, see [Enabling inventory tables on metadata configurations created before July 15, 2025](metadata-tables-create-configuration.md#metadata-tables-migration).

The inventory table contains the latest metadata for all objects in your bucket. You can use this table to simplify and speed up business workflows and big data jobs by identifying objects that you want to process for various workloads. For example, you can query the inventory table to do the following: 
+ Find all objects stored in the S3 Glacier Deep Archive storage class.
+ Create a distribution of object tags or find objects without tags.
+ Find all objects that aren't encrypted by using server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS). 
+ Compare your inventory table at two different points in time to understand the growth in objects with specific tags.
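For example, a query for the first case might look like the following in Amazon Athena, assuming that your table bucket is integrated with the AWS Glue Data Catalog. The catalog, namespace, and column names here are illustrative; check your own inventory table schema for the exact names.

```sql
-- Find all objects currently stored in the S3 Glacier Deep Archive storage class
SELECT key, size
FROM "s3tablescatalog/aws-managed-s3-111122223333-us-east-2"."b_amzn-s3-demo-bucket"."inventory"
WHERE storage_class = 'DEEP_ARCHIVE';
```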

If you choose to enable an inventory table for your metadata table configuration, the table goes through a process known as *backfilling*, during which Amazon S3 scans your general purpose bucket to retrieve the initial metadata for all objects that exist in the bucket. Depending on the number of objects in your bucket, this process can take anywhere from a minimum of 15 minutes to several hours. When the backfilling process is finished, the status of your inventory table changes from **Backfilling** to **Active**. After backfilling is completed, updates to your objects are typically reflected in the inventory table within one hour.

**Note**  
You're charged for backfilling your inventory table. If your general purpose bucket has more than one billion objects, you're also charged a monthly fee for your inventory table. For more information, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).
You can't pause updates to your inventory table and then resume them. However, you can disable the inventory table configuration. Disabling the inventory table doesn't delete it. The inventory table is retained for your records until you decide to delete it.   
If you've disabled your inventory table and later want to re-enable it, you must first delete the old inventory table from your AWS managed table bucket. When you re-enable the inventory table configuration, Amazon S3 creates a new inventory table, and you're charged again for backfilling the new inventory table.

You can enable or disable inventory tables by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the Amazon S3 REST API.

**Prerequisites**  
<a name="inventory-table-config-prereqs"></a>
If you've disabled your inventory table and now want to re-enable it, you must first manually delete the old inventory table from your AWS managed table bucket. Otherwise, re-enabling the inventory table fails because an inventory table already exists in the table bucket. To delete your inventory table, see [Delete a metadata table](metadata-tables-delete-table.md#delete-metadata-table-procedure). 

When you re-enable the inventory table configuration, Amazon S3 creates a new inventory table, and you're charged again for backfilling the new inventory table. 

## Enable or disable inventory tables
<a name="metadata-tables-enable-disable-inventory-tables-procedure"></a>

### Using the S3 console
<a name="metadata-tables-enable-disable-inventory-tables-console"></a>

**To enable or disable inventory tables**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. Choose the general purpose bucket with the metadata table configuration that you want to enable or disable an inventory table for.

1. On the bucket's details page, choose the **Metadata** tab. 

1. On the **Metadata** tab, choose **Edit**, then choose **Edit inventory table configuration**.

1. On the **Edit inventory table configuration** page, choose **Enabled** or **Disabled** under **Inventory table**.
**Note**  
Before you choose **Enabled**, make sure that you've reviewed and met the [prerequisites](#inventory-table-config-prereqs). 
   + If you chose **Enabled**, you can choose whether to encrypt your table with server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS). By default, inventory tables are encrypted with server-side encryption using Amazon S3 managed keys (SSE-S3).

     If you choose to use SSE-KMS, you must provide a customer managed KMS key in the same Region as your general purpose bucket. 
**Important**  
You can set the encryption type for your metadata tables only during table creation. After an AWS managed table is created, you can't change its encryption setting.
     + To encrypt your inventory table with SSE-S3 (the default), choose **Don't specify encryption type**. 
     + To encrypt your inventory table with SSE-KMS, choose **Specify encryption type**. Under **Encryption type**, choose **Server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS)**. Under **AWS KMS key**, either choose from your existing KMS keys, or enter your KMS key ARN. If you don't already have a KMS key, choose **Enter KMS key ARN**, and then choose **Create a KMS key**.
   + If you chose **Disabled**, under **After the inventory table is disabled, the table will no longer be updated, and updates can't be resumed**, select the checkbox.

1. Choose **Save changes**.

### Using the AWS CLI
<a name="metadata-tables-enable-disable-inventory-tables-cli"></a>

To run the following commands, you must have the AWS CLI installed and configured. If you don’t have the AWS CLI installed, see [Install or update to the latest version of the AWS CLI](https://docs.aws.amazon.com//cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

Alternatively, you can run AWS CLI commands from the console by using AWS CloudShell. AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console. For more information, see [What is CloudShell?](https://docs.aws.amazon.com//cloudshell/latest/userguide/welcome.html) and [Getting started with AWS CloudShell](https://docs.aws.amazon.com//cloudshell/latest/userguide/getting-started.html) in the *AWS CloudShell User Guide*.

**To enable or disable inventory tables by using the AWS CLI**

To use the following example commands, replace the `user input placeholders` with your own information. 
**Note**  
Before enabling an inventory configuration, make sure that you've reviewed and met the [prerequisites](#inventory-table-config-prereqs). 

1. Create a JSON file that contains your inventory table configuration, and save it (for example, `inventory-config.json`). The following is a sample configuration to enable a new inventory table.

   If you're enabling an inventory table, you can optionally specify an encryption configuration. By default, metadata tables are encrypted with server-side encryption using Amazon S3 managed keys (SSE-S3), which you can specify by setting `SseAlgorithm` to `AES256`.

   To encrypt your inventory table with server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS), set `SseAlgorithm` to `aws:kms`. You must also set `KmsKeyArn` to the ARN of a customer managed KMS key in the same Region where your general purpose bucket is located.

   ```
   {
     "ConfigurationState": "ENABLED",
     "EncryptionConfiguration": {       
       "SseAlgorithm": "aws:kms",
       "KmsKeyArn": "arn:aws:kms:us-east-2:account-id:key/key-id"
     }  
   }
   ```

   If you want to disable an existing inventory table, use the following configuration: 

   ```
   {
     "ConfigurationState": "DISABLED"
   }
   ```

1. Use the following command to update the inventory table configuration for your general purpose bucket (for example, `amzn-s3-demo-bucket`):

   ```
   aws s3api update-bucket-metadata-inventory-table-configuration \
   --bucket amzn-s3-demo-bucket \
   --inventory-table-configuration file://./inventory-config.json \
   --region us-east-2
   ```
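If you're building this configuration in code rather than in a JSON file, a small helper can keep the enabled and disabled states consistent. The following Python sketch is illustrative (the function name isn't part of any AWS SDK); it produces the same shapes as the JSON examples above.

```python
def inventory_table_configuration(enabled=True, kms_key_arn=None):
    """Build an inventory table configuration payload.

    With enabled=False, returns the DISABLED shape (no encryption settings).
    With a kms_key_arn, requests SSE-KMS; otherwise SSE-S3 (AES256).
    """
    if not enabled:
        return {"ConfigurationState": "DISABLED"}
    config = {"ConfigurationState": "ENABLED"}
    if kms_key_arn:
        config["EncryptionConfiguration"] = {
            "SseAlgorithm": "aws:kms",
            "KmsKeyArn": kms_key_arn,
        }
    else:
        config["EncryptionConfiguration"] = {"SseAlgorithm": "AES256"}
    return config

print(inventory_table_configuration(enabled=False))
# {'ConfigurationState': 'DISABLED'}
```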

### Using the REST API
<a name="metadata-tables-enable-disable-inventory-tables-rest-api"></a>

You can send REST requests to enable or disable inventory tables. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateBucketMetadataInventoryTableConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateBucketMetadataInventoryTableConfiguration.html).

### Using the AWS SDKs
<a name="metadata-tables-enable-disable-inventory-tables-sdk"></a>

You can use the AWS SDKs to enable or disable inventory tables in Amazon S3. For information, see the [list of supported SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UpdateBucketMetadataInventoryTableConfiguration.html#API_UpdateBucketMetadataInventoryTableConfiguration_SeeAlso).

# Viewing metadata table configurations
<a name="metadata-tables-view-configuration"></a>

If you've created a metadata table configuration for a general purpose bucket, you can view information about the configuration, such as whether an inventory table has been enabled, or whether journal table record expiration has been enabled. You can also view the status of your journal and inventory tables. 

You can view your metadata table configuration for a general purpose bucket by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the Amazon S3 REST API.

## View a metadata table configuration
<a name="metadata-tables-view-configuration-procedure"></a>

### Using the S3 console
<a name="metadata-tables-view-configuration-console"></a>

**To view a metadata table configuration**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. Choose the general purpose bucket that contains the metadata table configuration that you want to view.

1. On the bucket's details page, choose the **Metadata** tab. 

1. On the **Metadata** tab, scroll down to the **Metadata configuration** section. In the **Journal table** and **Inventory table** sections, you can view various information for these configurations, such as their Amazon Resource Names (ARNs), the status of your tables, and whether you've enabled journal table record expiration or an inventory table.

### Using the AWS CLI
<a name="metadata-tables-view-configuration-cli"></a>

To run the following commands, you must have the AWS CLI installed and configured. If you don’t have the AWS CLI installed, see [Install or update to the latest version of the AWS CLI](https://docs.aws.amazon.com//cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

Alternatively, you can run AWS CLI commands from the console by using AWS CloudShell. AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console. For more information, see [What is CloudShell?](https://docs.aws.amazon.com//cloudshell/latest/userguide/welcome.html) and [Getting started with AWS CloudShell](https://docs.aws.amazon.com//cloudshell/latest/userguide/getting-started.html) in the *AWS CloudShell User Guide*.

**To view a metadata table configuration by using the AWS CLI**

To use the following example command, replace the `user input placeholders` with your own information. 

1. Use the following command to view the metadata table configuration for your general purpose bucket (for example, `amzn-s3-demo-bucket`):

   ```
   aws s3api get-bucket-metadata-configuration \
   --bucket amzn-s3-demo-bucket \
   --region us-east-2
   ```

1. View the output of this command to see the status of your metadata table configuration. For example:

   ```
   {
       "GetBucketMetadataConfigurationResult": {
           "MetadataConfigurationResult": {
               "DestinationResult": {
                   "TableBucketType": "aws",
                   "TableBucketArn": "arn:aws:s3tables:us-east-2:111122223333:bucket/aws-managed-s3-111122223333-us-east-2",
                   "TableNamespace": "b_general-purpose-bucket-name"
               },
               "JournalTableConfigurationResult": {
                   "TableStatus": "ACTIVE",
                   "TableName": "journal",
                   "TableArn": "arn:aws:s3tables:us-east-2:111122223333:bucket/aws-managed-s3-111122223333-us-east-2/table/0f01234c-fe7a-492f-a4c7-adec3864ea85",
                   "EncryptionConfiguration": {
                       "SseAlgorithm": "AES256"
                   },
                   "RecordExpiration": {
                       "Expiration": "ENABLED",
                       "Days": 10
                   }
               },
               "InventoryTableConfigurationResult": {
                   "ConfigurationState": "ENABLED",
                   "TableStatus": "BACKFILL_COMPLETE",
                   "TableName": "inventory",
                   "TableArn": "arn:aws:s3tables:us-east-2:111122223333:bucket/aws-managed-s3-111122223333-us-east-2/table/e123456-b876-4e5e-af29-bb055922ee4d",
                   "EncryptionConfiguration": {
                       "SseAlgorithm": "AES256"
                   }
               }
           }
       }
   }
   ```
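If you're scripting against this output, for example while waiting for backfilling to finish, you can pull the table statuses out of the response. The following sketch parses the response shape shown above; `table_statuses` is a hypothetical helper, and the `sample` dictionary is a trimmed-down version of the example output.

```python
def table_statuses(response):
    """Return the journal and inventory table statuses from a
    get-bucket-metadata-configuration response."""
    result = response["GetBucketMetadataConfigurationResult"]["MetadataConfigurationResult"]
    statuses = {"journal": result["JournalTableConfigurationResult"]["TableStatus"]}
    inventory = result.get("InventoryTableConfigurationResult")
    if inventory:
        statuses["inventory"] = inventory["TableStatus"]
    return statuses

# Trimmed-down sample response, matching the example output shown earlier:
sample = {
    "GetBucketMetadataConfigurationResult": {
        "MetadataConfigurationResult": {
            "JournalTableConfigurationResult": {"TableStatus": "ACTIVE"},
            "InventoryTableConfigurationResult": {"TableStatus": "BACKFILL_COMPLETE"},
        }
    }
}

print(table_statuses(sample))
# {'journal': 'ACTIVE', 'inventory': 'BACKFILL_COMPLETE'}
```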

### Using the REST API
<a name="metadata-tables-view-configuration-rest-api"></a>

You can send REST requests to view a metadata table configuration. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetadataConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetadataConfiguration.html).

**Note**  
You can use the V2 `GetBucketMetadataConfiguration` API operation with V1 or V2 metadata table configurations. However, if you try to use the V1 `GetBucketMetadataTableConfiguration` API operation with V2 configurations, you will receive an HTTP `405 Method Not Allowed` error.

### Using the AWS SDKs
<a name="metadata-tables-view-configuration-sdk"></a>

You can use the AWS SDKs to view a metadata table configuration in Amazon S3. For information, see the [list of supported SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketMetadataConfiguration.html#API_GetBucketMetadataConfiguration_SeeAlso).

# Deleting metadata table configurations
<a name="metadata-tables-delete-configuration"></a>

If you want to stop updating the metadata table configuration for an Amazon S3 general purpose bucket, you can delete the metadata table configuration that's attached to your bucket. Deleting a metadata table configuration deletes only the configuration. The AWS managed table bucket and your metadata tables still exist, even if you delete the metadata table configuration. However, the metadata tables will no longer be updated.

**Note**  
If you delete your metadata table configuration and want to re-create a configuration for the same general purpose bucket, you must first manually delete the old journal and inventory tables from your AWS managed table bucket. Otherwise, creating the new metadata table configuration fails because those tables already exist. To delete your metadata tables, see [Deleting metadata tables](metadata-tables-delete-table.md). 

To delete your metadata tables, see [Delete a metadata table](metadata-tables-delete-table.md#delete-metadata-table-procedure). To delete your table bucket, see [Deleting table buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-buckets-delete.html) and [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteTableBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteTableBucket.html) in the *Amazon S3 API Reference*. 

You can delete a metadata table configuration by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the Amazon S3 REST API.

## Delete a metadata table configuration
<a name="delete-metadata-config-procedure"></a>

### Using the S3 console
<a name="delete-metadata-config-console"></a>

**To delete a metadata table configuration**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. Choose the general purpose bucket that you want to remove a metadata table configuration from. 

1. On the bucket's details page, choose the **Metadata** tab. 

1. On the **Metadata** tab, choose **Delete**.

1. In the **Delete metadata configuration** dialog box, enter **confirm** to confirm that you want to delete the configuration. Then choose **Delete**. 

### Using the AWS CLI
<a name="delete-metadata-config-cli"></a>

To run the following commands, you must have the AWS CLI installed and configured. If you don’t have the AWS CLI installed, see [Install or update to the latest version of the AWS CLI](https://docs.aws.amazon.com//cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

Alternatively, you can run AWS CLI commands from the console by using AWS CloudShell. AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console. For more information, see [What is CloudShell?](https://docs.aws.amazon.com//cloudshell/latest/userguide/welcome.html) and [Getting started with AWS CloudShell](https://docs.aws.amazon.com//cloudshell/latest/userguide/getting-started.html) in the *AWS CloudShell User Guide*.

**To delete a metadata table configuration by using the AWS CLI**

To use the following example commands, replace the `user input placeholders` with your own information. 

1. Use the following command to delete the metadata table configuration from your general purpose bucket (for example, `amzn-s3-demo-bucket`):

   ```
   aws s3api delete-bucket-metadata-configuration \
   --bucket amzn-s3-demo-bucket \
   --region us-east-2
   ```

1. To verify that the configuration was deleted, use the following command:

   ```
   aws s3api get-bucket-metadata-configuration \
   --bucket amzn-s3-demo-bucket \
   --region us-east-2
   ```

### Using the REST API
<a name="delete-metadata-config-rest-api"></a>

You can send REST requests to delete a metadata table configuration. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetadataConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetadataConfiguration.html).

**Note**  
You can use the V2 `DeleteBucketMetadataConfiguration` API operation with V1 or V2 metadata table configurations. However, if you try to use the V1 `DeleteBucketMetadataTableConfiguration` API operation with V2 configurations, you will receive an HTTP `405 Method Not Allowed` error.

### Using the AWS SDKs
<a name="delete-metadata-config-sdk"></a>

You can use the AWS SDKs to delete a metadata table configuration in Amazon S3. For information, see the [list of supported SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketMetadataConfiguration.html#API_DeleteBucketMetadataConfiguration_SeeAlso).

# Deleting metadata tables
<a name="metadata-tables-delete-table"></a>

If you want to delete the metadata tables that you created for an Amazon S3 general purpose bucket, you can delete the metadata tables from your AWS managed table bucket. 

**Important**  
Deleting a table is permanent and can't be undone. Before deleting a table, make sure that you have backed up any important data.
Before you delete a metadata table, we recommend that you first delete the associated metadata table configuration on your general purpose bucket. For more information, see [Deleting metadata table configurations](metadata-tables-delete-configuration.md).

To delete your AWS managed table bucket, see [Deleting table buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-buckets-delete.html) and [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteTableBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteTableBucket.html) in the *Amazon S3 API Reference*. Before you delete your AWS managed table bucket, we recommend that you first delete all metadata table configurations that are associated with the bucket. You must also delete all metadata tables in the bucket before you can delete the bucket itself. 

You can delete a metadata table by using the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the Amazon S3 REST API.

## Delete a metadata table
<a name="delete-metadata-table-procedure"></a>

### Using the AWS CLI
<a name="delete-metadata-table-cli"></a>

To run the following commands, you must have the AWS CLI installed and configured. If you don’t have the AWS CLI installed, see [Install or update to the latest version of the AWS CLI](https://docs.aws.amazon.com//cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

Alternatively, you can run AWS CLI commands from the console by using AWS CloudShell. AWS CloudShell is a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console. For more information, see [What is CloudShell?](https://docs.aws.amazon.com//cloudshell/latest/userguide/welcome.html) and [Getting started with AWS CloudShell](https://docs.aws.amazon.com//cloudshell/latest/userguide/getting-started.html) in the *AWS CloudShell User Guide*.

**To delete a metadata table by using the AWS CLI**

To use the following example commands, replace the `user input placeholders` with your own information. 

1. Use the following command to delete the metadata table from your AWS managed table bucket:

   ```
   aws s3tables delete-table \
   --table-bucket-arn arn:aws:s3tables:us-east-2:111122223333:bucket/aws-s3 \
   --namespace b_general-purpose-bucket-name \
   --name journal \
   --region us-east-2
   ```

1. To verify that the table was deleted, use the following command:

   ```
   aws s3tables get-table \
   --table-bucket-arn arn:aws:s3tables:us-east-2:111122223333:bucket/aws-s3 \
   --namespace b_general-purpose-bucket-name \
   --name journal \
   --region us-east-2
   ```

### Using the REST API
<a name="delete-metadata-table-rest-api"></a>

You can send REST requests to delete a metadata table. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteTable.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteTable.html) in the *Amazon S3 API Reference*.

### Using the AWS SDKs
<a name="delete-metadata-table-sdk"></a>

You can use the AWS SDKs to delete a metadata table in Amazon S3. For information, see the [list of supported SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteTable.html#API_s3TableBuckets_DeleteTable_SeeAlso) in the *Amazon S3 API Reference*.

# Querying metadata tables
<a name="metadata-tables-querying"></a>

Your Amazon S3 Metadata tables are stored in an AWS managed S3 table bucket, which provides storage that's optimized for tabular data. To query your metadata, you can integrate your table bucket with Amazon SageMaker Lakehouse. This integration, which uses the AWS Glue Data Catalog and AWS Lake Formation, allows AWS analytics services to automatically discover and access your table data. 

After your table bucket is integrated with the AWS Glue Data Catalog, you can directly query your metadata tables with AWS analytics services such as Amazon Athena, Amazon EMR, and Amazon Redshift. You can also create interactive dashboards with your query data by using Amazon Quick.

For more information about integrating your AWS managed S3 table bucket with Amazon SageMaker Lakehouse, see [Integrating Amazon S3 Tables with AWS analytics services](s3-tables-integrating-aws.md).

You can also query your metadata tables with Apache Spark, Apache Trino, and any other application that supports the Apache Iceberg format by using the AWS Glue Iceberg REST endpoint, Amazon S3 Tables Iceberg REST endpoint, or the Amazon S3 Tables Catalog for Apache Iceberg client catalog. For more information about accessing your metadata tables, see [Accessing table data](s3-tables-access.md).

You can analyze your metadata tables with any query engine that supports the Apache Iceberg format. For example, you can query your metadata tables to do the following:
+ Discover storage usage patterns and trends
+ Audit AWS Key Management Service (AWS KMS) encryption key usage across your objects
+ Search for objects by user-defined metadata and object tags
+ Understand object metadata changes over time
+ Learn when objects are updated or deleted, including the AWS account ID or IP address that made the request

You can also join S3 managed metadata tables and custom metadata tables, allowing you to query across multiple datasets.

## Query pricing considerations
<a name="metadata-tables-querying-pricing"></a>

Additional pricing applies for running queries on your metadata tables. For more information, see pricing information for the query engine that you're using.

For information on making your queries more cost-effective, see [Optimizing metadata table query performance](metadata-tables-optimizing-query-performance.md).

**Topics**
+ [Query pricing considerations](#metadata-tables-querying-pricing)
+ [Permissions for querying metadata tables](metadata-tables-bucket-query-permissions.md)
+ [Querying metadata tables with AWS analytics services](metadata-tables-bucket-integration.md)
+ [Querying metadata tables with open-source query engines](metadata-tables-bucket-integration-open-source.md)
+ [Optimizing metadata table query performance](metadata-tables-optimizing-query-performance.md)
+ [Example metadata table queries](metadata-tables-example-queries.md)

# Permissions for querying metadata tables
<a name="metadata-tables-bucket-query-permissions"></a>

Before you can query your S3 Metadata journal and live inventory tables, you must have certain S3 Tables permissions. If your metadata tables have been encrypted with server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS), you must also have the `kms:Decrypt` permission to decrypt the table data. 

When you create your metadata table configuration, your metadata tables are stored in an AWS managed table bucket. All metadata table configurations in your account and in the same Region are stored in a single AWS managed table bucket named `aws-s3`. 

To query metadata tables, you can use the following example policy. To use this policy, replace the `user input placeholders` with your own information.

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"PermissionsToQueryMetadataTables",
         "Effect":"Allow",
         "Action":[
             "s3tables:GetTable",
             "s3tables:GetTableData",
             "s3tables:GetTableMetadataLocation",
             "kms:Decrypt"
         ],
         "Resource":[
            "arn:aws:s3tables:us-east-1:111122223333:bucket/aws-s3",
            "arn:aws:s3tables:us-east-1:111122223333:bucket/aws-s3/table/*",
            "arn:aws:kms:us-east-1:111122223333:key/01234567-89ab-cdef-0123-456789abcdef"
         ]
       }
    ]
}
```
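If you manage this policy in code, you can fill in the placeholders programmatically. The following sketch mirrors the example statement above (the Region, account ID, and KMS key ID are placeholder assumptions, and the statement isn't an exhaustive permission set):

```python
import json

def metadata_query_policy(region, account_id, kms_key_id):
    """Return the example query policy as a JSON string, with placeholders filled in."""
    table_bucket = f"arn:aws:s3tables:{region}:{account_id}:bucket/aws-s3"
    statement = {
        "Sid": "PermissionsToQueryMetadataTables",
        "Effect": "Allow",
        "Action": [
            "s3tables:GetTable",
            "s3tables:GetTableData",
            "s3tables:GetTableMetadataLocation",
            "kms:Decrypt",
        ],
        "Resource": [
            table_bucket,               # the AWS managed table bucket
            f"{table_bucket}/table/*",  # every metadata table inside it
            f"arn:aws:kms:{region}:{account_id}:key/{kms_key_id}",  # SSE-KMS key
        ],
    }
    return json.dumps({"Version": "2012-10-17", "Statement": [statement]}, indent=3)
```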

# Querying metadata tables with AWS analytics services
<a name="metadata-tables-bucket-integration"></a>

You can query your S3 managed metadata tables with AWS analytics services such as Amazon Athena, Amazon Redshift, and Amazon EMR.

Before you can run queries, you must first [integrate the AWS managed S3 table buckets](s3-tables-integrating-aws.md) in your AWS account and Region with AWS analytics services.

## Querying metadata tables with Amazon Athena
<a name="metadata-tables-bucket-integration-athena"></a>

After you [integrate your AWS managed S3 table buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-integrating-aws.html) with AWS analytics services, you can start querying your metadata tables in Athena. In your queries, do the following: 
+ Specify your catalog as `s3tablescatalog/aws-s3` and your database as `b_general-purpose-bucket-name` (which is typically the namespace for your metadata tables). 
+ Make sure to surround your metadata table namespace names with quotation marks (`"`) or backticks (`` ` ``); otherwise, the query might not work.

For more information, see [Querying Amazon S3 tables with Athena](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-integrating-athena.html).
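Because the namespace names contain hyphens, every part of a fully qualified table name must be quoted as an identifier. As a small sketch (using the double-quote convention that Athena accepts), you can build the reference once and reuse it when generating queries:

```python
def quoted_table_ref(catalog, namespace, table):
    """Wrap each name part in double quotes so hyphenated names parse as identifiers."""
    return ".".join(f'"{part}"' for part in (catalog, namespace, table))

ref = quoted_table_ref("s3tablescatalog/aws-s3", "b_general-purpose-bucket-name", "journal")
# ref -> '"s3tablescatalog/aws-s3"."b_general-purpose-bucket-name"."journal"'
query = f"SELECT key FROM {ref} LIMIT 10"
```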

You can also run queries in Athena from the Amazon S3 console. 

### Using the S3 console and Amazon Athena
<a name="query-metadata-table-console"></a>

The following procedure uses the Amazon S3 console to access the Athena query editor so that you can query a table with Amazon Athena. 

**To query a metadata table**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. On the **General purpose buckets** tab, choose the bucket that contains the metadata configuration for the metadata table that you want to query.

1. On the bucket details page, choose the **Metadata** tab. 

1. Choose **Query table with Athena**, and then choose one of the sample queries for journal or inventory tables.

1. The Amazon Athena console opens and the Athena query editor appears with a sample query loaded for you. Modify this query as needed for your use case.

   In the query editor, the **Catalog** field should be populated with **s3tablescatalog/aws-s3**. The **Database** field should be populated with the namespace where your table is stored (for example, **b\_*general-purpose-bucket-name***). 
**Note**  
If you don't see these values in the **Catalog** and **Database** fields, make sure that you've integrated your AWS managed table bucket with AWS analytics services in this Region. For more information, see [Integrating Amazon S3 Tables with AWS analytics services](s3-tables-integrating-aws.md). 

1. To run the query, choose **Run**.
**Note**  
If you receive the error "Insufficient permissions to execute the query. Principal does not have any privilege on specified resource" when you try to run a query in Athena, you must be granted the necessary Lake Formation permissions on the table. For more information, see [Granting Lake Formation permission on a table or database](grant-permissions-tables.md#grant-lf-table).  
Also make sure that you have the appropriate AWS Identity and Access Management (IAM) permissions to query metadata tables. For more information, see [Permissions for querying metadata tables](metadata-tables-bucket-query-permissions.md).
If you receive the error "Iceberg cannot access the requested resource" when you try to run the query, go to the AWS Lake Formation console and make sure that you've granted yourself permissions on the table bucket catalog and database (namespace) that you created. Don't specify a table when granting these permissions. For more information, see [Granting Lake Formation permission on a table or database](grant-permissions-tables.md#grant-lf-table). 

## Querying metadata tables with Amazon Redshift
<a name="metadata-tables-bucket-integration-redshift"></a>

After you [integrate your AWS managed S3 table buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-integrating-aws.html) with AWS analytics services, do the following:
+ [Create a resource link](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-integrating-aws.html#database-link-tables) to your metadata table namespace (typically `b_general-purpose-bucket-name`). 
+ Make sure to surround your metadata table namespace names with quotation marks (`"`) or backticks (`` ` ``); otherwise, the query might not work. 

After that's done, you can start querying your metadata tables in the Amazon Redshift console. For more information, see [Accessing Amazon S3 tables with Amazon Redshift](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-integrating-redshift.html).

## Querying metadata tables with Amazon EMR
<a name="metadata-tables-bucket-integration-emr"></a>

To query your metadata tables by using Amazon EMR, you create an Amazon EMR cluster configured for Apache Iceberg and connect to your metadata tables using Apache Spark. You can set this up by integrating your AWS managed S3 table buckets with AWS analytics services or using the open-source Amazon S3 Tables Catalog for Iceberg client catalog.

**Note**  
When using Apache Spark on Amazon EMR or other third-party engines to query your metadata tables, we recommend that you use the Amazon S3 Tables Iceberg REST endpoint. Your query might not run successfully if you don't use this endpoint. For more information, see [Accessing tables using the Amazon S3 Tables Iceberg REST endpoint](s3-tables-integrating-open-source.md).

For more information, see [Accessing Amazon S3 tables with Amazon EMR](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-integrating-emr.html).

# Querying metadata tables with open-source query engines
<a name="metadata-tables-bucket-integration-open-source"></a>

You can query your S3 managed metadata tables by using open-source query engines, such as Apache Spark. When using Apache Spark on Amazon EMR or other third-party engines to query your metadata tables, we recommend that you use the Amazon S3 Tables Iceberg REST endpoint. Your query might not run successfully if you don't use this endpoint. For more information, see [Accessing tables using the Amazon S3 Tables Iceberg REST endpoint](s3-tables-integrating-open-source.md).

# Optimizing metadata table query performance
<a name="metadata-tables-optimizing-query-performance"></a>

Because S3 Metadata is based on the Apache Iceberg table format, you can optimize the performance and [cost](#metadata-tables-querying-pricing) of your journal table queries by using specific time ranges.

For example, the following SQL query provides the sensitivity level of new objects in an S3 general purpose bucket:

```
SELECT key, object_tags['SensitivityLevel'] 
FROM "b_general-purpose-bucket-name"."journal"
WHERE record_type = 'CREATE'
```

This query scans the entire journal table, which might take a long time to run. To improve performance, you can include the `record_timestamp` column to focus on a specific time range. We also recommend using the fully qualified table name, which you can find in the Amazon S3 console on the metadata configuration details page on the general purpose bucket's **Metadata** tab. Here's an updated version of the previous query that looks at new objects from the past month:

```
SELECT key, object_tags['SensitivityLevel'] 
FROM "s3tablescatalog/aws-s3"."b_general-purpose-bucket-name"."journal"
WHERE record_type = 'CREATE'
AND record_timestamp > (CURRENT_TIMESTAMP - interval '1' month)
```
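If you generate such queries programmatically, you can compute the cutoff client-side instead of relying on `CURRENT_TIMESTAMP` arithmetic, which also lets you parameterize the window. A minimal sketch (the timestamp literal format shown is an assumption about what your query engine accepts):

```python
from datetime import datetime, timedelta, timezone

def cutoff_literal(days):
    """Return a SQL timestamp literal for 'now minus N days' in UTC."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return f"TIMESTAMP '{cutoff:%Y-%m-%d %H:%M:%S}'"

# Append a time-range filter for roughly the past month (30 days).
clause = f"AND record_timestamp > {cutoff_literal(30)}"
```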

To improve the performance of queries on inventory tables, make sure that you query only on the minimum columns that you need. 

# Example metadata table queries
<a name="metadata-tables-example-queries"></a>

The following examples show how you can get different types of information from your S3 Metadata tables by using standard SQL queries.

Remember the following when using these examples:
+ The examples are written to work with Amazon Athena. You might have to modify the examples to work with a different query engine.
+ Make sure that you understand how to [optimize your queries](metadata-tables-optimizing-query-performance.md).
+ Replace `b_general-purpose-bucket-name` with the name of your namespace. 
+ For a full list of supported columns, see the [S3 Metadata journal tables schema](metadata-tables-schema.md) and [S3 Metadata live inventory tables schema](metadata-tables-inventory-schema.md). 

**Contents**
+ [Journal table example queries](#metadata-tables-example-queries-journal-tables)
  + [Finding objects by file extension](#metadata-tables-example-query-object-pattern)
  + [Listing object deletions](#metadata-tables-example-query-delete-events)
  + [Listing AWS KMS encryption keys used by your objects](#metadata-tables-example-query-objects-using-kms-key)
  + [Listing objects that don't use KMS keys](#metadata-tables-example-query-objects-not-using-kms-key)
  + [Listing AWS KMS encryption keys used for `PUT` operations in the last 7 days](#metadata-tables-example-query-objects-using-kms-key-puts)
  + [Listing objects deleted in the last 24 hours by S3 Lifecycle](#metadata-tables-example-query-objects-deleted-lifecycle)
  + [Viewing metadata provided by Amazon Bedrock](#metadata-tables-example-query-bedrock)
  + [Understanding the current state of your objects](#metadata-tables-example-query-current-state)
+ [Inventory table example queries](#metadata-tables-example-queries-inventory-tables)
  + [Discovering datasets that use specific tags](#metadata-tables-example-query-datasets-specific-tags)
  + [Listing objects not encrypted with SSE-KMS](#metadata-tables-example-query-objects-not-kms-encrypted)
  + [Listing objects that aren't encrypted](#metadata-tables-example-query-objects-not-encrypted)
  + [Listing objects generated by Amazon Bedrock](#metadata-tables-example-query-objects-generated-bedrock)
  + [Reconciling the inventory table with the journal table](#metadata-tables-example-query-generate-latest-inventory)
  + [Finding the current versions of your objects](#metadata-tables-example-query-latest-version)
+ [Joining custom metadata with S3 metadata tables](metadata-tables-join-custom-metadata.md)
+ [Visualizing metadata table data with Amazon Quick](metadata-tables-quicksight-dashboards.md)

## Journal table example queries
<a name="metadata-tables-example-queries-journal-tables"></a>

You can use the following example queries to query your journal tables.

### Finding objects by file extension
<a name="metadata-tables-example-query-object-pattern"></a>

The following query returns objects with a specific file extension (`.jpg` in this case):

```
SELECT key FROM "s3tablescatalog/aws-s3"."b_general-purpose-bucket-name"."journal"
WHERE key LIKE '%.jpg'
AND record_type = 'CREATE'
```

### Listing object deletions
<a name="metadata-tables-example-query-delete-events"></a>

The following query returns object deletion events, including the AWS account ID or AWS service principal that made the request:

```
SELECT DISTINCT bucket, key, sequence_number, record_type, record_timestamp, requester, source_ip_address, version_id
FROM "s3tablescatalog/aws-s3"."b_general-purpose-bucket-name"."journal"
WHERE record_type = 'DELETE';
```

### Listing AWS KMS encryption keys used by your objects
<a name="metadata-tables-example-query-objects-using-kms-key"></a>

The following query returns the ARNs of the AWS Key Management Service (AWS KMS) keys encrypting your objects:

```
SELECT DISTINCT kms_key_arn
FROM "s3tablescatalog/aws-s3"."b_general-purpose-bucket-name"."journal";
```

### Listing objects that don't use KMS keys
<a name="metadata-tables-example-query-objects-not-using-kms-key"></a>

The following query returns objects that aren't encrypted with AWS KMS keys:

```
SELECT DISTINCT bucket, key
FROM "s3tablescatalog/aws-s3"."b_general-purpose-bucket-name"."journal"
WHERE encryption_status NOT IN ('SSE-KMS', 'DSSE-KMS')
AND record_type = 'CREATE';
```

### Listing AWS KMS encryption keys used for `PUT` operations in the last 7 days
<a name="metadata-tables-example-query-objects-using-kms-key-puts"></a>

The following query returns the ARNs of the AWS Key Management Service (AWS KMS) keys that were used to encrypt objects in the last 7 days:

```
SELECT DISTINCT kms_key_arn 
FROM "s3tablescatalog/aws-s3"."b_general-purpose-bucket-name"."journal"
WHERE record_timestamp > (current_date - interval '7' day)
AND kms_key_arn is NOT NULL;
```

### Listing objects deleted in the last 24 hours by S3 Lifecycle
<a name="metadata-tables-example-query-objects-deleted-lifecycle"></a>

The following query lists the objects that S3 Lifecycle expired in the last 24 hours:

```
SELECT bucket, key, version_id, last_modified_date, record_timestamp, requester
FROM "s3tablescatalog/aws-s3"."b_general-purpose-bucket-name"."journal"
WHERE requester = 's3.amazonaws.com'
AND record_type = 'DELETE' 
AND record_timestamp > (current_date - interval '1' day)
```

### Viewing metadata provided by Amazon Bedrock
<a name="metadata-tables-example-query-bedrock"></a>

Some AWS services (such as [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/APIReference/welcome.html)) upload objects to Amazon S3. You can query the object metadata provided by these services. For example, the following query includes the `user_metadata` column to identify objects that Amazon Bedrock uploaded to a general purpose bucket:

```
SELECT DISTINCT bucket, key, sequence_number, record_type, record_timestamp, user_metadata
FROM "s3tablescatalog/aws-s3"."b_general-purpose-bucket-name"."journal"
WHERE record_type = 'CREATE'
AND user_metadata['content-source'] = 'AmazonBedrock';
```

If Amazon Bedrock uploaded an object to your bucket, the `user_metadata` column will display the following metadata associated with the object in the query result:

```
user_metadata
{content-additional-params -> requestid="CVK8FWYRW0M9JW65", signedContentSHA384="38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b", content-model-id -> bedrock-model-arn, content-source -> AmazonBedrock}
```
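If you post-process query results in your own code, the `user_metadata` column behaves like a key-value map. A short sketch (the dict-per-row shape is an assumption about how your query client returns map columns) that applies the same `content-source` filter client-side:

```python
def bedrock_objects(rows):
    """Keep only rows whose user_metadata marks Amazon Bedrock as the content source."""
    return [
        row for row in rows
        if row.get("user_metadata", {}).get("content-source") == "AmazonBedrock"
    ]

rows = [
    {"key": "model-output.txt", "user_metadata": {"content-source": "AmazonBedrock"}},
    {"key": "photo.jpg", "user_metadata": {}},
]
# bedrock_objects(rows) keeps only "model-output.txt"
```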

### Understanding the current state of your objects
<a name="metadata-tables-example-query-current-state"></a>

The following query can help you determine the current state of your objects. The query identifies the most recent version of each object, filters out deleted objects, and marks the latest version of each object based on sequence numbers. Results are ordered by the `bucket`, `key`, and `sequence_number` columns.

```
WITH records_of_interest as (
   -- Start with a query that can narrow down the records of interest.
    SELECT * from "s3tablescatalog/aws-s3"."b_general-purpose-bucket-name"."journal"
),

version_stacks as (
   SELECT *,
          -- Introduce a column called 'next_sequence_number', which is the next larger
          -- sequence_number for the same key version_id in sorted order.
          LEAD(sequence_number, 1) over (partition by (bucket, key, coalesce(version_id, '')) order by sequence_number ASC) as next_sequence_number
   from records_of_interest
),

-- Pick the 'tip' of each version stack triple: (bucket, key, version_id).
-- The tip of the version stack is the row of that triple with the largest sequencer.
-- Selecting only the tip filters out any row duplicates.
-- This isn't typical, but some events can be delivered more than once to the table
-- and include rows that might no longer exist in the bucket (since the
-- table contains rows for both extant and extinct objects).
-- In the next subquery, eliminate the rows that contain deleted objects.
current_versions as (
    SELECT * from version_stacks where next_sequence_number is NULL
),

-- Eliminate the rows that are extinct from the bucket by filtering with
-- record_type. An object version has been deleted from the bucket if its tip is
-- record_type==DELETE.
existing_current_versions as (
    SELECT * from current_versions where not (record_type = 'DELETE' and is_delete_marker = FALSE)
),

-- Optionally, to determine which of several object versions is the 'latest',
-- you can compare their sequence numbers. A version_id is the latest if its
-- tip's sequencer is the largest among all other tips in the same key.
with_is_latest as (
    SELECT *,
           -- Determine if the sequence_number of this row is the same as the largest sequencer for the key that still exists.
           sequence_number = (MAX(sequence_number) over (partition by (bucket, key))) as is_latest_version
    FROM existing_current_versions
)

SELECT * from with_is_latest
ORDER BY bucket, key, sequence_number;
```
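The same "tip of the version stack" logic can be expressed procedurally, which may help when validating the query's output against a small sample of journal rows. A sketch (journal rows are modeled as dicts keyed by the column names used above):

```python
def current_objects(journal_rows):
    """Return the surviving tip row per (bucket, key, version_id) version stack."""
    tips = {}
    for row in journal_rows:
        stack = (row["bucket"], row["key"], row.get("version_id") or "")
        # Keep only the row with the largest sequence_number for each stack.
        if stack not in tips or row["sequence_number"] > tips[stack]["sequence_number"]:
            tips[stack] = row
    # A version is gone if its tip is a DELETE that isn't a delete marker.
    return [
        r for r in tips.values()
        if not (r["record_type"] == "DELETE" and not r.get("is_delete_marker", False))
    ]
```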

## Inventory table example queries
<a name="metadata-tables-example-queries-inventory-tables"></a>

You can use the following example queries to query your inventory tables.

### Discovering datasets that use specific tags
<a name="metadata-tables-example-query-datasets-specific-tags"></a>

The following query returns the dataset that uses the specified tags:

```
SELECT * 
FROM "s3tablescatalog/aws-s3"."b_general-purpose-bucket-name"."inventory"
WHERE object_tags['key1'] = 'value1'
AND object_tags['key2'] = 'value2';
```

### Listing objects not encrypted with SSE-KMS
<a name="metadata-tables-example-query-objects-not-kms-encrypted"></a>

The following query returns objects that aren't encrypted with SSE-KMS:

```
SELECT key, encryption_status 
FROM "s3tablescatalog/aws-s3"."b_general-purpose-bucket-name"."inventory"
WHERE encryption_status != 'SSE-KMS';
```

### Listing objects that aren't encrypted
<a name="metadata-tables-example-query-objects-not-encrypted"></a>

The following query returns objects that aren't encrypted:

```
SELECT bucket, key, version_id  
FROM "s3tablescatalog/aws-s3"."b_general-purpose-bucket-name"."inventory"
WHERE encryption_status IS NULL;
```

### Listing objects generated by Amazon Bedrock
<a name="metadata-tables-example-query-objects-generated-bedrock"></a>

The following query lists objects that were generated by Amazon Bedrock:

```
SELECT DISTINCT bucket, key, sequence_number, user_metadata
FROM "s3tablescatalog/aws-s3"."b_general-purpose-bucket-name"."inventory"
WHERE user_metadata['content-source'] = 'AmazonBedrock';
```

### Reconciling the inventory table with the journal table
<a name="metadata-tables-example-query-generate-latest-inventory"></a>

The following query generates an inventory-table-like list that's up to date with the current contents of the bucket. More precisely, the resulting list combines the latest snapshot of the inventory table with the latest events in the journal table. 

For this query to produce the most accurate results, both the journal and inventory tables must be in Active status.

We recommend using this query for general purpose buckets containing fewer than a billion (10^9) objects.

This example query applies the following simplifications to the list results (compared to the inventory table):
+ **Column omissions** – The columns `bucket`, `is_multipart`, `encryption_status`, `is_bucket_key_enabled`, `kms_key_arn`, and `checksum_algorithm` aren't part of the final results. Keeping the set of optional columns to a minimum improves performance.
+ **Inclusion of all records** – The query returns all object keys and versions, including the null version (in unversioned or versioning-suspended buckets) and delete markers. For examples of how to filter the results to show only the keys that you're interested in, see the `WHERE` clause at the end of the query.
+ **Accelerated reconciliation** – The query could, in rare cases, temporarily report objects that are no longer in the bucket. Those discrepancies are eliminated as soon as the next snapshot of the inventory table becomes available. This behavior is a tradeoff between performance and accuracy.

To run this query in Amazon Athena, make sure to select the `s3tablescatalog/aws-s3` catalog and the `b_general-purpose-bucket-name` database for the general purpose bucket metadata configuration that contains your journal and inventory tables.

```
WITH inventory_time_cte AS (
    SELECT COALESCE(inventory_time_from_property, inventory_time_default) AS inventory_time FROM
    (
      SELECT * FROM
        (VALUES (TIMESTAMP '2024-12-01 00:00')) AS T (inventory_time_default)
      LEFT OUTER JOIN
        (
         SELECT from_unixtime(CAST(value AS BIGINT) / 1000.0) AS inventory_time_from_property FROM "journal$properties"
         WHERE key = 'aws.s3metadata.oldest-uncoalesced-record-timestamp' LIMIT 1
        )
      ON TRUE
    )
),

working_set AS (
    SELECT
        key,
        sequence_number,
        version_id,
        is_delete_marker,
        size,
        COALESCE(last_modified_date, record_timestamp) AS last_modified_date,
        e_tag,
        storage_class,
        object_tags,
        user_metadata,
        (record_type = 'DELETE' AND NOT COALESCE(is_delete_marker, FALSE)) AS _is_perm_delete
    FROM journal j
    CROSS JOIN inventory_time_cte t
    WHERE j.record_timestamp > (t.inventory_time - interval '15' minute)

    UNION ALL

    SELECT
        key,
        sequence_number,
        version_id,
        is_delete_marker,
        size,
        last_modified_date,
        e_tag,
        storage_class,
        object_tags,
        user_metadata,
        FALSE AS _is_perm_delete
    FROM inventory i
),

updated_inventory AS (
    SELECT * FROM (
        SELECT *,
            MAX(sequence_number) OVER (PARTITION BY key, version_id) AS _supremum_sn
        FROM working_set
    )
    WHERE sequence_number = _supremum_sn
)

SELECT
    key,
    sequence_number,
    version_id,
    is_delete_marker,
    size,
    last_modified_date,
    e_tag,
    storage_class,
    object_tags,
    user_metadata
FROM updated_inventory
-- This filter omits only permanent deletes from the results. Delete markers will still be shown.
WHERE NOT _is_perm_delete
-- You can add additional filters here. Examples:
--    AND object_tags['department'] = 'billing'
--    AND starts_with(key, 'reports/')
ORDER BY key ASC, sequence_number DESC;
```
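The reconciliation step itself — keep the record with the highest sequence number per `(key, version_id)` across both tables, then drop permanent deletes — can be sketched procedurally. (Rows are modeled as dicts here, an assumption about your client; the real query also handles the inventory-time window that this sketch omits.)

```python
def reconcile(inventory_rows, journal_rows):
    """Overlay recent journal events on an inventory snapshot."""
    latest = {}
    # Inventory rows are never permanent deletes; journal rows may be.
    for row in inventory_rows + journal_rows:
        ident = (row["key"], row.get("version_id"))
        if ident not in latest or row["sequence_number"] > latest[ident]["sequence_number"]:
            latest[ident] = row
    # Drop rows whose winning record is a permanent delete (DELETE without a delete marker).
    return [
        r for r in latest.values()
        if not (r.get("record_type") == "DELETE" and not r.get("is_delete_marker", False))
    ]
```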

### Finding the current versions of your objects
<a name="metadata-tables-example-query-latest-version"></a>

The following query uses the inventory table to generate a new output table that shows which object versions are current. The output table is intentionally similar to an S3 Inventory report. The output table includes an `is_latest` field, which indicates if an object is the current version. The `is_latest` field is equivalent to the **IsLatest** field in an [S3 Inventory report](storage-inventory.md#storage-inventory-contents). 

This query works for general purpose buckets with [S3 Versioning](Versioning.md) in a versioning-enabled or versioning-suspended state. 

**Prerequisites**  
The query outputs its results to a new S3 table, which supports further queries and performs better than returning rows on screen. Therefore, before running this query, make sure that you've met the following conditions. If you choose not to output the results to a new table, you can skip these steps. 
+ You must have an existing customer-managed table bucket with an existing namespace as a place to output the new table. For more information, see [Creating a table bucket](s3-tables-buckets-create.md) and [Creating a namespace](s3-tables-namespace-create.md). 
+ To query your new output table, you must set up an access method for querying it. For more information, see [Accessing table data](s3-tables-access.md). If you want to query the output table with AWS analytics services such as Amazon Athena, your customer-managed table bucket must be integrated with AWS analytics services. For more information, see [Amazon S3 Tables integration with AWS analytics services overview](s3-tables-integration-overview.md). 

To use this query, replace `amzn-s3-demo-table-bucket` with the name of the existing customer-managed table bucket where you want the new output table to be created. Replace *`existing_namespace`* with the name of the namespace where you want the output table to be created in your table bucket. Replace *`new_table`* with the name that you want to use for the output table. Make sure that the name of your output table follows the [table naming rules](s3-tables-buckets-naming.md#naming-rules-table).

To run this query in Amazon Athena, make sure to select the `s3tablescatalog/aws-s3` catalog and the `b_general-purpose-bucket-name` database for the general purpose bucket metadata configuration that contains your inventory table. 

```
-- If you don't want to output the results to a new table, remove the following two lines 
-- (everything before the WITH clause). 
CREATE TABLE "s3tablescatalog/amzn-s3-demo-table-bucket"."existing_namespace"."new_table" 
as (
WITH 
my_inventory AS (
  SELECT 
        bucket,
        key,
        version_id,
        sequence_number,
        is_delete_marker,
        size,
        last_modified_date,
        storage_class
  FROM inventory
-- For prefix filtering, use a WHERE clause with % at the end.
--     WHERE key LIKE 'prefix%'
  ),
 
inventory_with_is_latest as (
SELECT *,
       ROW_NUMBER() OVER (
         PARTITION BY key 
         ORDER BY sequence_number DESC
       ) = 1 AS is_latest
FROM my_inventory
    )

SELECT
        bucket,
        key,
        version_id,
        sequence_number,
        is_delete_marker,
        size,
        last_modified_date,
        storage_class,
        is_latest

FROM inventory_with_is_latest

-- If you want only the current version of each key, uncomment the following WHERE clause.
-- WHERE is_latest = TRUE
-- If you aren't outputting the results to a new table, remove the next line: 
);
```

# Joining custom metadata with S3 metadata tables
<a name="metadata-tables-join-custom-metadata"></a>

You can analyze data across your AWS managed metadata tables and your own self-managed metadata tables by using a standard SQL `JOIN` operator to query these multiple sources together.

The following example SQL query finds matching records between an AWS managed journal table (`"journal"`) and a self-managed metadata table (`my_self_managed_metadata_table`). The query also filters information based on `CREATE` events, which indicate that a new object (or a new version of the object) was written to the bucket. (For more information, see the [S3 Metadata journal tables schema](metadata-tables-schema.md).)

```
SELECT *
FROM "s3tablescatalog/aws-s3"."b_general-purpose-bucket-name"."journal" a
JOIN "my_namespace"."my_self_managed_metadata_table" b
ON a.bucket = b.bucket AND a.key = b.key AND a.version_id = b.version_id
WHERE a.record_type = 'CREATE';
```

The following example SQL query finds matching records between an AWS managed inventory table (`"inventory"`) and a self-managed metadata table (`my_self_managed_metadata_table`):

```
SELECT *
FROM "s3tablescatalog/aws-s3"."b_general-purpose-bucket-name"."inventory" a
JOIN "my_namespace"."my_self_managed_metadata_table" b
ON a.bucket = b.bucket AND a.key = b.key AND a.version_id = b.version_id;
```
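The join condition — matching on `bucket`, `key`, and `version_id` — can also be mirrored client-side if you export both result sets, for example to spot-check the SQL join. A sketch (rows as dicts is an assumption about your client; on column-name clashes the managed table's values win here):

```python
def join_on_object(managed_rows, custom_rows):
    """Inner-join two row lists on the (bucket, key, version_id) triple."""
    index = {(r["bucket"], r["key"], r["version_id"]): r for r in custom_rows}
    joined = []
    for row in managed_rows:
        match = index.get((row["bucket"], row["key"], row["version_id"]))
        if match:
            joined.append({**match, **row})  # managed columns overwrite duplicates
    return joined
```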

# Visualizing metadata table data with Amazon Quick
<a name="metadata-tables-quicksight-dashboards"></a>

With Amazon Quick, you can create interactive dashboards to analyze and visualize SQL query results about your S3 managed metadata tables. Quick dashboards can help you monitor statistics, track changes, and get operational insights about your metadata tables.

A dashboard about your journal table might show you:
+ What's the percentage of object uploads compared to deletions?
+ Which objects were deleted by S3 Lifecycle in the past 24 hours?
+ Which IP addresses did the most recent `PUT` requests come from?

A dashboard about your inventory table might help you answer questions such as:
+ How many objects are in different storage classes?
+ What percentage of your storage data is small objects compared to large objects?
+ What types of objects are in your bucket?
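As an illustration of the first journal-table question, the split between uploads and deletions can be derived from the `record_type` values (`CREATE` and `DELETE`) in the journal tables schema. The following Python sketch applies that calculation to a hand-written sample of journal records; in practice, a Quick dashboard would compute the equivalent aggregation with a SQL query against the journal table.

```python
# Sketch: compute the upload-vs-deletion split that a Quick dashboard
# widget might display. The record_type values ('CREATE', 'DELETE') come
# from the S3 Metadata journal tables schema; the sample records below
# are invented for illustration.

from collections import Counter

def upload_deletion_split(records):
    """Return (create_pct, delete_pct) for a list of journal records."""
    total = len(records)
    if total == 0:
        return (0.0, 0.0)
    counts = Counter(r["record_type"] for r in records)
    return (100.0 * counts["CREATE"] / total,
            100.0 * counts["DELETE"] / total)

sample = [
    {"key": "a.txt", "record_type": "CREATE"},
    {"key": "b.txt", "record_type": "CREATE"},
    {"key": "c.txt", "record_type": "CREATE"},
    {"key": "a.txt", "record_type": "DELETE"},
]

create_pct, delete_pct = upload_deletion_split(sample)
print(f"Uploads: {create_pct:.0f}%, deletions: {delete_pct:.0f}%")  # Uploads: 75%, deletions: 25%
```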

After you [integrate your S3 table buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-integrating-aws.html) with AWS analytics services, you can create datasets from your metadata tables and work with them in Amazon Quick using SPICE or direct SQL queries from your query engine. Quick supports Amazon Athena and Amazon Redshift as data sources.

For more information, see [Visualizing table data with Amazon Quick](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-integrating-quicksight.html).

# Troubleshooting S3 Metadata
<a name="metadata-tables-troubleshooting"></a>

Use the following information to help you diagnose and fix common issues that you might encounter when working with Amazon S3 Metadata.

## I'm unable to delete my AWS managed table bucket and metadata tables
<a name="metadata-tables-troubleshooting-cannot-delete-aws-managed-bucket-or-tables"></a>

Before you can delete a metadata table, you must first delete the associated metadata table configuration on your general purpose bucket. For more information, see [Deleting metadata table configurations](metadata-tables-delete-configuration.md).

Before you can delete your AWS managed table bucket, you must delete all metadata table configurations that are associated with this bucket and all metadata tables in the bucket. For more information, see [Deleting metadata table configurations](metadata-tables-delete-configuration.md) and [Deleting metadata tables](metadata-tables-delete-table.md). 

## I'm unable to set or change the encryption settings for my AWS managed metadata table
<a name="metadata-tables-troubleshooting-cannot-change-encryption"></a>

When you create your metadata table configuration, you can choose to encrypt your AWS managed metadata tables with server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS). If you choose to use SSE-KMS, you must provide a customer managed KMS key in the same Region as your general purpose bucket. You can set the encryption type for your tables only during table creation. After an AWS managed table is created, you can't change its encryption setting. To specify SSE-KMS for your metadata tables, you must have certain permissions. For more information, see [Permissions for SSE-KMS](metadata-tables-permissions.md#metadata-kms-permissions).

The encryption setting for a metadata table takes precedence over the default bucket-level encryption setting. If you don't specify encryption for a table, it inherits the default encryption setting from the table bucket.

By default, AWS managed table buckets are encrypted with server-side encryption using Amazon S3 managed keys (SSE-S3). After you create your first metadata configuration, you can set the default encryption setting for the AWS managed table bucket to use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS). For more information, see [Encryption for AWS managed table buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-aws-managed-buckets.html#aws-managed-buckets-encryption) and [Specifying server-side encryption with AWS KMS keys (SSE-KMS) in table buckets](s3-tables-kms-specify.md).

## When I try to re-create my metadata table configuration, I get an error
<a name="metadata-tables-troubleshooting-cannot-recreate-configuration"></a>

Deleting a metadata table configuration deletes only the configuration. The AWS managed table bucket and your metadata tables still exist, even if you delete the metadata table configuration. 

If you delete your metadata table configuration and want to re-create a configuration for the same general purpose bucket, you must first manually delete the old journal and inventory tables from your AWS managed table bucket. Otherwise, creating the new metadata table configuration fails because those tables already exist. 

To delete your metadata tables, see [Deleting metadata tables](metadata-tables-delete-table.md).

## I can't enable an inventory table on my configuration
<a name="metadata-tables-troubleshooting-cannot-enable-inventory"></a>

If you created your S3 Metadata configuration before July 15, 2025, you can't enable an inventory table on that configuration. We recommend that you delete and re-create your configuration so that you can create an inventory table and expire journal table records. For more information, see [Enabling inventory tables on metadata configurations created before July 15, 2025](metadata-tables-create-configuration.md#metadata-tables-migration).

## I can't enable journal table record expiration on my configuration
<a name="metadata-tables-troubleshooting-cannot-enable-record-expiration"></a>

If you created your S3 Metadata configuration before July 15, 2025, you can't enable journal table record expiration on that configuration. We recommend that you delete and re-create your configuration so that you can expire journal table records and create an inventory table. For more information, see [Enabling inventory tables on metadata configurations created before July 15, 2025](metadata-tables-create-configuration.md#metadata-tables-migration).

## I can't query my metadata tables
<a name="metadata-tables-troubleshooting-cannot-query-metadata-tables"></a>

If you're unable to query your metadata tables, check the following:
+ When you're using Amazon Athena or Amazon Redshift to query your metadata tables, you must surround your metadata table namespace names with quotation marks (`"`) or backticks (`` ` ``). Otherwise, the query might not work.
+ When using Apache Spark on Amazon EMR or other third-party engines to query your metadata tables, we recommend that you use the Amazon S3 Tables Iceberg REST endpoint. Your query might not run successfully if you don't use this endpoint. For more information, see [Accessing tables using the Amazon S3 Tables Iceberg REST endpoint](s3-tables-integrating-open-source.md).
+ Make sure that you have the appropriate AWS Identity and Access Management (IAM) permissions to query metadata tables. For more information, see [Permissions for querying metadata tables](metadata-tables-bucket-query-permissions.md).
+ If you're using Amazon Athena and receive errors when you try to run your queries, do the following:
  + If you receive the error "Insufficient permissions to execute the query. Principal does not have any privilege on specified resource" when you try to run a query in Athena, you must be granted the necessary Lake Formation permissions on the table. For more information, see [Granting Lake Formation permission on a table or database](grant-permissions-tables.md#grant-lf-table).
  + If you receive the error "Iceberg cannot access the requested resource" when you try to run the query, go to the AWS Lake Formation console and make sure that you've granted yourself permissions on the table bucket catalog and database (namespace) that you created. Don't specify a table when granting these permissions. For more information, see [Granting Lake Formation permission on a table or database](grant-permissions-tables.md#grant-lf-table). 
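As a sketch of the quoting rule in the first item above, the following Python helper builds a fully quoted Athena table identifier from catalog, namespace, and table name parts. The helper itself is illustrative and not part of any AWS SDK; it only shows why quoting matters for names that contain characters such as `/` or `-`.

```python
# Sketch: build a fully quoted Athena identifier for a metadata table.
# This helper is illustrative only; it is not part of any AWS SDK.
# Athena accepts double-quoted (or backtick-quoted) identifier parts,
# which is required for names containing characters such as '/' or '-'.

def quoted_identifier(*parts: str) -> str:
    """Wrap each identifier part in double quotes and join with dots."""
    for part in parts:
        if '"' in part:
            raise ValueError(f"Unexpected quote in identifier part: {part!r}")
    return ".".join(f'"{part}"' for part in parts)

table = quoted_identifier(
    "s3tablescatalog/aws-s3",         # catalog (contains '/', so quoting is required)
    "b_general-purpose-bucket-name",  # namespace (contains '-')
    "journal",                        # table name
)
query = f"SELECT * FROM {table} WHERE record_type = 'CREATE' LIMIT 10;"
print(query)
```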

## I'm receiving 405 errors when I try to use certain S3 Metadata AWS CLI commands and API operations
<a name="metadata-tables-troubleshooting-405-errors"></a>

Calling the V1 `GetBucketMetadataTableConfiguration` API operation or using the `get-bucket-metadata-table-configuration` AWS Command Line Interface (AWS CLI) command against a V2 metadata table configuration results in an HTTP `405 Method Not Allowed` error. Likewise, calling the V1 `DeleteBucketMetadataTableConfiguration` API operation or using the `delete-bucket-metadata-table-configuration` AWS CLI command also causes a 405 error.

You can use the V2 `GetBucketMetadataConfiguration` API operation or the `get-bucket-metadata-configuration` AWS CLI command against a V1 or V2 metadata table configuration. Likewise, you can use the V2 `DeleteBucketMetadataConfiguration` API operation or the `delete-bucket-metadata-configuration` AWS CLI command against a V1 or V2 metadata table configuration.

We recommend updating your processes to use the new V2 API operations (`CreateBucketMetadataConfiguration`, `GetBucketMetadataConfiguration`, and `DeleteBucketMetadataConfiguration`) instead of the V1 API operations. For more information about migrating from V1 of S3 Metadata to V2, see [Enabling inventory tables on metadata configurations created before July 15, 2025](metadata-tables-create-configuration.md#metadata-tables-migration).

To determine whether your configuration is V1 or V2, check the `TableBucketType` attribute in your `GetBucketMetadataConfiguration` API response. An AWS managed table bucket type (`"aws"`) indicates a V2 configuration, and a customer managed table bucket type (`"customer"`) indicates a V1 configuration.

```
"MetadataTableConfigurationResult": {
   "TableBucketType": ["aws" | "customer"],
   ...
}
```

For more information, see [Viewing metadata table configurations](metadata-tables-view-configuration.md).
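The version check above can also be sketched in code. The following Python snippet classifies a configuration as V1 or V2 from the `TableBucketType` field of a `GetBucketMetadataConfiguration` response; the sample response dictionary is hand-written for illustration rather than fetched from the API, which in practice would return additional fields.

```python
# Sketch: classify a metadata configuration as V1 or V2 from the
# TableBucketType field of a GetBucketMetadataConfiguration response.
# The sample response below is hand-written for illustration; in
# practice it would come from the S3 API (for example, via an SDK call).

def configuration_version(response: dict) -> str:
    """Return 'V2' for an AWS managed table bucket, 'V1' for a customer managed one."""
    bucket_type = response["MetadataTableConfigurationResult"]["TableBucketType"]
    if bucket_type == "aws":
        return "V2"
    if bucket_type == "customer":
        return "V1"
    raise ValueError(f"Unknown TableBucketType: {bucket_type!r}")

sample_response = {
    "MetadataTableConfigurationResult": {
        "TableBucketType": "aws",
    }
}

print(configuration_version(sample_response))  # V2
```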