

# Working with Amazon S3 Files
<a name="s3-files"></a>

## What is S3 Files?
<a name="s3-files-what-is"></a>

S3 Files is a shared file system that connects any AWS compute resource directly with your data in Amazon S3. It provides fast, direct access to all of your S3 data as files with full file system semantics and low-latency performance, without your data ever leaving S3. Every file-based application, agent, and team can access and work with your S3 data as a file system using the tools they already depend on. Built using Amazon EFS, S3 Files gives you the performance and simplicity of a file system with the scalability, durability, and cost-effectiveness of S3. You can read, write, and organize data using file and directory operations, while S3 Files manages the synchronization of changes between your bucket and file system.

## How does S3 Files work?
<a name="s3-files-how-it-works"></a>

When you create an S3 file system linked to your S3 bucket or to a prefix within it and mount it on a compute resource such as an EC2 instance or a Lambda function, S3 Files first presents a traversable view of your bucket's objects as files. As you navigate through directories and open files, associated metadata and contents are placed onto the file system's high-performance storage. When you read files, S3 Files loads file contents onto the high-performance storage on demand without duplicating your entire dataset. When you write data, your writes go to the high-performance storage and are synchronized back to your S3 bucket. S3 Files intelligently translates your file system operations into efficient S3 requests on your behalf. Many read operations bypass the file system entirely, with data served directly from S3.

You can configure the file size threshold for what gets loaded onto the high-performance storage (by default, files smaller than 128 KiB), because latency matters most for small files. S3 Files streams file reads directly from your S3 bucket in two cases: when the file's data is not stored in the file system's high-performance storage, and for large reads (1 MiB or greater), even when the data also resides on the high-performance storage. The S3 bucket is optimized for high throughput, while the file system's high-performance storage layer is optimized for low-latency access. S3 Files asynchronously imports data for small files (smaller than 128 KiB by default) to the high-performance storage for low-latency access on subsequent reads. Recently modified data that has not yet been synchronized to S3 is always served from the file system. For more information, see [Customizing synchronization for S3 Files](s3-files-synchronization-customizing.md).

Data that has not been read within a configurable window (1 to 365 days, default 30) automatically expires from the high-performance storage. Your authoritative data always remains in S3, and background synchronization keeps the file system and bucket consistent in both directions. For more information, see [Understanding how synchronization works](s3-files-synchronization.md).

You can mount your S3 file systems on compute resources running on Amazon EC2, AWS Lambda, Amazon EKS, and Amazon ECS. For more information, see [Mounting your S3 buckets on compute resources](s3-files-attach-compute.md).

![\[Diagram showing the data flow between an S3 bucket, S3 file system, and compute resources.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/S3Files_Compute_dataflow.png)


## Are you a first-time user of S3 Files?
<a name="s3-files-first-time"></a>

If you are a first-time user of S3 Files, create your first S3 file system using the S3 Console or the AWS CLI by following the [Tutorial: Getting started with S3 Files](s3-files-getting-started.md).

## Key concepts
<a name="s3-files-key-concepts"></a>

The following terms are used throughout S3 Files documentation:

**File system**  
A shared file system linked to your S3 bucket.

**High-performance storage**  
The low-latency storage layer within your file system where actively used file data and metadata reside. S3 Files automatically manages this storage, copying data onto it when you access files and removing data that has not been read within a configurable expiration window. You pay a storage rate for data residing on the high-performance storage.

**Synchronization**  
The process by which S3 Files keeps your active working dataset and your changes consistent between your file system and S3 bucket. Importing copies data from your S3 bucket onto the file system. Exporting copies changes you make through the file system back to your S3 bucket. S3 Files performs synchronization automatically in both directions.

**Mount target**  
A mount target provides network access to your file system within a single Availability Zone in your VPC. You need at least one mount target to access your file system from compute resources, and you can create a maximum of one mount target per Availability Zone.

**Access point**  
Access points are application-specific entry points to a file system that simplify managing data access at scale for shared datasets. You can use access points to enforce user identities and permissions for all file system requests that are made through the access point. When you create a file system using the AWS Management Console, S3 Files automatically creates one access point for the file system.

## Features
<a name="s3-files-performance-storage"></a>

**High performance without full data replication**  
S3 Files delivers low-latency file access by copying only your active working set onto the file system's high-performance storage, not your entire dataset. Small, frequently accessed files are served from the high-performance storage at sub-millisecond to single-digit millisecond latencies. Large reads are streamed directly from S3 at up to terabytes per second of aggregate throughput. This means you get file system performance for interactive workloads and S3 throughput for streaming workloads, without paying to store or import data that you are not using or that doesn't benefit from low latency. For more information, see [Performance specifications](s3-files-performance.md).

**Intelligent read routing**  
S3 Files automatically routes read requests to the storage layer (S3 file system or S3 bucket) best suited for them, while maintaining full file system semantics including consistency, locking, and POSIX permissions. Small, random reads of actively used files are served from the high-performance storage for low latency. Large sequential reads and reads of data not on the file system are served directly from your S3 bucket for high throughput, with no file system data charge.

**Automatic synchronization**  
S3 Files automatically keeps your file system and S3 bucket consistent in both directions. Changes you make through the file system are copied back to your S3 bucket, and changes made directly to your S3 bucket are reflected in your file system's view. You can customize synchronization behavior, including what data is imported and how long it stays on the file system. For more information, see [Understanding how synchronization works](s3-files-synchronization.md).

**Scalable performance**  
S3 Files automatically scales throughput and IOPS to match your workload activity. You do not need to provision or manage performance capacity and you pay only for what you use.

**Regional durability**  
Data written to the high-performance storage layer has the same durability as Amazon S3. S3 Files stores data redundantly across multiple geographically separated Availability Zones within the same AWS Region, providing high durability and availability for your data.

**Encryption**  
S3 Files encrypts all data in transit using TLS and all data at rest using AWS KMS keys. You can use AWS owned keys (default) or your own customer managed keys. For more information, see [Encryption](s3-files-encryption.md).

**File system semantics**  
S3 Files supports the NFS version 4.2 and 4.1 protocols. It provides file-system-access semantics, such as read-after-write data consistency, file locking, and POSIX permissions.

## How are you billed for S3 Files?
<a name="s3-files-billing"></a>

You pay a storage rate for the fraction of your active data resident on the high-performance storage, and you pay file system access charges for reading from and writing to your file system's high-performance storage. Reads that S3 Files streams directly from your S3 bucket (reads of data not stored on the high-performance storage, and large reads of 1 MiB or greater) incur only standard S3 GET request costs, with no file system access charge. File system access charges also apply to synchronization operations: importing data onto the file system incurs write charges, and exporting changes back to S3 incurs read charges. For more information, see [How S3 Files is metered](s3-files-metering.md). For current pricing, see the [S3 Files pricing page](https://aws.amazon.com/s3/pricing/).

**Topics**
+ [What is S3 Files?](#s3-files-what-is)
+ [How does S3 Files work?](#s3-files-how-it-works)
+ [Are you a first-time user of S3 Files?](#s3-files-first-time)
+ [Key concepts](#s3-files-key-concepts)
+ [Features](#s3-files-performance-storage)
+ [How are you billed for S3 Files?](#s3-files-billing)
+ [Prerequisites for S3 Files](s3-files-prereq-policies.md)
+ [Tutorial: Getting started with S3 Files](s3-files-getting-started.md)
+ [Mounting your S3 buckets on compute resources](s3-files-attach-compute.md)
+ [Creating and managing S3 Files resources](s3-files-resources.md)
+ [Understanding how synchronization works](s3-files-synchronization.md)
+ [Monitoring and auditing S3 Files](s3-files-monitoring-logging.md)
+ [Performance specifications](s3-files-performance.md)
+ [Security for S3 Files](s3-files-security.md)
+ [How S3 Files is metered](s3-files-metering.md)
+ [S3 Files best practices](s3-files-best-practices.md)
+ [Unsupported features, limits, and quotas](s3-files-quotas.md)
+ [Troubleshooting S3 Files](s3-files-troubleshooting.md)

# Prerequisites for S3 Files
<a name="s3-files-prereq-policies"></a>

Before you begin using S3 Files, make sure that you have completed the following prerequisites.

## AWS account and compute setup
<a name="s3-files-prereq-account-setup"></a>
+ You have an AWS account.
+ You have a compute resource and an S3 general purpose bucket in your desired AWS Region where you want to create your file system. For more information, see [Creating a general purpose bucket](create-bucket-overview.md).
+ Your S3 bucket has versioning enabled. S3 Files requires S3 Versioning to synchronize changes between your file system and your S3 bucket. For more information, see [Enabling versioning on buckets](manage-versioning-examples.md).
+ Your S3 bucket must use one of the following encryption types: Server-side encryption with Amazon S3 managed keys (SSE-S3) or Server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS).
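For example, you can enable versioning and verify it from the AWS CLI before creating your file system. The bucket name below is a placeholder; substitute your own:

```shell
# Enable S3 Versioning on the bucket (required by S3 Files).
# "amzn-s3-demo-bucket" is a placeholder; use your bucket name.
aws s3api put-bucket-versioning \
    --bucket amzn-s3-demo-bucket \
    --versioning-configuration Status=Enabled

# Confirm that versioning is now enabled.
aws s3api get-bucket-versioning --bucket amzn-s3-demo-bucket
```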

## S3 Files client
<a name="s3-files-prereq-client"></a>

To use S3 Files with Amazon EC2, you must install the client `amazon-efs-utils`, a shared open-source collection of tools for Amazon EFS and Amazon S3 Files. To work with S3 Files, you need `amazon-efs-utils` version 3.0.0 or above. The client includes a mount helper program that simplifies mounting S3 file systems and enables Amazon CloudWatch metrics for monitoring your file system's mount status.

### Step 1: Install the client
<a name="s3-files-prereq-client-install"></a>
+ Access the terminal for your Amazon EC2 instance through Secure Shell (SSH), and log in with the appropriate user name. For more information, see [Connect to your EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-to-linux-instance.html) in the *Amazon Elastic Compute Cloud User Guide*.
+ For those using Amazon Linux, do the following to install efs-utils from Amazon's repositories:

  ```
  sudo yum -y install amazon-efs-utils
  ```
+ If you use other supported Linux distributions, you can do the following:

  ```
  curl https://amazon-efs-utils.aws.com/efs-utils-installer.sh | sudo sh -s -- --install
  ```
+ For other Linux distributions, see [On other Linux distributions](https://github.com/aws/efs-utils/?tab=readme-ov-file#on-other-linux-distributions) in the amazon-efs-utils README on GitHub.

### Step 2: Install botocore
<a name="s3-files-prereq-client-botocore"></a>

The `amazon-efs-utils` client uses botocore to interact with other AWS services. For example, you need to install botocore to use Amazon CloudWatch to monitor your file system. For instructions on installing and upgrading botocore, see [Installing botocore](https://github.com/aws/efs-utils#install-botocore) in the amazon-efs-utils README on GitHub.

### Enabling FIPS mode for S3 Files
<a name="s3-files-prereq-client-fips"></a>

If you need to comply with Federal Information Processing Standards (FIPS), you must enable FIPS mode in the client. Enabling FIPS mode involves modifying the `s3files-utils.conf` file on the operating system.

Follow these steps to enable FIPS mode in the client for S3 Files:

1. Using your text editor of choice, open the `/etc/amazon/efs/s3files-utils.conf` file.

1. Find the line containing the following text:

   ```
   fips_mode_enabled = false
   ```

1. Change the text to the following:

   ```
   fips_mode_enabled = true
   ```

1. Save your changes.
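The steps above can also be scripted. A minimal sketch using `sed`, assuming the default file contents shown above:

```shell
# Flip the FIPS flag in the S3 Files client configuration file.
sudo sed -i 's/^fips_mode_enabled = false$/fips_mode_enabled = true/' \
    /etc/amazon/efs/s3files-utils.conf

# Confirm the change took effect.
grep '^fips_mode_enabled' /etc/amazon/efs/s3files-utils.conf
```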

## IAM roles and policies
<a name="s3-files-prereq-iam"></a>

To use S3 Files, you must configure IAM roles and attached policies for two purposes:
+ Accessing your bucket from the file system
+ Attaching your file system to AWS compute resources

### IAM role for accessing your bucket from the file system
<a name="s3-files-prereq-iam-creation-role"></a>

When you create an S3 file system, you must specify an IAM role that S3 Files assumes to read from and write to your S3 bucket. This role allows S3 Files to synchronize changes between your file system and your S3 bucket. The role also grants permissions to manage Amazon EventBridge rules that S3 Files uses to detect changes in your S3 bucket and trigger synchronization. You must also make sure that the bucket policies of your source bucket don't deny access from your compute resource.

**Note**  
When you create a file system using the AWS Management Console, S3 Files automatically creates this IAM role with the required permissions.

This IAM role requires the following:
+ An inline policy as follows:

  ```
  {
    "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "S3BucketPermissions",
              "Effect": "Allow",
              "Action": [
                  "s3:ListBucket",
                  "s3:ListBucketVersions"
              ],
              "Resource": "arn:aws:s3:::bucket",
              "Condition": {
                  "StringEquals": {
                      "aws:ResourceAccount": "accountId"
                  }
              }
          },
          {
              "Sid": "S3ObjectPermissions",
              "Effect": "Allow",
              "Action": [
                  "s3:AbortMultipartUpload",
                  "s3:DeleteObject*",
                  "s3:GetObject*",
                  "s3:List*",
                  "s3:PutObject*"
              ],
              "Resource": "arn:aws:s3:::bucket/*",
              "Condition": {
                  "StringEquals": {
                      "aws:ResourceAccount": "accountId"
                  }
              }
          },
          {
              "Sid": "UseKmsKeyWithS3Files",
              "Effect": "Allow",
              "Action": [
                  "kms:GenerateDataKey",
                  "kms:Encrypt",
                  "kms:Decrypt",
                  "kms:ReEncryptFrom",
                  "kms:ReEncryptTo"
              ],
              "Condition": {
                  "StringLike": {
                    "kms:ViaService": "s3.region.amazonaws.com",
                      "kms:EncryptionContext:aws:s3:arn": [
                          "arn:aws:s3:::bucket",
                          "arn:aws:s3:::bucket/*"
                      ]
                  }
              },
              "Resource": "arn:aws:kms:region:accountId:*"
          },
          {
              "Sid": "EventBridgeManage",
              "Effect": "Allow",
              "Action": [
                  "events:DeleteRule",
                  "events:DisableRule",
                  "events:EnableRule",
                  "events:PutRule",
                  "events:PutTargets",
                  "events:RemoveTargets"
              ],
              "Condition": {
                  "StringEquals": {
                    "events:ManagedBy": "elasticfilesystem.amazonaws.com"
                  }
              },
              "Resource": [
                  "arn:aws:events:*:*:rule/DO-NOT-DELETE-S3-Files*"
              ]
          },
          {
              "Sid": "EventBridgeRead",
              "Effect": "Allow",
              "Action": [
                  "events:DescribeRule",
                  "events:ListRuleNamesByTarget",
                  "events:ListRules",
                  "events:ListTargetsByRule"
              ],
              "Resource": [
                  "arn:aws:events:*:*:rule/*"
              ]
          }
      ]
  }
  ```

  Replace the placeholder values (*bucket*, *region*, and *accountId*) with your own values.
+ A trust policy that allows S3 Files to assume the IAM role. Add the following trust policy to the IAM role to allow the S3 Files service to assume it. Replace *accountId* and *region* with your values.

  ```
  {
    "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "AllowS3FilesAssumeRole",
              "Effect": "Allow",
              "Principal": {
                "Service": "elasticfilesystem.amazonaws.com"
              },
              "Action": "sts:AssumeRole",
              "Condition": {
                  "StringEquals": {
                      "aws:SourceAccount": "accountId"
                  },
                  "ArnLike": {
                      "aws:SourceArn": "arn:aws:s3files:region:accountId:file-system/*"
                  }
              }
          }
      ]
  }
  ```
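If you create the role with the AWS CLI instead of the console, you can save the two policies above to local files and run the following. The role name and file names are examples, not requirements:

```shell
# Create the role using the trust policy saved as trust-policy.json.
aws iam create-role \
    --role-name S3FilesBucketAccessRole \
    --assume-role-policy-document file://trust-policy.json

# Attach the inline permissions policy saved as bucket-access.json.
aws iam put-role-policy \
    --role-name S3FilesBucketAccessRole \
    --policy-name S3FilesBucketAccess \
    --policy-document file://bucket-access.json
```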

### IAM role for attaching your file system to AWS compute resources
<a name="s3-files-prereq-iam-compute-role"></a>

Your compute resources on which you mount an S3 file system must have an IAM role attached (for example, an EC2 instance profile) with policies that allow your compute resource to interact with your S3 file system and your source S3 bucket. You must also make sure that the bucket policies of your source bucket don't deny access from your compute resource.

Add the following two policies to the IAM role attached to your compute resource:
+ **Permissions for the compute resource to connect to and interact with S3 file systems**

  The IAM role must include permissions for the mount helper to connect to and interact with S3 file systems. To grant the compute resource full read and write access to your S3 file system, attach the `AmazonS3FilesClientFullAccess` managed policy; for read-only access, attach `AmazonS3FilesClientReadOnlyAccess`. You can also attach the `AmazonElasticFileSystemUtils` managed policy if you want to enable Amazon CloudWatch monitoring. For more information and a complete list of available managed policies for S3 Files, see [AWS managed policies for Amazon S3 Files](s3-files-security-iam-awsmanpol.md). Alternatively, you can provide these permissions by adding individual IAM permissions such as `s3files:ClientMount` or `s3files:ClientWrite` (not required for read-only connections) to the IAM role of your compute resource.
+ **An inline policy that grants the compute resource read access to S3 objects**

  Add the following inline policy to the IAM role. This policy grants the compute resource permissions to directly read objects from the linked S3 bucket in the same account to optimize read performance. Replace *bucket* with your S3 bucket name or bucket name with prefix.

  ```
  {
    "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "S3ObjectReadAccess",
              "Effect": "Allow",
              "Action": [
                  "s3:GetObject",
                  "s3:GetObjectVersion"
              ],
              "Resource": "arn:aws:s3:::bucket/*"
          },
          {
              "Sid": "S3BucketListAccess",
              "Effect": "Allow",
              "Action": "s3:ListBucket",
              "Resource": "arn:aws:s3:::bucket"
          }
      ]
  }
  ```
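As a sketch, the following AWS CLI commands attach the managed policy and add the inline policy above to an instance role. The role name is an example, and the inline policy is assumed to be saved locally as `s3-read-policy.json`:

```shell
# Grant full S3 Files client access via the managed policy.
aws iam attach-role-policy \
    --role-name MyEC2S3FilesRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FilesClientFullAccess

# Add the inline policy that allows direct S3 object reads.
aws iam put-role-policy \
    --role-name MyEC2S3FilesRole \
    --policy-name S3ObjectReadAccess \
    --policy-document file://s3-read-policy.json
```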

## Security groups
<a name="s3-files-prereq-security-groups"></a>

Once your file system and mount targets are created, you must configure the right security groups to start using your file system. Security groups on both the compute resource and the mount target must allow the required traffic as shown in the table below:


| Security group | Rule type | Protocol | Port | Source/destination | 
| --- | --- | --- | --- | --- | 
| EC2 Instance | Outbound | TCP | 2049 | Mount target security group | 
| Mount Target | Inbound | TCP | 2049 | EC2 instance security group | 
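For example, you can add the inbound rule on the mount target's security group with the AWS CLI. The security group IDs below are placeholders:

```shell
# Allow NFS traffic (TCP 2049) into the mount target's security group
# from the EC2 instance's security group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0000000mounttarget \
    --protocol tcp \
    --port 2049 \
    --source-group sg-00000000instance0
```

If your instance's security group restricts outbound traffic, also add a matching outbound rule for TCP 2049; by default, security groups allow all outbound traffic.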

# Tutorial: Getting started with S3 Files
<a name="s3-files-getting-started"></a>

In this tutorial, you create an S3 file system and mount it on an EC2 instance. You then test basic file operations. You can use either the S3 console or the AWS CLI to get started with S3 Files.

## Getting started with S3 Files using the AWS Console
<a name="s3-files-getting-started-console"></a>

The S3 Files workflow in the S3 console consists of the following steps:
+ Create your S3 file system
+ Mount the file system on your EC2 instance and run file system operations

### Prerequisites
<a name="s3-files-getting-started-console-prereqs"></a>

Before getting started, make sure you have the following:
+ You have completed the [AWS account and compute setup](s3-files-prereq-policies.md#s3-files-prereq-account-setup).
+ You are set up with Amazon EC2 and are familiar with launching EC2 instances. For more information, see [Get started with Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html) in the *Amazon EC2 User Guide*. For this tutorial, use the default VPC for your EC2 instance.
+ You have an [IAM role for attaching your file system to AWS compute resources](s3-files-prereq-policies.md#s3-files-prereq-iam-compute-role) attached to your EC2 instance so that it can interact with your S3 file system and your S3 bucket.

### Step 1: Create your S3 file system
<a name="s3-files-getting-started-console-step1"></a>
+ Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).
+ In the navigation bar at the top of the page, verify that you are in the AWS Region where your EC2 instance and S3 bucket are located.
+ In the left navigation pane, choose **General purpose buckets**.
+ Select the bucket for which you want to create a file system.
+ Choose the **File systems** tab.
+ Choose **Create file system**.
+ Review and confirm your VPC. For this tutorial, use your default VPC.
+ Choose **Create**.

When you create a file system using the AWS Management Console, S3 Files automatically creates one mount target in every Availability Zone in your default VPC and one access point for the file system. This can take a few minutes. Your file system will become available for the next step once all the resources are created.

### Step 2: Mount the file system on your EC2 instance
<a name="s3-files-getting-started-console-step2"></a>
+ On the file system **Overview** page, choose **Attach** under **Attach to an EC2 instance**. This will open a new page to mount your file system on an EC2 instance.
+ Select your desired EC2 instance from the dropdown **Available EC2 instances**.
+ Enter a path on your EC2 instance where you want to mount the file system. For example, `/mnt/s3files/`.
+ Make sure you have configured the right [Security groups](s3-files-prereq-policies.md#s3-files-prereq-security-groups) on your EC2 instance and the mount target to allow the required traffic to flow.
+ Make sure you have the right IAM role with required permissions attached to your EC2 instance so that it can interact with your S3 file system and your S3 bucket. For more information, see [IAM role for attaching your file system to AWS compute resources](s3-files-prereq-policies.md#s3-files-prereq-iam-compute-role). For this tutorial, you can consider giving the client full access by adding the managed policy `AmazonS3FilesClientFullAccess` to EC2 instance's IAM role.
+ Follow the attach instructions displayed on the page to open CloudShell, mount your file system, and run basic file system operations.

## Getting started with S3 Files using the AWS CLI
<a name="s3-files-getting-started-cli"></a>

The S3 Files workflow with the AWS CLI consists of the following steps:

1. Create your file system.

1. Create mount targets for your file system.

1. Mount the file system on your EC2 instance using a mount target.

1. Test file operations such as listing a directory, writing text to a file, reading a file, and copying a file. Then verify that your changes reflect in your S3 bucket.

### Prerequisites
<a name="s3-files-getting-started-cli-prereqs"></a>

Before getting started, make sure you have the following:
+ You have installed and configured the AWS CLI. For more information, see [Installing or updating to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
+ You have completed all the prerequisites described in [Prerequisites for S3 Files](s3-files-prereq-policies.md).
+ You are set up with Amazon EC2 and are familiar with launching EC2 instances. You need an AWS account, a user with administrative access, a key pair, and a security group. For more information, see [Get started with Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html) in the *Amazon EC2 User Guide*.

### Step 1: Create your S3 file system
<a name="s3-files-getting-started-cli-step1"></a>

[Connect to your EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect.html). Run the `create-file-system` command to create a file system.

```
aws s3files create-file-system --region aws-region --bucket bucket-arn --role-arn iam-role
```

Replace the following with your desired values:
+ *aws-region* : The AWS Region of your bucket. For example, `us-east-1`.
+ *bucket-arn* : The ARN of your S3 bucket.
+ *iam-role* : ARN of the IAM role that S3 Files assumes to read from and write to your S3 bucket. Make sure you have added the right permissions to this IAM role. For more information, see [IAM role for accessing your bucket from the file system](s3-files-prereq-policies.md#s3-files-prereq-iam-creation-role).

After successfully creating the file system, S3 Files returns the file system description as JSON. Note down the file system ID for the next step.
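If you are scripting this step, you can capture the file system ID directly. This sketch assumes the JSON response includes a `FileSystemId` field, and uses placeholder ARNs and Region:

```shell
# Create the file system and capture its ID from the JSON response.
# The Region, bucket ARN, role ARN, and role name are placeholders.
FS_ID=$(aws s3files create-file-system \
    --region us-east-1 \
    --bucket arn:aws:s3:::amzn-s3-demo-bucket \
    --role-arn arn:aws:iam::111122223333:role/S3FilesBucketAccessRole \
    --query 'FileSystemId' --output text)

echo "Created file system: ${FS_ID}"
```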

### Step 2: Create mount targets
<a name="s3-files-getting-started-cli-step2"></a>

A mount target provides network access to your file system in your VPC within a single Availability Zone. You need a mount target to access your file system from compute resources. You can create a maximum of one mount target per Availability Zone. We recommend creating a mount target in every Availability Zone you operate in.

Run the following `create-mount-target` command to create a mount target for your file system. You must make sure the *subnet-id* is in the same VPC as your EC2 instance. You must create your mount target in the same Availability Zone as your EC2 instance.

```
aws s3files create-mount-target --region aws-region --file-system-id file-system-id --subnet-id subnet-id
```

Here, *file-system-id* is the file system ID that you received in the response to the `create-file-system` command. Mount targets can take a few minutes to create.

### Step 3: Mount the file system on your EC2 instance
<a name="s3-files-getting-started-cli-step3"></a>

Before mounting your file system, make sure you have configured the right [Security groups](s3-files-prereq-policies.md#s3-files-prereq-security-groups) on your compute resource and the mount target to allow the required traffic to flow. For more details on security groups, see the [Amazon VPC User Guide](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html).

Run the following commands to mount your file system:
+ Create a directory, `/mnt/s3files`, to use as the file system mount point:

  ```
  sudo mkdir /mnt/s3files
  ```
+ Mount the file system:

  ```
  sudo mount -t s3files file-system-id:/ /mnt/s3files
  ```

To view your file system's details, run the following:

```
aws s3files get-file-system --region aws-region --file-system-id file-system-id
```
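After mounting, you can confirm that the file system is attached on the instance itself:

```shell
# Show the mount and its file system type.
mount | grep s3files

# Show capacity and usage for the mount point.
df -h /mnt/s3files
```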

### Step 4: Test file operations
<a name="s3-files-getting-started-cli-step4"></a>

Test basic file operations on your mounted file system as follows:
+ Change to the directory you mounted:

  ```
  cd /mnt/s3files
  ```
+ You can list the contents of your directory to check that the contents of your source bucket or prefix got imported. Synchronization typically occurs within seconds, but may take longer, especially for the first file. If your bucket is empty, the command below will also return an empty result.

  ```
  ls
  ```
+ You can also test other file operations:
  + Create a file:

    ```
    echo "Hello, S3 Files!" > test.txt
    ```
  + Read the file:

    ```
    cat test.txt
    ```
  + Create a directory:

    ```
    mkdir test-directory
    ```
  + Copy the file to the directory:

    ```
    cp /mnt/s3files/test.txt /mnt/s3files/test-directory/
    ```

You can then go to your S3 bucket and check that the directory `test-directory` is reflected in your bucket. Note that it may take about a minute to synchronize changes back to your S3 bucket.

# Mounting your S3 buckets on compute resources
<a name="s3-files-attach-compute"></a>

You can mount an S3 file system on compute resources to access your S3 data as files. Your compute resource must run in the same Amazon Virtual Private Cloud (Amazon VPC) as the S3 file system. All compute resources communicate with the file system through mount targets on NFS port 2049.

S3 Files supports the following compute environments:
+ [Amazon Elastic Compute Cloud (Amazon EC2) instances](s3-files-mounting.md)
+ [AWS Lambda functions](s3-files-mounting-lambda.md)
+ [Amazon Elastic Kubernetes Service (Amazon EKS) clusters](s3-files-mounting-eks.md)
+ [Amazon Elastic Container Service (Amazon ECS) clusters](s3-files-mounting-ecs.md)

# Mounting S3 file systems on Amazon EC2
<a name="s3-files-mounting"></a>

To mount S3 file systems on an EC2 instance, you must use the S3 Files mount helper. The mount helper helps you mount your S3 file systems on EC2 instances running the supported distributions. When mounting a file system, the mount helper defines a new network file system type, called `s3files`, which is fully compatible with the standard `mount` command in Linux. The mount helper also supports mounting an S3 file system at instance boot time automatically by using entries in the `/etc/fstab` configuration file on EC2 Linux instances. The mount helper is part of the open-source collection of tools that is installed when you install the S3 Files client (amazon-efs-utils).
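For example, a boot-time mount entry in `/etc/fstab` might look like the following sketch. The exact mount options depend on your configuration, *file-system-id* is a placeholder, and `_netdev` tells the OS to wait for networking before mounting:

```
file-system-id:/ /mnt/s3files s3files _netdev 0 0
```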

![\[Diagram showing the data flow between an S3 bucket, S3 file system, and Amazon EC2 instance.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/S3Files_EC2_dataflow.png)


## Prerequisites to mount on EC2 instances
<a name="s3-files-mounting-prereqs"></a>
+ You have an S3 file system with at least one mount target available.
+ Your EC2 instance is in the same Availability Zone as the mount target that you will use to mount your file system.
+ An IAM instance profile is attached to the EC2 instance with the required permissions for S3 Files. For details, see [IAM role for attaching your file system to AWS compute resources](s3-files-prereq-policies.md#s3-files-prereq-iam-compute-role).
+ You have configured the required [Security groups](s3-files-prereq-policies.md#s3-files-prereq-security-groups).
+ You have installed the amazon-efs-utils package on the EC2 instance. For more information, see [S3 Files client](s3-files-prereq-policies.md#s3-files-prereq-client).

## How does the mount helper work?
<a name="s3-files-mounting-how-it-works"></a>

When you issue a mount command, the mount helper performs the following actions:
+ Retrieves IAM credentials from the EC2 instance profile.
+ Initializes the efs-proxy process to establish a TLS-encrypted connection to the mount target.
+ Starts the amazon-efs-mount-watchdog supervisor process, which monitors the health of TLS mounts. This process is started automatically the first time an S3 file system is mounted.
+ Mounts the file system at the specified mount point.

The mount helper uses TLS version 1.2 to communicate with your file system. Using TLS requires certificates, which are signed by a trusted Amazon Certificate Authority. For more information on how encryption works, see [Security for S3 Files](s3-files-security.md).

The mount helper uses the following mount options that are optimized for S3 Files:


| Option | Value | Description | 
| --- | --- | --- | 
| nfsvers | 4.2 | NFS protocol version. | 
| rsize | 1048576 | Sets the maximum number of bytes of data that the NFS client can receive for each network READ request to 1048576 (1 MB), the largest available, to avoid diminished performance. | 
| wsize | 1048576 | Sets the maximum number of bytes of data that the NFS client can send for each network WRITE request to 1048576 (1 MB), the largest available, to avoid diminished performance. | 
| hard | — | Sets the recovery behavior of the NFS client after an NFS request times out, so that NFS requests are retried indefinitely until the server replies, to ensure data integrity. | 
| timeo | 600 | Sets the timeout value that the NFS client uses to wait for a response before it retries an NFS request to 600 deciseconds (60 seconds) to avoid diminished performance. | 
| retrans | 2 | Sets the number of times the NFS client retries a request before it attempts further recovery action to 2. | 
| noresvport | — | Tells the NFS client to use a new non-privileged TCP source port when a network connection is reestablished. Using noresvport helps ensure that your file system has uninterrupted availability after a reconnection or network recovery event. | 

In addition, the mount helper automatically applies the `tls` and `iam` mount options when mounting an S3 file system, because S3 Files always mounts file systems using TLS encryption and IAM authentication. These options are required to establish a connection and cannot be disabled.
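For reference only, the options in the table correspond to an NFS mount invocation of the following shape. This is a sketch to illustrate the option syntax; in practice you use `-t s3files` and let the mount helper apply these options (plus `tls` and `iam`) for you:

```
# Illustrative only; use the mount helper (-t s3files) for real mounts.
# mount-target-dns is a placeholder for your mount target's DNS name.
sudo mount -t nfs4 \
  -o nfsvers=4.2,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
  mount-target-dns:/ /mnt/s3files
```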

## How to mount your S3 file system on an EC2 instance
<a name="s3-files-mounting-steps"></a>
+ Connect to your EC2 instance through Secure Shell (SSH) or EC2 Instance Connect in the EC2 console. For more information, see [Connect to your EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect.html).
+ Create a directory `/mnt/s3files` to use as the file system mount point:

  ```
  sudo mkdir /mnt/s3files
  ```
+ Mount your S3 file system:

  ```
  FS="{YOUR_FILE_SYSTEM_ID}"
  sudo mount -t s3files $FS:/ /mnt/s3files
  ```
+ Confirm the file system is mounted.

  ```
  df -h /mnt/s3files
  ```

  You should see a response similar to the following:

  ```
  Filesystem      Size  Used Avail Use% Mounted on
  {s3files-dns}    8.0E  129M  8.0E   1% {path/to/mount}
  ```

  You can also verify the mount and inspect the mount options by running `findmnt` on the local mount point. If the mount succeeded, the command shows the mount details, including your mount options, for that directory.

  ```
  findmnt -T /mnt/s3files
  ```

For detailed information on mount commands, visit the [GitHub documentation](https://github.com/aws/efs-utils/blob/master/README.md#mountefs).

You can now read and write S3 objects as files on your local mount path using standard file system operations. If you have objects in your S3 bucket, you can view them as files using the following command.

```
ls /mnt/s3files
```

You can monitor your file system storage, performance, client connections, and synchronization errors using [CloudWatch metrics](s3-files-monitoring-cloudwatch.md).

## How to mount your S3 file system on an EC2 instance using access points
<a name="s3-files-mounting-access-points-inline"></a>

When you mount a file system using an access point, the mount command includes the `accesspoint` mount option.

```
sudo mount -t s3files -o accesspoint=access-point-id file-system-id /mnt/s3files
```

where:
+ *access-point-id* is the ID of your access point.
+ *file-system-id* is the ID of your S3 file system.

## Automatically mounting S3 file systems when your EC2 instance starts
<a name="s3-files-mounting-auto"></a>

You can configure your EC2 instance to automatically mount an S3 file system when the instance starts or restarts by updating the `/etc/fstab` file. The `/etc/fstab` file contains information about file systems and is used by the operating system to determine which file systems to mount at boot time.

**Warning**  
Use the `_netdev` option, which identifies network file systems, when mounting your file system automatically. If `_netdev` is missing, your EC2 instance might stop responding. This is because network file systems must be initialized after the compute instance starts its networking. For more information, see [Automatic mounting fails and the instance is unresponsive](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/builtInFS-fstab-issues.html).

You can use the mount helper to configure an Amazon EC2 instance to automatically mount an S3 file system when the instance starts:
+ Update the EC2 `/etc/fstab` file with an entry for the S3 file system.
+ Attach an S3 file system when you create a new EC2 instance using the EC2 launch instance wizard.

### Updating the /etc/fstab file
<a name="s3-files-mounting-auto-fstab"></a>

Perform the following steps to update the `/etc/fstab` file on an EC2 Linux instance so that the instance uses the mount helper to automatically remount an S3 file system when the instance restarts.
+ [Connect to your EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect.html).
+ Open the `/etc/fstab` file in an editor and add the following line to the file:

  ```
  file-system-id:/ mount-directory s3files _netdev 0 0
  ```

  Where:
  + *file-system-id* is the ID of your S3 file system (for example, `fs-0123456789abcdef0`).
  + *mount-directory* is the mount point directory on your EC2 instance (for example, `/mnt/s3files`).
  + `_netdev` specifies that the file system is a network file system, ensuring the instance waits for network availability before attempting the mount.
+ Save the file and close the editor.
+ Test the fstab entry by mounting all file systems in fstab:

  ```
  sudo mount -a
  ```
+ Verify the file system is mounted:

  ```
  findmnt -T mount-directory
  ```

**Using the nofail option**

We recommend adding the `nofail` option to your fstab entry in production environments. This option allows the instance to boot even if the file system fails to mount:

```
file-system-id:/ mount-directory s3files _netdev,nofail 0 0
```

**Automatic mounting with an access point**

To automatically mount using an S3 Files access point, include the `accesspoint` option:

```
file-system-id:/ mount-directory s3files _netdev,accesspoint=access-point-id 0 0
```

**Automatic mounting with a subdirectory**

To automatically mount a specific subdirectory of your file system, specify the path:

```
file-system-id:/path/to/directory mount-directory s3files _netdev 0 0
```
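These options can be combined in a single entry. For example, a hypothetical entry that mounts a subdirectory through an access point with `nofail` (the file system ID is a placeholder, and *access-point-id* is the ID of your access point):

```
fs-0123456789abcdef0:/path/to/directory /mnt/s3files s3files _netdev,nofail,accesspoint=access-point-id 0 0
```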

### Using the EC2 launch instance wizard
<a name="s3-files-mounting-auto-wizard"></a>
+ Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).
+ Choose **Launch Instance**.
+ Launch an EC2 instance using the launch instance wizard in the AWS console. Before choosing **Launch Instance**, configure your network and add your S3 file system as shown in the following steps.
+ Make sure you select a subnet in your **Network settings**.
+ Select the default security group to make sure that your EC2 instance can access your S3 file system. You can't access your EC2 instance by Secure Shell (SSH) using this security group. To enable SSH access, you can later edit the default security group to add a rule that allows SSH, or attach a new security group that allows SSH. You can use the following settings:
  + Type: SSH
  + Protocol: TCP
  + Port Range: 22
  + Source: Anywhere 0.0.0.0/0
+ Under the **Storage** section, choose **File systems**, and then choose **S3 Files**.
+ In the file system dropdown, you see the file systems in the Availability Zone of the subnet that you selected previously in your **Network settings**. Choose the S3 file system that you want to mount. If you don't have any file systems, choose **Create a new file system**.
+ Enter a local mount path on your EC2 instance where you want to mount the file system (for example, `/mnt/s3files`).
+ A command is generated to mount the file system and add it to fstab. You can add the command to **User data**, or run it manually on your EC2 instance after it launches. Your EC2 instance is then configured to mount the S3 file system at launch and whenever it's rebooted.
+ Choose **Launch Instance**.

## Mounting S3 file systems from another VPC
<a name="s3-files-mounting-cross-vpc"></a>

When you use a VPC peering connection or transit gateway to connect VPCs, Amazon EC2 instances that are in one VPC can access S3 file systems in another VPC.

A transit gateway is a network transit hub that you can use to interconnect your VPCs and on-premises networks. For more information about using VPC transit gateways, see [Getting Started with transit gateways](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-getting-started.html) in the *Amazon VPC Transit Gateways Guide*. A VPC peering connection is a networking connection between two VPCs. This type of connection enables you to route traffic between them using private Internet Protocol version 4 (IPv4) or Internet Protocol version 6 (IPv6) addresses. You can use VPC peering to connect VPCs within the same AWS Region or between AWS Regions. For more information on VPC peering, see [What is VPC Peering?](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html) in the *Amazon VPC User Guide*.

When mounting a file system from a different VPC, you need to resolve the mount target manually. Use the IP address of the mount target in the corresponding Availability Zone as follows, replacing *mount-target-ip-address*, *file-system-id*, and *mount-directory* with your values.

```
sudo mount -t s3files -o mounttargetip=mount-target-ip-address file-system-id mount-directory
```

To ensure high availability of your file system, we recommend that you always use a mount target IP address that is in the same Availability Zone as your NFS client.

Alternatively, you can use Amazon Route 53 as your DNS service. In Route 53, you can resolve the mount target IP addresses from another VPC by creating a private hosted zone and resource record set. For more information on how to do so, see [Working with private hosted zones](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html) in the *Amazon Route 53 Developer Guide*.

For more details on mounting from another VPC, visit the [GitHub ReadMe](https://github.com/aws/efs-utils/blob/master/README.md).

## Mounting S3 file systems from a different AWS Region
<a name="s3-files-mounting-cross-region"></a>

If you are mounting your S3 file system from another VPC that is in a different AWS Region than the file system, you will need to edit the `s3files-utils.conf` file. In `/etc/amazon/efs/s3files-utils.conf`, locate the following lines:

```
#region = us-east-1
```

Uncomment the line and replace the value with the ID of the AWS Region where the file system is located, if it is not `us-east-1`.

After changing the Region in the config file, specify the mount target IP address in the mount command:

```
sudo mount -t s3files -o mounttargetip=mount-target-ip-address file-system-id mount-directory
```

## Unmounting your S3 file system
<a name="s3-files-mounting-unmount"></a>

To unmount an S3 file system connected to an EC2 instance running Linux, use the `umount` command as follows:

```
umount mount-directory
```

We recommend that you do not specify any `umount` options other than the defaults. You can verify that your S3 file system has been unmounted by running the `findmnt` command on your mount directory. If the unmount was successful, the `findmnt` command yields no output.
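For example, assuming the mount point used earlier in this topic:

```
sudo umount /mnt/s3files
# Without the --target option, findmnt prints nothing if the directory
# is no longer a mount point.
findmnt /mnt/s3files
```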

# Mounting S3 file systems on AWS Lambda functions
<a name="s3-files-mounting-lambda"></a>

While AWS Lambda functions provide an ephemeral local storage available during execution, many serverless workloads, such as machine learning inference, data processing, and content management, require access to large reference datasets, shared files, or persistent storage. By attaching an S3 file system to your Lambda function, you can easily share data across function invocations, read large reference data files, and write function output to a persistent and shared store, all through a local mount path.

![\[Diagram showing the data flow between an S3 bucket, S3 file system, and AWS Lambda function.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/S3Files_Lambda_dataflow.png)


## Prerequisites
<a name="s3-files-mounting-lambda-prereqs"></a>

Before you mount an S3 file system on a Lambda function, make sure that you have the following:
+ **File system, mount targets, and access point** — The S3 file system, at least one mount target, and one access point must be available. If you create a file system using the AWS Management Console, S3 Files automatically creates one mount target in every Availability Zone in your default VPC and one access point (UID/GID 1000/1000 and `/Lambda` as the access point scope) for the file system.
+ **Lambda function** — A Lambda function with an execution role that has access to mount the file system. See [Execution role and user permissions](https://docs.aws.amazon.com/lambda/latest/dg/configuration-filesystem-s3files.html#configuration-filesystem-s3files-permissions) in the *AWS Lambda User Guide*.
+ **VPC** — The Lambda function must be in the same VPC as your mount target. The subnets you assign to your Lambda function must be in the Availability Zone that has a mount target.
+ You have configured the required [Security groups](s3-files-prereq-policies.md#s3-files-prereq-security-groups).

## How to mount your S3 file system on a Lambda function
<a name="s3-files-mounting-lambda-steps"></a>
+ On the S3 Console, choose **File systems** in the left navigation pane.
+ Select the file system you want to mount on your Lambda function.
+ In the **Overview** tab, choose **Attach** under **Attach to a Lambda function**.
+ Select an available Lambda function from the dropdown. The list shows only functions within the same VPC and subnets where you have a mount target.
+ Specify the local mount path.
+ If you have more than one access point, select one.
+ Choose **Attach**. Your file system will now be attached the next time you invoke your Lambda function.

For more details, see [Configuring Amazon S3 Files access with AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/configuration-filesystem-s3files.html).

You can monitor your file system storage, performance, client connections, and synchronization errors using [Amazon CloudWatch](s3-files-monitoring-cloudwatch.md).

# Mounting S3 file systems on Amazon EKS
<a name="s3-files-mounting-eks"></a>

You can attach an S3 file system to an Amazon EKS cluster by using the Amazon EFS Container Storage Interface (CSI) driver, which supports both dynamic provisioning and static provisioning. This involves installing the efs-csi-driver, which is the CSI driver for both Amazon EFS and S3 Files.

![\[Diagram showing the data flow between an S3 bucket, S3 file system, and Amazon EKS cluster.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/S3Files_EKS_dataflow.png)


## Prerequisites
<a name="s3-files-mounting-eks-prereqs"></a>

Before you mount an S3 file system on an EKS cluster, make sure that you have the following:
+ You have an S3 file system that has at least one mount target available.
+ You have configured the required [Security groups](s3-files-prereq-policies.md#s3-files-prereq-security-groups).
+ Your EKS cluster must be in the same VPC as your mount target.
+ The Amazon EFS CSI driver needs AWS Identity and Access Management (IAM) permissions to connect to and interact with S3 file systems. For details, see [IAM role for attaching your file system to AWS compute resources](s3-files-prereq-policies.md#s3-files-prereq-iam-compute-role).
+ AWS suggests using EKS Pod Identities. For more information, see [Overview of setting up EKS Pod Identities](https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html).
+ For information about IAM roles for service accounts and setting up an IAM OpenID Connect (OIDC) provider for your cluster, see [Create an IAM OIDC provider for your cluster](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html).
+ The `kubectl` command line tool is installed on your device or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is 1.29, you can use `kubectl` version 1.28, 1.29, or 1.30 with it. To install or upgrade `kubectl`, see [Set up kubectl and eksctl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html).

## How to mount your S3 file system on an EKS cluster
<a name="s3-files-mounting-eks-steps"></a>

The Amazon EFS CSI driver requires IAM permissions to interact with your file system. Create an IAM role and attach the `AmazonS3FilesCSIDriverPolicy` managed policy to it. Add the EFS CSI driver to your EKS cluster and specify the IAM role to allow your CSI driver to access AWS APIs and the file system. You can use the AWS Management Console or the AWS API. For details, see [Using S3 file system storage with Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/s3files-csi.html).

You can also use S3 file systems with AWS Batch on Amazon EKS. To attach an S3 file system volume to your AWS Batch on Amazon EKS job, use Amazon EKS pods with a persistent volume claim. For more details, see the [persistentVolumeClaim](https://docs.aws.amazon.com/batch/latest/APIReference/API_EksVolume.html#Batch-Type-EksVolume-persistentVolumeClaim) section of [Register Job Definitions](https://docs.aws.amazon.com/batch/latest/APIReference/API_RegisterJobDefinition.html) and the [EKS Persistent Volume Claim](https://docs.aws.amazon.com/batch/latest/APIReference/API_EksPersistentVolumeClaim.html) page of the *AWS Batch API Reference*.

You can monitor your file system storage, performance, client connections, and synchronization errors using [Amazon CloudWatch](s3-files-monitoring-cloudwatch.md).

# Mounting S3 file systems on Amazon ECS
<a name="s3-files-mounting-ecs"></a>

You can attach an S3 file system to an Amazon ECS task definition and then deploy the task to access your S3 data from your containers.

![\[Diagram showing the data flow between an S3 bucket, S3 file system, and Amazon ECS task.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/S3Files_ECS_dataflow.png)


In Amazon ECS, S3 Files volume support is generally available for AWS Fargate and ECS Managed Instances. S3 Files volumes are not supported on the Amazon EC2 launch type. If you configure an S3 Files volume in a task definition and attempt to run it on the EC2 launch type, the task fails.

## Prerequisites
<a name="s3-files-mounting-ecs-prereqs"></a>

Before you attach an S3 file system to an ECS task, make sure that you have the following:
+ You have an S3 file system with at least one mount target in the available state.
+ The ECS task must be in the same VPC as the mount target.
+ Add the permissions to your ECS task IAM role to access S3 file systems. For details, see [IAM role for attaching your file system to AWS compute resources](s3-files-prereq-policies.md#s3-files-prereq-iam-compute-role).
+ You have configured the required [Security groups](s3-files-prereq-policies.md#s3-files-prereq-security-groups).

## How to mount your S3 file system on an ECS task
<a name="s3-files-mounting-ecs-steps"></a>
+ On the S3 Console, choose **File systems** in the left navigation pane.
+ Select the file system you want to mount.
+ In the **Overview** tab, choose **Attach** under **Attach to an ECS task**.
+ Select your desired ECS task definition from the drop down.
+ Specify the local mount path.
+ You can optionally specify an access point, a root directory, and a transit encryption port.
+ After the file system is attached in the task definition, you can start a task using that task definition in the following ways:
  + You can deploy the task as a standalone, one-time run. For details, see [Running an application as an Amazon ECS task](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/standalone-task-create.html) in the *Amazon ECS Developer Guide*.
  + You can also deploy the task definition as a service. For details, see [View service history using Amazon ECS service deployments](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-deployment.html) in the *Amazon ECS Developer Guide*.

For details, see [Using S3 file system storage with Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/s3files-volumes.html).

You can monitor your file system storage, performance, client connections, and synchronization errors using [Amazon CloudWatch](s3-files-monitoring-cloudwatch.md).

# Creating and managing S3 Files resources
<a name="s3-files-resources"></a>

This page describes how to create, configure, and manage S3 Files resources. To manage your resources using the AWS CLI, see [S3 Files API reference](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operations_Amazon_S3_Files.html).

**File systems**  
A shared file system linked to your S3 bucket. It stores a fraction of your actively used S3 data as files and directories so that your applications and users can benefit from low-latency performance. You can access your data using standard file system operations, including reading, writing, and locking files.  
+ [Creating file systems](s3-files-file-systems-creating.md)
+ [Deleting file systems](s3-files-file-systems-deleting.md)

**Mount targets**  
A mount target provides network access to your file system within a single Availability Zone in your VPC. You need at least one mount target to access your file system from compute resources, and you can create a maximum of one mount target per Availability Zone. We recommend creating one mount target in each Availability Zone you operate in so that your compute resources always have a local network path to the file system, improving both availability and latency. When you create a file system using the AWS Management Console, S3 Files automatically creates one mount target in every Availability Zone in your default VPC.  
+ [Creating mount targets](s3-files-mount-targets-creating.md)
+ [Managing mount targets](s3-files-mount-targets-managing.md)
+ [Deleting mount targets](s3-files-mount-targets-deleting.md)

**File system policies**  
A file system policy is an optional IAM resource policy that you can create for your S3 file system to control NFS client access to the file system.  
+ [Creating file system policies](s3-files-file-system-policies-creating.md)
+ [Deleting file system policies](s3-files-file-system-policies-deleting.md)

**Access points**  
Access points are application-specific entry points to a file system that simplify managing data access at scale for shared datasets. You can use access points to enforce user identities and permissions for all file system requests that are made through the access point. Additionally, access points can restrict clients to only access data within a specified root directory and its subdirectories. When you create a file system using the AWS Management Console, S3 Files automatically creates one access point for the file system.  
A file system can have a maximum of 10,000 access points unless you request an increase. For more information, see [Unsupported features, limits, and quotas](s3-files-quotas.md).  
+ [Creating access points for an S3 file system](s3-files-access-points-creating.md)
+ [Deleting access points for an S3 file system](s3-files-access-points-deleting.md)

**Tags**  
Tags are key-value pairs that you define and associate with your S3 Files resources to help organize, identify, and manage them.  
+ [Tagging S3 Files resources](s3-files-tagging.md)

## CloudFormation template
<a name="s3-files-resources-cloudformation"></a>

You can also use CloudFormation templates to create and manage S3 Files resources. See [Amazon Simple Storage Service Files (Amazon S3 Files)](https://docs.aws.amazon.com/AWSCloudFormation/latest/TemplateReference/AWS_S3Files.html) in the *AWS CloudFormation User Guide* for all available S3 Files resource types.

**Topics**
+ [CloudFormation template](#s3-files-resources-cloudformation)
+ [Creating file systems](s3-files-file-systems-creating.md)
+ [Deleting file systems](s3-files-file-systems-deleting.md)
+ [Creating mount targets](s3-files-mount-targets-creating.md)
+ [Managing mount targets](s3-files-mount-targets-managing.md)
+ [Deleting mount targets](s3-files-mount-targets-deleting.md)
+ [Creating file system policies](s3-files-file-system-policies-creating.md)
+ [Deleting file system policies](s3-files-file-system-policies-deleting.md)
+ [Creating access points for an S3 file system](s3-files-access-points-creating.md)
+ [Deleting access points for an S3 file system](s3-files-access-points-deleting.md)
+ [Tagging S3 Files resources](s3-files-tagging.md)

# Creating file systems
<a name="s3-files-file-systems-creating"></a>

You can create file systems by using the AWS Console, the AWS Command Line Interface (AWS CLI), or the Amazon S3 API for any existing or new S3 general purpose bucket. For information on creating a new bucket, see [Creating a general purpose bucket](create-bucket-overview.md).

## Required IAM permissions for creating file systems
<a name="s3-files-file-systems-creating-permissions"></a>

When you create an S3 file system, you must specify an IAM role that S3 Files assumes to read from and write to your S3 bucket. This role allows S3 Files to synchronize changes between your file system and your S3 bucket. When you create a file system using the AWS Console, S3 Files automatically creates this IAM role with the required permissions. If you are using the AWS CLI or S3 API, see [IAM role for accessing your bucket from the file system](s3-files-prereq-policies.md#s3-files-prereq-iam-creation-role).

For more information about managing permissions for API operations, see [How S3 Files works with IAM](s3-files-security-iam.md).

## Status of a file system
<a name="s3-files-file-systems-creating-status"></a>

A file system can have one of the status values described in the following table. You can retrieve the status by using the `get-file-system` command.


| File system state | Description | 
| --- | --- | 
| AVAILABLE | The file system is in a healthy state, and is reachable and available for use. | 
| CREATING | S3 Files is in the process of creating the new file system. | 
| DELETING | S3 Files is deleting the file system in response to a user-initiated delete request. | 
| DELETED | S3 Files has deleted the file system in response to a user-initiated delete request. | 
| ERROR | The file system is in a failed state and is unrecoverable. To access the file system data, restore a backup of this file system to a new file system. Check the StatusMessage field for information about the error. | 

**Note**  
S3 Files returns an error when you attempt to create a file system scoped to a prefix with a large number of objects. This error alerts you that large recursive rename or move operations may impact file system performance and increase S3 request costs, as every file requires separate copy and delete requests to your S3 bucket. If you still want to create a file system scoped to that prefix, you can add the `--AcceptBucketWarning` parameter.
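For example, a sketch of retrying the creation with the warning accepted (the bucket ARN and role ARN are placeholders; the flag name is taken from the note above):

```
# Retry creation, accepting the large-prefix warning.
aws s3files create-file-system \
  --region us-east-1 \
  --bucket arn:aws:s3:::amzn-s3-demo-bucket \
  --role-arn arn:aws:iam::111122223333:role/S3FilesAccessRole \
  --AcceptBucketWarning
```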

## Using the S3 console
<a name="s3-files-file-systems-creating-console"></a>

This section explains how to use the Amazon S3 console to create a file system for S3 Files.
+ Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).
+ In the navigation bar at the top of the page, verify you are in your desired AWS Region.
+ In the left navigation pane, choose **File systems**.
+ Select **Create file system**.
+ On the create page, choose the S3 bucket or prefix to create your file system from. You can enter the S3 URI directly (for example, `s3://bucket-name/prefix`) or choose **Browse S3** to navigate to and select your bucket or prefix.
+ Select a VPC for your file system. S3 Files selects your default VPC automatically. This is the VPC where your compute resources connect to your file system. To use a different VPC, choose one from the dropdown.
+ Select **Create** and wait for the status of your file system to become `Available`.

**Default settings on AWS Management Console**

S3 Files will create your file system with the following configuration:
+ **Encryption** — S3 Files sets the encryption configuration from the source S3 bucket and applies it to data at rest in your file system.
+ **IAM role** — S3 Files creates a new IAM role that it assumes to manage the data synchronization between your file system and bucket.
+ **Mount targets** — S3 Files automatically creates one mount target in every Availability Zone in the VPC you choose.
+ **Access point** — S3 Files creates one access point for the file system.

## Using the AWS CLI
<a name="s3-files-file-systems-creating-cli"></a>

When you're using the AWS CLI, you create these resources in order. First, you create a file system. Then, you can create mount targets and any additional optional tags for the file system by using corresponding AWS CLI commands.

The following `create-file-system` example command shows how you can use the AWS CLI to create a file system for S3 Files.

```
aws s3files create-file-system --region aws-region --bucket bucket-arn --client-token idempotency-token --role-arn iam-role-arn
```

Replace the following with your desired values:
+ *aws-region* : The AWS Region of your bucket. For example, `us-east-1`.
+ *bucket-arn* : The ARN of your S3 bucket.
+ *idempotency-token* : An idempotency token. This is optional.
+ *iam-role-arn* : The ARN of the IAM role that S3 Files assumes to read from and write to your S3 bucket. Make sure that you have added the required permissions to this IAM role. For more information, see [IAM role for accessing your bucket from the file system](s3-files-prereq-policies.md#s3-files-prereq-iam-creation-role).

After successfully creating the file system, S3 Files returns the file system description as JSON.

# Deleting file systems
<a name="s3-files-file-systems-deleting"></a>

When you delete a file system, the file system, its data, and its configuration are permanently removed. To avoid service disruption, make sure that no applications are actively using the file system before you delete it. You must first delete all mount targets and access points associated with the file system. For more information, see [Deleting mount targets](s3-files-mount-targets-deleting.md) and [Deleting access points for an S3 file system](s3-files-access-points-deleting.md).

When you delete a file system, S3 Files checks whether all changes in your file system have been synchronized to your linked S3 bucket. If there are changes that have not yet been synchronized, S3 Files returns an error and the deletion does not proceed. This ensures that all of your data is safely stored in your S3 bucket before the file system is deleted. If you want to proceed with deletion and accept that any unsynchronized changes will be lost, retry the delete request with the force delete option. In the AWS CLI, add the `--ForceDelete` flag to your delete request. In the AWS Management Console, choose the **Force** button in the error message that appears when you delete a file system that has unsynchronized changes.

## Using the S3 console
<a name="s3-files-file-systems-deleting-console"></a>

This section explains how to use the Amazon S3 console to delete a file system for S3 Files.
1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar at the top of the page, verify you are in the AWS Region of the file system that you want to delete.

1. In the left navigation pane, choose **General purpose buckets**.

1. Choose a general purpose bucket your file system is attached to.

1. Select the **File systems** tab and select the file system you wish to delete.

1. Choose **Delete**.

1. In the confirmation window, type **confirm** and choose **Delete**.

## Using the AWS CLI
<a name="s3-files-file-systems-deleting-cli"></a>

The following `delete-file-system` example command shows how you can use the AWS CLI to delete a file system for S3 Files.

```
aws s3files delete-file-system --file-system-id file-system-id
```

# Creating mount targets
<a name="s3-files-mount-targets-creating"></a>

You need a mount target to access your file system from compute resources, and you can create a maximum of one mount target per Availability Zone. We recommend creating one mount target in each Availability Zone that you operate in. When you create a file system using the S3 console, S3 Files automatically creates one mount target in every Availability Zone in your default VPC.

You can create mount targets for the file system in one VPC at a time. If you want to modify the VPC for your mount targets, you need to first delete all the existing mount targets for the file system and then create a mount target in a new VPC. If the VPC has multiple subnets in an Availability Zone, you can create a mount target in only one of those subnets. All EC2 instances in the Availability Zone can share the single mount target.

## Using the S3 console
<a name="s3-files-mount-targets-creating-console"></a>

This section explains how to use the Amazon S3 console to create a mount target for S3 Files.

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar at the top of the page, verify you are in the AWS Region of the file system for which you want to create a mount target.

1. In the left navigation pane, choose **General purpose buckets**.

1. Choose a general purpose bucket your file system is attached to.

1. Select the **File systems** tab and select your desired file system.

1. Select the **Mount targets** tab and select **Create mount targets**.

1. On the Create mount target page, your default VPC will automatically be selected. Choose the Availability Zone and Subnet ID. The VPC, Availability Zone, and Subnet ID cannot be edited after mount target creation.
**Note**  
The IP address type must match the IP type of the subnet. Additionally, the IP address type overrides the IP addressing attribute of your subnet. For example, if the IP address type is IPv4-only and the IPv6 addressing attribute is enabled for your subnet, network interfaces created in the subnet receive an IPv4 address from the range of the subnet. For more information, see [Modify the IP addressing attributes of your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/modify-subnets.html).

1. If you know the IP address where you want to place the mount target, then enter it in the IP address box that matches the IP address type. If you don't specify a value, S3 Files selects an unused IP address from the specified subnet.

1. Choose your security groups to associate with the mount target. See [Security groups](s3-files-prereq-policies.md#s3-files-prereq-security-groups) in the prerequisites to understand the security group configurations required to start using your file system.

1. Choose **Create mount target**.

## Using the AWS CLI
<a name="s3-files-mount-targets-creating-cli"></a>

The following `create-mount-target` example command shows how you can use the AWS CLI to create a mount target for S3 Files.

```
aws s3files create-mount-target --region aws-region --file-system-id file-system-id --subnet-id subnet-id
```

Mount targets can take up to 15 minutes to create.

# Managing mount targets
<a name="s3-files-mount-targets-managing"></a>

You can add or remove security groups associated with a mount target. Security groups define inbound and outbound access. When you change security groups associated with a mount target, make sure that you authorize necessary inbound and outbound access. Doing so enables your compute resource to communicate with the file system. See [Security groups](s3-files-prereq-policies.md#s3-files-prereq-security-groups) in the prerequisites to understand the security group configurations required to start using your file system.

## Using the S3 console
<a name="s3-files-mount-targets-managing-console"></a>

This section explains how to use the Amazon S3 console to add or remove security groups for a mount target in S3 Files.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar at the top of the page, verify you are in the desired AWS Region where your mount target exists.

1. In the left navigation pane, choose **General purpose buckets**.

1. Choose a general purpose bucket your file system is attached to.

1. Select the **File systems** tab and select your desired file system.

1. Select the **Mount targets** tab and select the mount target that you want to edit.

1. Choose **Edit**. You will see details of your mount target.

1. Add or remove security groups from the security group drop down.

1. Choose **Save**.

## Using the AWS CLI
<a name="s3-files-mount-targets-managing-cli"></a>

The following `update-mount-target` example command shows how you can use the AWS CLI to add or remove security groups for a mount target in S3 Files.

```
aws s3files update-mount-target --region aws-region --mount-target-id mount-target-id --security-groups security-group-ids
```

Replace *security-group-ids* with one or more security group IDs, separated by spaces.

# Deleting mount targets
<a name="s3-files-mount-targets-deleting"></a>

When you delete a mount target, the operation forcibly breaks any mounts of the file system, which might disrupt compute resources and applications using those mounts. To avoid application disruption, stop applications and unmount the file system before deleting the mount target.

You can delete mount targets for a file system by using the AWS Management Console, AWS CLI, or programmatically by using the AWS SDKs.

## Using the S3 console
<a name="s3-files-mount-targets-deleting-console"></a>

This section explains how to use the Amazon S3 console to delete a mount target for S3 Files.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar at the top of the page, verify you are in the AWS Region of the mount target that you want to delete.

1. In the left navigation pane, choose **General purpose buckets**.

1. Choose a general purpose bucket your file system is attached to.

1. Select the **File systems** tab and select your desired file system.

1. Select the **Mount targets** tab and select the mount target you wish to delete.

1. Choose **Delete**.

1. In the confirmation window, type **confirm** and choose **Delete**.

## Using the AWS CLI
<a name="s3-files-mount-targets-deleting-cli"></a>

The following `delete-mount-target` example command shows how you can use the AWS CLI to delete a mount target for S3 Files.

```
aws s3files delete-mount-target --region aws-region --mount-target-id mount-target-id
```

# Creating file system policies
<a name="s3-files-file-system-policies-creating"></a>

You can use file system policies to grant or deny permissions for NFS clients to perform operations such as mounting, writing, and root access on your file systems. A file system either has an empty (default) file system policy or exactly one explicit policy. You can update your file system policy at any time after file system creation using the AWS Management Console, AWS CLI, or AWS SDK.

You can update a file system policy by using the Amazon S3 console, the AWS CLI, programmatically with AWS SDKs, or the S3 Files API directly. These policy changes can take several minutes to take effect. S3 file system policies have a 20,000 character limit. For more information about using an S3 file system policy, supported actions, supported condition keys, and examples, see [How S3 Files works with IAM](s3-files-security-iam.md).

## Using the S3 console
<a name="s3-files-file-system-policies-creating-console"></a>

This section explains how to use the Amazon S3 console to create a file system policy for S3 Files.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar at the top of the page, verify you are in the AWS Region where your file system exists.

1. In the left navigation pane, choose **File systems**.

1. Choose your desired file system.

1. Select the **Permissions** tab and select **Edit**.

1. You can use the Policy editor to add your own file system policy.

1. After you complete editing the policy, choose **Save**.

## Using the AWS CLI
<a name="s3-files-file-system-policies-creating-cli"></a>

The following `put-file-system-policy` example command shows how you can use the AWS CLI to create a file system policy for S3 Files. The following file system policy grants only `ClientMount` (read-only) permissions to the `ReadOnly` IAM role. Replace the example AWS account ID *111122223333* with your AWS account ID.

```
aws s3files put-file-system-policy --file-system-id file-system-id --policy '{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/ReadOnly"
            },
            "Action": [
                "s3files:ClientMount"
            ]
        }
    ]
}'
```

# Deleting file system policies
<a name="s3-files-file-system-policies-deleting"></a>

You can delete a file system policy using the Amazon S3 console and the AWS CLI.

## Using the S3 console
<a name="s3-files-file-system-policies-deleting-console"></a>

This section explains how to use the Amazon S3 console to delete a file system policy for S3 Files.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar at the top of the page, verify you are in the AWS Region where your file system exists.

1. In the left navigation pane, choose **File systems**.

1. Choose your file system.

1. Select the **Permissions** tab and select **Delete**.

1. In the confirmation window, type **confirm** and choose **Delete**.

## Using the AWS CLI
<a name="s3-files-file-system-policies-deleting-cli"></a>

The following `delete-file-system-policy` example command shows how you can use the AWS CLI to delete a file system policy for S3 Files.

```
aws s3files delete-file-system-policy --file-system-id file-system-id
```

# Creating access points for an S3 file system
<a name="s3-files-access-points-creating"></a>

Access points are application-specific entry points to a file system that simplify managing data access at scale for shared datasets. You can use access points to enforce user identities and permissions for all file system requests that are made through the access point. Additionally, access points can restrict clients to only access data within a specified root directory and its subdirectories. When you create a file system using the AWS Management Console, S3 Files automatically creates one access point for the file system.

A file system can have a maximum of 10,000 access points unless you request an increase. For more information, see [Unsupported features, limits, and quotas](s3-files-quotas.md). You can create access points using the S3 console, AWS CLI, or AWS SDK.

Access points for an S3 file system cannot be edited after creation. If you want to make updates, you have to delete the existing access point and create a new one.

## Using the S3 console
<a name="s3-files-access-points-creating-console"></a>

This section explains how to use the Amazon S3 console to create an access point for an S3 file system.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar at the top of the page, verify you are in the AWS Region of the file system for which you want to create an access point.

1. In the left navigation pane, choose **File systems**.

1. Choose your desired file system.

1. Select the **Access points** tab and select **Create access point**.

1. On the Create page, enter a **Name** for the access point.

1. (Optional) Specify a root directory path for the access point. Clients using this access point will be limited to this directory and its subdirectories. By default, S3 Files assumes the root directory for the access point to be the root directory of the file system.

1. (Optional) In the **POSIX user** panel, you can specify the full POSIX identity to use to enforce user and group information for all file operations by clients that are using the access point.
   + **User ID** – Enter a numeric POSIX user ID for the user.
   + **Group ID** – Enter a numeric POSIX group ID for the user.
   + **Secondary group IDs** – Enter an optional comma-separated list of secondary group IDs.

1. (Optional) For **Root directory creation permissions**, you can specify the permissions that S3 Files uses when it creates the root directory path, if you specified a path and the directory doesn't already exist.
**Note**  
If you don't specify root directory ownership and permissions, and the root directory doesn't already exist, S3 Files doesn't create the root directory, and any attempt to mount the file system by using the access point fails.
   + **Owner user ID** – Enter the numeric POSIX user ID to use as the root directory owner.
   + **Owner group ID** – Enter the numeric POSIX group ID to use as the root directory owner group.
   + **Permissions** – Enter the Unix mode of the directory. A common configuration is 755. Ensure that the execute bit is set for the access point user so that they are able to mount.

1. (Optional) Under **Tags**, you can choose to add tags to your access point.

1. Choose **Create access point**.
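The root directory **Permissions** value uses standard Unix octal notation. As a quick local sketch (independent of S3 Files, using a temporary directory) of what mode 755 grants and why the execute bit matters for traversal:

```shell
# Sketch: octal mode 755 = rwxr-xr-x. The execute (search) bit is what lets
# a user traverse into the root directory, which mounting requires.
dir=$(mktemp -d)
chmod 755 "$dir"
stat -c '%a' "$dir"              # prints 755
[ -x "$dir" ] && echo "execute bit set"
```

Without the execute bit (for example, mode 644), directory traversal is denied even if the read bit is set.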

## Using the AWS CLI
<a name="s3-files-access-points-creating-cli"></a>

The following `create-access-point` example command shows how you can use the AWS CLI to create an access point for an S3 file system.

```
aws s3files create-access-point --file-system-id file-system-id --root-directory root-directory --posix-user posix-user
```

For example:

```
aws s3files create-access-point --file-system-id fs-abcdef0123456789a --client-token 010102020-3 \
  --root-directory "Path=/s3files/mobileapp/east,CreationInfo={OwnerUid=0,OwnerGid=11,Permissions=775}" \
  --posix-user "Uid=22,Gid=4" \
  --tags Key=Name,Value=east-users
```

**Note**  
If multiple requests to create access points on the same file system are sent in quick succession, and the file system is nearing the access points limit, you may experience a throttling response for these requests. This is to ensure that the file system does not exceed the access point quota.

# Deleting access points for an S3 file system
<a name="s3-files-access-points-deleting"></a>

When deleting an access point, make sure no applications are actively using the access point before deletion to avoid service disruption. Once deleted, the access point and its configuration are permanently removed.

## Using the S3 console
<a name="s3-files-access-points-deleting-console"></a>

This section explains how to use the Amazon S3 console to delete an access point for S3 Files.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar at the top of the page, verify you are in the AWS Region of the file system which has the access point that you want to delete.

1. In the left navigation pane, choose **General purpose buckets**.

1. Choose a general purpose bucket your file system is attached to.

1. Select the **File systems** tab and select the file system you wish to use.

1. Select the **Access points** tab and select the access point you wish to delete.

1. Choose **Delete**.

1. In the confirmation window, type **confirm** and choose **Delete**.

## Using the AWS CLI
<a name="s3-files-access-points-deleting-cli"></a>

The following `delete-access-point` example command shows how you can use the AWS CLI to delete an access point for S3 Files.

```
aws s3files delete-access-point --access-point-id access-point-id
```

# Tagging S3 Files resources
<a name="s3-files-tagging"></a>

To help you manage your S3 Files resources, you can assign your own metadata to each resource in the form of tags. With tags, you can categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This categorization is useful when you have many resources of the same type, because you can quickly identify a specific resource based on the tags that you've assigned to it. You can tag S3 file system and access point resources that already exist in your account. This topic describes tags and shows you how to create them.

## Tag restrictions
<a name="s3-files-tagging-restrictions"></a>

The following basic restrictions apply to tags:
+ Maximum number of tags per resource – 50
+ For each resource, each tag key must be unique, and each tag key can have only one value.
+ Maximum key length – 128 Unicode characters in UTF-8
+ Maximum value length – 256 Unicode characters in UTF-8
+ The allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: `+ - = . _ : / @`.
+ Tag keys and values are case-sensitive.
+ The `aws:` prefix is reserved for AWS use. If a tag has a tag key with this prefix, then you can't edit or delete the tag's key or value. Tags with the `aws:` prefix do not count against your tags per resource limit.
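As a sketch, you can check a tag against these restrictions locally before calling the tagging APIs (the sample key and value here are hypothetical):

```shell
# Sketch: validate a tag against the documented S3 Files tag restrictions
key="Environment"; value="production"
[ "${#key}" -le 128 ] && echo "key length ok"       # max 128 characters
[ "${#value}" -le 256 ] && echo "value length ok"   # max 256 characters
# allowed characters: letters, numbers, spaces, and + - = . _ : / @
printf '%s' "$key" | grep -Eq '^[[:alnum:] +=._:/@-]+$' && echo "key characters ok"
# the aws: prefix is reserved for AWS use
case "$key" in aws:*) echo "reserved prefix" ;; *) echo "prefix ok" ;; esac
```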

You can't update or delete a resource based solely on its tags; you must specify the resource identifier. For example, to delete file systems that you tagged with a tag key called `DeleteMe`, you must use the `DeleteFileSystem` action with the resource identifiers of the file system, such as the file system ID.

When you tag public or shared resources, the tags that you assign are available only to your AWS account. No other AWS account will have access to those tags. For tag-based access control to shared resources, each AWS account must assign its own set of tags to control access to the resource.

## Using the S3 console
<a name="s3-files-tagging-console"></a>

You can use the S3 Files console to manage tags on your resources.
+ Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).
+ In the navigation bar at the top of the page, verify you are in your desired AWS Region.
+ In the left navigation pane, choose **File systems**.
+ You can specify tags for a resource when you create the resource, such as an S3 file system or an access point. Or, you can add, modify, or delete tags after creation by going to the properties of the resource.

## Using the AWS CLI
<a name="s3-files-tagging-cli"></a>

If you're using the S3 Files API, the AWS CLI, or an AWS SDK, you can use the `TagResource` S3 Files API action to apply tags to existing resources. Additionally, some resource-creating actions enable you to specify tags for a resource when the resource is created, such as when you create a file system.

The AWS CLI commands for managing tags, and the equivalent S3 Files API actions, are listed in the following table.


| CLI command | Description | Equivalent API operation | 
| --- | --- | --- | 
| tag-resource | Add new tags or update existing tags | TagResource | 
| list-tags-for-resource | Retrieve existing tags | ListTagsForResource | 
| untag-resource | Delete existing tags | UntagResource | 

# Understanding how synchronization works
<a name="s3-files-synchronization"></a>

S3 Files keeps your file system and the linked S3 bucket synchronized automatically. The data you actively use is copied to the file system, so you can read and write files using standard Linux file operations at low latency. S3 Files requires S3 Versioning to be enabled on the linked S3 bucket. When you edit files on the file system, S3 Files copies your changes back to the S3 bucket as new versions of the corresponding objects, making sure the old versions are preserved. When other applications add, modify, or delete objects in your S3 bucket, S3 Files automatically reflects those changes in your file system. When a conflict occurs due to concurrent changes to the same data in both the file system and the S3 bucket, S3 Files treats the [S3 bucket as the source of truth in case of conflicts](#s3-files-sync-source-of-truth).

To optimize storage costs, S3 Files removes data you have not used recently from the file system. Your data remains durably stored in the linked S3 bucket and is fetched back onto the file system the next time you access it.

## S3 bucket is accessible through the file system
<a name="s3-files-sync-bucket-accessible"></a>

After you create an S3 file system, you can [mount your file system on compute resources](s3-files-attach-compute.md) and start accessing your S3 bucket data right away. By default, when you first access a directory by listing its contents or opening a file within it, S3 Files imports the metadata for all files in that directory, along with the data for files smaller than the import size threshold (128 KiB by default), from the S3 bucket. The first access to a directory might have higher latency, but subsequent reads and writes are significantly faster. By importing metadata up front, S3 Files enables you to browse directory contents, view file sizes, and check permissions at low latency.

For example, suppose your S3 bucket contains a prefix `data/images/` with 1,000 objects. The first time you run `ls /mnt/s3files/data/images/`, S3 Files imports metadata for all 1,000 files and asynchronously copies data for files below the import size threshold onto the file system. This initial listing may take several seconds, but subsequent commands such as `ls -la`, `stat`, or `cat` on individual files in that directory return at low latency.

For files larger than the import size threshold, S3 Files imports only metadata, while data is not copied to the file system and is instead read directly from the S3 bucket when you access it. You can adjust this threshold to better match your workload. For example, you can increase it to import more data up front for workloads that repeatedly access the same files and benefit from low-latency reads. For workloads that stream data sequentially, a lower threshold can be more cost effective, as the latency benefit of importing data up front is less meaningful when data is read sequentially in large chunks rather than in small, random reads. For more information, see [Customizing synchronization for S3 Files](s3-files-synchronization-customizing.md).
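As an illustration of the threshold logic described above (not the service's actual implementation), a file's size relative to the default 128 KiB threshold determines whether its data is imported up front; the file sizes below are hypothetical:

```shell
# Sketch: default import size threshold of 128 KiB (131072 bytes)
threshold=$((128 * 1024))
for size in 4096 65536 1048576; do        # hypothetical file sizes in bytes
  if [ "$size" -lt "$threshold" ]; then
    echo "$size: data imported to high-performance storage"
  else
    echo "$size: metadata only; data streamed from S3 on read"
  fi
done
```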

## Changes in your file system automatically reflect in your S3 bucket
<a name="s3-files-sync-changes-to-bucket"></a>

When you create, modify, or delete files in the file system, S3 Files automatically copies those changes to your S3 bucket. New files become new S3 objects, changes to existing files become new object versions, and deleted files become S3 delete markers.

POSIX permissions that you set on files and directories through the file system, such as owner (UID), group (GID), and permission bits, are stored as user-defined S3 object metadata on the corresponding S3 objects. When you change permissions using `chmod`, `chown`, or `chgrp`, S3 Files exports those changes to your S3 bucket along with any data changes. When S3 Files imports objects from your S3 bucket, it reads this metadata and applies the corresponding POSIX permissions on the file system. Objects that do not have POSIX permission metadata are assigned default permissions.

When you modify a file in the file system, S3 Files waits up to 60 seconds, aggregating any successive changes to the file in that time, before copying to your S3 bucket. This means that rapid successive writes to the same file are captured in a single S3 PUT request rather than generating a new object version for every individual change, reducing your S3 request costs and storage costs. If you continue to modify the file after S3 Files has copied your changes back to the S3 bucket, it will copy subsequent changes as needed.

For example, if an application opens a log file and appends to it 50 times over 30 seconds, S3 Files batches all 50 appends into a single S3 PUT request. If the application continues writing after the first sync, S3 Files copies the additional changes in a subsequent sync.
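The append pattern above can be sketched locally (the writes here go to a temporary file; on S3 Files, all of them would fall inside one 60-second aggregation window):

```shell
# Sketch: 50 rapid appends to one file. On S3 Files, these would be
# aggregated and copied to the bucket in a single PUT request.
log=$(mktemp)
for i in $(seq 1 50); do
  echo "entry $i" >> "$log"
done
wc -l < "$log"   # 50 lines written, but only one object version results
```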

## Changes in your S3 bucket automatically appear in your file system
<a name="s3-files-sync-changes-from-bucket"></a>

S3 Files monitors changes in your S3 bucket using S3 Event Notifications. When another application working with the S3 API adds, modifies, or deletes objects in your S3 bucket, S3 Files automatically reflects those changes in the file system for files whose data is currently stored in the file system's high-performance storage. Files whose data has been expired from the file system are not updated until the next time you access them, at which point S3 Files retrieves the latest version from the S3 bucket.

## Understanding the impact of rename and move operations
<a name="s3-files-sync-rename-move"></a>

Amazon S3 uses a flat storage structure where objects are identified by their key names. While S3 Files lets you organize your data in directories, S3 has no native concept of directories. What appears as a directory in your file system is a common prefix shared by the keys of the objects within the S3 bucket. Additionally, S3 objects are immutable and do not support atomic renames. As a result, when you rename or move a file, S3 Files must write the data to a new object with the updated key and delete the original. When you rename or move a directory, S3 Files must repeat this process for every object that shares that prefix. Therefore, when you rename or move a directory containing tens of millions of files, your S3 request costs and the synchronization time increase significantly.
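Because each renamed file requires one copy request and one delete request, the request count for a directory rename scales linearly with the number of objects under the prefix. A rough sketch of the arithmetic (the object count is hypothetical):

```shell
# Sketch: S3 request count for renaming a directory (prefix) with N objects.
# Each object needs one copy request and one delete request.
objects=1000000                 # hypothetical 1 million objects under the prefix
requests=$((objects * 2))
echo "$requests"                # 2000000 requests for the rename
```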

S3 Files returns an error when you attempt to create a file system scoped to a prefix containing so many objects that renaming it could take up to 4 hours (approximately 12 million objects). This error alerts you that large recursive rename or move operations may impact file system performance and increase S3 request costs, as every file requires separate copy and delete requests to your S3 bucket. If you still want to create a file system scoped to that prefix, you can add the `--AcceptBucketWarning` parameter.

Because S3 Files renames objects individually in the S3 bucket, both directories are visible in the bucket until the rename is fully completed. Objects written after the directory was renamed, but before that rename is fully synchronized, are not moved. To simplify data reorganization, we recommend that you do not create new objects in the S3 bucket while renaming a matching directory.

For example, if you run `mv /mnt/s3files/projects/alpha /mnt/s3files/projects/beta`, the rename completes instantly on the file system. In the S3 bucket, S3 Files copies each object to its new key (replacing the `projects/alpha/` prefix with `projects/beta/`) and deletes the original. During this process, the S3 bucket temporarily contains objects under both `projects/alpha/` and `projects/beta/`. After all objects have been moved, only `projects/beta/` remains.

## Unused data is expired from the file system to optimize storage
<a name="s3-files-sync-expiration"></a>

S3 Files optimizes storage costs by automatically removing file data that has not been read recently from the file system. Your data remains safely stored in your S3 bucket. S3 Files only removes the copy from the file system. File metadata, such as names, sizes, and permissions, is never removed from the file system so you can continue browsing your file system at low latency.

If a file in your file system has not been read for 30 days (configurable) and its changes have already been synchronized to the S3 bucket, S3 Files removes the file data from the file system. The next time you read that file, S3 Files retrieves the latest version of the corresponding object from the S3 bucket and copies it back onto the file system.

For example, suppose you process a dataset in `/mnt/s3files/data/batch-jan.parquet` in January and do not access it again. After 30 days, S3 Files removes the file data from the file system. The file still appears in directory listings with its correct size and permissions, but the data is no longer on the file system. When you read the file again in April, S3 Files retrieves it from the S3 bucket and copies it back onto the file system. The first read may have higher latency, but subsequent reads are fast.

## S3 bucket is the source of truth in case of conflicts
<a name="s3-files-sync-source-of-truth"></a>

A conflict occurs when the same file has been modified through the file system and the corresponding S3 object has also changed before S3 Files has synchronized the file system changes back to the S3 bucket. For example, you might edit a file through your mounted file system while another application uploads a new version of the corresponding object, or deletes it, directly in the linked S3 bucket.

S3 Files detects conflicts when it attempts to synchronize your file system changes back to the S3 bucket, or when it receives an S3 event notification indicating that the object has changed. Your S3 bucket serves as the long-term store for your data, so S3 Files considers the S3 bucket as the source of truth when a conflict occurs. This provides predictable consistency, ensuring that the version in your S3 bucket always takes precedence. In case of a conflict, S3 Files moves the conflicting file from its current location in your file system to a lost and found directory and imports the latest version from the linked S3 bucket into the file system.

For example, suppose you edit `/mnt/s3files/report.csv` through the file system. Before S3 Files synchronizes your changes back to the S3 bucket, another application uploads a new version of `report.csv` directly to the S3 bucket. When S3 Files detects the conflict, it moves your version of `report.csv` to the lost and found directory and replaces it with the version from the S3 bucket.

The lost and found directory is located in your file system's root directory under the name `.s3files-lost+found-file-system-id`. If you mount your file system through an access point that specifies a root directory, the lost and found directory is not visible from that mount because it is located above the access point's root directory. To access the lost and found directory, mount the file system without an access point, or use an access point that does not restrict the root directory so that the full file system scope is visible.

When S3 Files moves a file to the lost and found directory, it prepends an identifier to the file name to distinguish multiple versions of the same file that may be moved over time. Files in the lost and found directory are not copied to your S3 bucket. You can delete and copy files from this directory, but you cannot move or rename files within it or delete the directory itself. If you want to keep your file system changes instead of the latest version in the S3 bucket, copy the file from the lost and found directory back to its original path; S3 Files then copies it to your S3 bucket as a new version of the object. You can retrieve the file's original path from the extended attributes of the file in the lost and found directory. For more information, see [Troubleshooting S3 Files](s3-files-troubleshooting.md).

**Note**  
Conflicting files that S3 Files moves to the lost and found directory remain there indefinitely and count toward your file system storage costs. You should delete files from the lost and found directory to free up storage when they are no longer needed.

The default synchronization settings work well for most workloads that need low-latency, file-based access to S3 data. For more details about how to configure these parameters, see [Customizing synchronization for S3 Files](s3-files-synchronization-customizing.md).

**Topics**
+ [S3 bucket is accessible through the file system](#s3-files-sync-bucket-accessible)
+ [Changes in your file system automatically reflect in your S3 bucket](#s3-files-sync-changes-to-bucket)
+ [Changes in your S3 bucket automatically appear in your file system](#s3-files-sync-changes-from-bucket)
+ [Understanding the impact of rename and move operations](#s3-files-sync-rename-move)
+ [Unused data is expired from the file system to optimize storage](#s3-files-sync-expiration)
+ [S3 bucket is the source of truth in case of conflicts](#s3-files-sync-source-of-truth)
+ [Customizing synchronization for S3 Files](s3-files-synchronization-customizing.md)

# Customizing synchronization for S3 Files
<a name="s3-files-synchronization-customizing"></a>

S3 Files lets you control how data flows between your file system and linked S3 bucket through a synchronization configuration. The default settings balance latency and cost for most workloads, but you can tune them to match your access patterns. Importing more data up front reduces read latency at the cost of higher storage and write charges. Importing less data keeps storage costs low but means more reads are served from S3 with higher latency. Each configuration has two components: import data rules, which control what data is copied onto the file system and when, and expiration data rules, which control how long unused data stays on the file system. You can update these rules using the AWS Management Console or the PutSynchronizationConfiguration API.

## Import data rules
<a name="s3-files-sync-import-rules"></a>

Import data rules control how data is copied from your bucket to the file system. You can have a maximum of 10 import data rules per file system. Each import data rule has the following parameters:

**prefix** – The S3 prefix that the rule applies to. Specify an empty string ("") for the entire bucket (file system scope) or a specific prefix (for example, "data/ml/") within the file system. The prefix must end with a forward slash (/), unless specifying the entire bucket with "". You must include exactly one import rule for the root directory. Default: "" (entire bucket or file system scope).

**trigger** – When to import data: ON_DIRECTORY_FIRST_ACCESS or ON_FILE_ACCESS. Default: ON_DIRECTORY_FIRST_ACCESS.
+ **ON_DIRECTORY_FIRST_ACCESS** – File data is imported when you first access a directory. For example, when you first access a directory by listing its contents or opening a file within it, data is imported for all immediate child files in that directory smaller than the sizeLessThan threshold. This option is useful for workloads that require low latency when first accessing files.
+ **ON_FILE_ACCESS** – File data is imported only when a file is read for the first time. This option minimizes the data imported at the cost of higher latency on first read.

**sizeLessThan** – Maximum file size (in bytes) to automatically import. While S3 Files imports metadata for all files, it only imports data for files smaller than this threshold. Minimum: 0 bytes (no data imported, metadata will still be imported). Maximum: 52,673,613,135,872 bytes (48 TiB). Default: 131,072 bytes (128 KiB).

### Prefix matching behavior
<a name="s3-files-sync-prefix-matching"></a>

When multiple import data rules match a file, S3 Files applies the rule with the most specific prefix. For example, assume you have three rules:
+ Rule 1: prefix = "" (entire bucket), sizeLessThan = 64 KiB, trigger = ON_FILE_ACCESS
+ Rule 2: prefix = "hot/", sizeLessThan = 1 MiB, trigger = ON_DIRECTORY_FIRST_ACCESS
+ Rule 3: prefix = "hot/largeData/", sizeLessThan = 256 KiB, trigger = ON_DIRECTORY_FIRST_ACCESS

For a file at `hot/largeData/data.txt`, S3 Files applies Rule 3. For a file at `hot/data.txt`, S3 Files applies Rule 2. For a file at `cold/data.txt`, S3 Files applies Rule 1 because there is no specific rule for the `cold/` prefix.
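
The most-specific-prefix selection described above behaves like a longest-prefix match. The following sketch models it under that assumption; the `pick_rule` helper is hypothetical, not a service API:

```python
def pick_rule(rules, key):
    """Select the import rule with the longest matching prefix for an object key."""
    matches = [r for r in rules if key.startswith(r["prefix"])]
    return max(matches, key=lambda r: len(r["prefix"]))

# The three rules from the example above (sizes in bytes).
rules = [
    {"prefix": "", "sizeLessThan": 64 * 1024, "trigger": "ON_FILE_ACCESS"},
    {"prefix": "hot/", "sizeLessThan": 1024 * 1024, "trigger": "ON_DIRECTORY_FIRST_ACCESS"},
    {"prefix": "hot/largeData/", "sizeLessThan": 256 * 1024, "trigger": "ON_DIRECTORY_FIRST_ACCESS"},
]

print(pick_rule(rules, "hot/largeData/data.txt")["prefix"])  # hot/largeData/
print(pick_rule(rules, "hot/data.txt")["prefix"])            # hot/
print(pick_rule(rules, "cold/data.txt")["prefix"])           # (empty line — the catch-all rule)
```

Because every key matches the required root-directory rule (prefix `""`), the `matches` list is never empty and every file resolves to exactly one rule.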

## Expiration data rules
<a name="s3-files-sync-expiration-rules"></a>

Expiration data rules control when unused data is removed from the file system to optimize storage costs. S3 Files removes data after it has not been read for a specified duration and its changes have already been synchronized to the S3 bucket. Whenever a file is read, its expiration timer resets, extending the time that data remains in the file system. You can specify the following parameter in expiration data rules:

**daysAfterLastAccess** – Number of days after last read when data is removed from the file system. Minimum: 1 day. Maximum: 365 days. Default: 30 days.

If you have long-running workloads that frequently access the same data, consider longer expiration periods (30–90 days). For temporary data, consider shorter periods (1–7 days).
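
The expiration condition has two parts: the file has not been read for the configured number of days, and its changes are already synchronized to the S3 bucket. A sketch of that check (the `is_expirable` helper is hypothetical, not a service API):

```python
from datetime import datetime, timedelta

def is_expirable(last_read, synced_to_s3, now, days_after_last_access=30):
    """Data can be removed only when the file has not been read for the
    configured number of days AND its changes are already in the S3 bucket."""
    return synced_to_s3 and (now - last_read) >= timedelta(days=days_after_last_access)

now = datetime(2025, 4, 1)
# Not read since January and fully synchronized: eligible for removal.
print(is_expirable(datetime(2025, 1, 15), True, now))   # True
# Read two days ago: the expiration timer was reset, so data is kept.
print(is_expirable(datetime(2025, 3, 30), True, now))   # False
# Unsynchronized changes are never expired.
print(is_expirable(datetime(2025, 1, 15), False, now))  # False
```

Note that only the file data is removed when this condition holds; metadata always stays on the file system.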

## Example configurations
<a name="s3-files-sync-example-configs"></a>

**General purpose file share (default configuration)** – A team of developers and data scientists mounts an S3 file system to share code, configuration files, and small datasets. Most files are under 128 KiB and are read repeatedly throughout the day. The default configuration works well for this workload: ON_DIRECTORY_FIRST_ACCESS imports metadata and small file data when any file in a directory is first accessed, which suits directories whose files are likely to be accessed together, such as source files in a project or configuration files in a deployment. Subsequent access by any user is fast. When a user opens a large file such as a log archive, S3 Files automatically streams it directly from S3 for high throughput. The 30-day expiration window keeps actively used files on the file system without manual cleanup.

**ML training with repeated reads** – A training job reads thousands of small files (<10 MiB) repeatedly across multiple epochs. To minimize latency, set a high sizeLessThan threshold (for example, 10 MiB) with ON_DIRECTORY_FIRST_ACCESS so that file data is preloaded when the training script first lists each directory. Set a short expiration (for example, 3 days) so that data is removed from the file system promptly after the training job completes.

**Agentic workloads with broad file discovery** – An AI agent explores a large repository of documents, code, or knowledge base files to answer queries, reading many small files once as it searches for relevant context. Set sizeLessThan to 0 so that no data is imported onto the file system. The agent can browse the full directory tree at low latency to discover files, while each file read is served directly from S3. This keeps costs low for workloads that touch many files unpredictably but rarely revisit the same file, and scales naturally as you add more agents reading in parallel.

**Hot and cold prefixes** – A file system contains both frequently accessed configuration files under `config/` and infrequently accessed archive data under `archive/`. Create two import rules: one for `config/` with a high sizeLessThan and ON_DIRECTORY_FIRST_ACCESS, and one for `archive/` with sizeLessThan set to 0 and ON_FILE_ACCESS. This keeps configuration files on the file system for fast access while avoiding storage costs for archive data that is rarely read.

# Monitoring and auditing S3 Files
<a name="s3-files-monitoring-logging"></a>

S3 Files integrates with the following AWS services to help you monitor your file systems:

**Amazon CloudWatch**  
By default, S3 Files metric data is automatically sent to CloudWatch in 1-minute periods, unless noted otherwise for individual metrics. You can also watch a single metric over a time period that you specify and perform one or more actions based on the value of the metric relative to a given threshold over a number of time periods. The action is a notification sent to an Amazon Simple Notification Service (Amazon SNS) topic or an Amazon EC2 Auto Scaling policy.  
For more information, see [Monitoring S3 Files with Amazon CloudWatch](s3-files-monitoring-cloudwatch.md).

**CloudTrail**  
CloudTrail captures API calls and related events made by or on behalf of your AWS account and delivers log files to an Amazon S3 bucket that you specify. S3 Files logs management events including creating file systems, creating mount targets, and mounting file systems to compute instances. S3 Files does not log data events, such as file read and write operations.  
For more information, see [Logging with CloudTrail for S3 Files](s3-files-logging-cloudtrail.md).

**Topics**
+ [Monitoring S3 Files with Amazon CloudWatch](s3-files-monitoring-cloudwatch.md)
+ [Logging with CloudTrail for S3 Files](s3-files-logging-cloudtrail.md)

# Monitoring S3 Files with Amazon CloudWatch
<a name="s3-files-monitoring-cloudwatch"></a>

You can monitor S3 Files file systems using [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html), which collects and processes raw data from Amazon S3 Files into readable metrics. These metrics are retained for 15 months, so you can access historical information and gain a better perspective on how your file systems are performing.

S3 Files metric data is automatically sent to CloudWatch. Most metrics are sent at 1-minute intervals, while storage metrics are sent every 15 minutes. You can create [CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Alarms.html) that send notifications when a metric exceeds a threshold you specify. You can also use CloudWatch dashboards, which are customizable home pages in the CloudWatch Console that you can use to monitor your resources in a single view. For more information, see [Creating a customized CloudWatch dashboard](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create_dashboard.html).

## S3 Files CloudWatch metrics
<a name="s3-files-monitoring-cloudwatch-metrics"></a>

S3 Files metrics use the `AWS/S3Files` namespace. All metrics are reported for a single dimension `FileSystemId`. The `AWS/S3Files` namespace includes the following metrics:


| Metric | Description | Units and valid statistics | 
| --- | --- | --- | 
| StorageBytes | The total size of the file system in bytes, which includes data and metadata. This metric is emitted to CloudWatch every 15 minutes. | Units: Bytes. Minimum, Maximum, Average | 
| Inodes | The total number of inodes (such as files, directories, symlinks) in an S3 Files file system. This metric is emitted to CloudWatch every 15 minutes. | Units: Count. Sum | 
| PendingExports | The total number of files and directories pending export to the S3 bucket. | Units: Count. Sum | 
| ImportFailures | The total number of objects that failed to import to the file system after retries (for example, incorrect IAM permissions). | Units: Count. Sum | 
| ExportFailures | Total number of files and directories that failed export and will not be retried. This metric helps you identify terminal export failures so you can troubleshoot and take action (for example, update IAM permissions). | Units: Count. Sum | 
| DataReadBytes | The number of bytes read from the file system. SampleCount gives the number of data read operations. You can calculate data read throughput by viewing this metric per unit time. | Units: Bytes (Minimum, Maximum, Average, Sum), Count (SampleCount) | 
| DataWriteBytes | The number of bytes written to the file system. SampleCount gives the number of data write operations. You can calculate data write throughput by viewing this metric per unit time. | Units: Bytes (Minimum, Maximum, Average, Sum), Count (SampleCount) | 
| MetadataReadBytes | The number of metadata bytes read from the file system. SampleCount gives the number of metadata read operations. | Units: Bytes (Minimum, Maximum, Average, Sum), Count (SampleCount) | 
| MetadataWriteBytes | The number of metadata bytes written to the file system. SampleCount gives the number of metadata write operations. | Units: Bytes (Minimum, Maximum, Average, Sum), Count (SampleCount) | 
| LostAndFoundFiles | Total number of files in the lost and found directory. When a conflict occurs due to concurrent changes to the same data in both the file system and the S3 bucket, S3 Files treats the S3 bucket as the source of truth and moves the conflicting file to the lost and found directory, located in your file system's root directory under the name `.s3files-lost+found-file-system-id`. Files in the lost and found directory are not copied to your S3 bucket. | Units: Count. Sum | 
| ClientConnections | The number of active client connections to a file system. | Units: Count. Sum | 
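
As the table notes for `DataReadBytes` and `DataWriteBytes`, you can derive throughput by viewing the metric per unit time: divide each period's Sum by the period length. A small sketch of that calculation with hypothetical datapoints:

```python
def read_throughput_mib_per_s(datapoints, period_seconds):
    """Convert per-period DataReadBytes Sum values into MiB/s throughput."""
    return [sum_bytes / period_seconds / (1024 * 1024) for sum_bytes in datapoints]

# Hypothetical hourly Sum values: 7 GiB and 3.5 GiB read in consecutive hours.
hourly_sums = [7_516_192_768, 3_758_096_384]
print(read_throughput_mib_per_s(hourly_sums, 3600))
```

The same arithmetic applies to `DataWriteBytes` for write throughput, using the period you requested from CloudWatch (for example, 3600 seconds in the CLI example below).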

## Client connectivity metrics
<a name="s3-files-monitoring-cloudwatch-client-metrics"></a>

S3 Files can optimize read performance by allowing clients to read file data directly from the linked S3 bucket. To support this, the S3 Files client emits connectivity metrics that monitor whether the client can establish the necessary connections.

These metrics are emitted by the S3 Files client (amazon-efs-utils) and published to the `efs-utils/S3Files` CloudWatch namespace. Metric emission is enabled by default.


| Metric | Description | Units and valid statistics | 
| --- | --- | --- | 
| NFSConnectionAccessible | Indicates whether the client can connect to the file system through the NFS mount. A value of 1 means the connection is accessible. A value of 0 means the connection is not accessible. | Units: None. Minimum, Maximum, Average | 
| S3BucketAccessible | Indicates whether the client has the required permissions to read data from the linked S3 bucket. A value of 1 means the client has the necessary permissions. A value of 0 means the client does not have the necessary permissions. | Units: None. Minimum, Maximum, Average | 
| S3BucketReachable | Indicates whether the linked S3 bucket and prefix exist and are reachable from the client. A value of 1 means the bucket and prefix are reachable. A value of 0 means the bucket or prefix is not reachable. | Units: None. Minimum, Maximum, Average | 

## Accessing CloudWatch metrics
<a name="s3-files-monitoring-cloudwatch-access"></a>

You can view S3 Files metrics using the CloudWatch console, the AWS CLI, or the CloudWatch API.

### To view metrics using the CloudWatch console
<a name="s3-files-monitoring-cloudwatch-access-console"></a>

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Metrics**, then choose **All metrics**.

1. Choose the **S3Files** namespace.

1. Choose **File System Metrics**.

1. Select the metrics you want to view.

1. Choose the **Graphed metrics** tab to configure the graph display.

### To view metrics using the AWS CLI
<a name="s3-files-monitoring-cloudwatch-access-cli"></a>

Use the `get-metric-statistics` command. For example, to view `DataReadBytes`:

```
aws cloudwatch get-metric-statistics \
  --namespace AWS/S3Files \
  --metric-name DataReadBytes \
  --dimensions Name=FileSystemId,Value=file-system-id \
  --start-time 2025-01-20T00:00:00Z \
  --end-time 2025-01-20T23:59:59Z \
  --period 3600 \
  --statistics Sum
```

# Logging with CloudTrail for S3 Files
<a name="s3-files-logging-cloudtrail"></a>

Amazon S3 Files is integrated with CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in S3 Files. CloudTrail captures all API calls for S3 Files as events, including calls from the S3 Files console and code calls to the S3 Files API operations.

If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for S3 Files. If you don't configure a trail, you can still view the most recent events in the CloudTrail console in **Event history**. Using the information collected by CloudTrail, you can determine the request that was made to S3 Files, the IP address from which the request was made, who made the request, when it was made, and additional details.

## S3 Files information in CloudTrail
<a name="s3-files-logging-cloudtrail-info"></a>

CloudTrail is enabled on your AWS account when you create the account. When activity occurs in Amazon S3 Files, that activity is recorded in a CloudTrail event along with other AWS service events in **Event history**. You can view, search, and download recent events in your AWS account. For more information, see [Viewing events with CloudTrail Event history](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events.html) in the *CloudTrail User Guide*.

For an ongoing record of events in your AWS account, including events for S3 Files, create a trail. A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. By default, when you create a trail in the console, the trail applies to all AWS Regions. The trail logs events from all Regions in the AWS partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you can configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs.

For more information, see the following topics in the *CloudTrail User Guide*:
+ [Creating a trail for your AWS account](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html)
+ [AWS service integrations with CloudTrail logs](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-aws-service-specific-topics.html)
+ [Configuring Amazon SNS notifications for CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/configure-sns-notifications-for-cloudtrail.html)
+ [Receiving CloudTrail log files from multiple Regions](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html) and [Receiving CloudTrail log files from multiple accounts](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-receive-logs-from-multiple-accounts.html)

All [S3 Files APIs](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operations_Amazon_S3_Files.html) are logged by CloudTrail. For example, calls to the `CreateFileSystem`, `CreateMountTarget`, and `TagResource` operations generate entries in the CloudTrail log files. S3 Files also generates CloudTrail logs when you mount your file system on a compute resource.

Every event or log entry contains information about who generated the request. The identity information helps you determine the following:
+ Whether the request was made with root user or IAM user credentials.
+ Whether the request was made with temporary security credentials for a role or federated user.
+ Whether the request was made by another AWS service.

For more information, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *CloudTrail User Guide*.

S3 Files does not log data events. Data events include file read and write operations performed on the file system.

## Understanding S3 Files log file entries
<a name="s3-files-logging-cloudtrail-entries"></a>

A trail is a configuration that enables delivery of events as log files to an Amazon S3 bucket that you specify. CloudTrail log files contain one or more log entries. An event represents a single request from any source and includes information about the requested action, the date and time of the action, request parameters, and so on. CloudTrail log files aren't an ordered stack trace of the public API calls, so they don't appear in any specific order.

### Example: CreateFileSystem
<a name="s3-files-logging-cloudtrail-example-createfs"></a>

The following example shows a CloudTrail log entry that demonstrates the `CreateFileSystem` action:

```
{
    "eventVersion": "1.11",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "111122223333",
        "arn": "arn:aws:sts::111122223333:assumed-role/myRole/i-0123456789abcdef0",
        "accountId": "111122223333",
        "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
        "sessionContext": {
            "sessionIssuer": {
                "type": "Role",
                "principalId": "111122223333",
                "arn": "arn:aws:iam::111122223333:role/myRole",
                "accountId": "111122223333",
                "userName": "myRole"
            },
            "attributes": {
                "creationDate": "2026-03-20T12:58:28Z",
                "mfaAuthenticated": "false"
            },
            "ec2RoleDelivery": "2.0"
        }
    },
    "eventTime": "2026-03-20T17:43:19Z",
    "eventSource": "s3files.amazonaws.com",
    "eventName": "CreateFileSystem",
    "awsRegion": "us-west-2",
    "sourceIPAddress": "192.0.2.0",
    "userAgent": "aws-cli/2.0",
    "requestParameters": {
        "bucket": "arn:aws:s3:::amzn-s3-demo-bucket",
        "prefix": "images/",
        "clientToken": "myClientToken",
        "roleArn": "arn:aws:iam::111122223333:role/myS3FilesRole"
    },
    "responseElements": {
        "creationTime": "Mar 20, 2026, 5:43:19 PM",
        "fileSystemArn": "arn:aws:s3files:us-west-2:111122223333:file-system/fs-abcd123456789ef0",
        "fileSystemId": "fs-abcd123456789ef0",
        "bucket": "arn:aws:s3:::amzn-s3-demo-bucket",
        "prefix": "images/",
        "clientToken": "myClientToken",
        "status": "creating",
        "roleArn": "arn:aws:iam::111122223333:role/myS3FilesRole",
        "ownerId": "111122223333",
        "tags": []
    },
    "requestID": "dEXAMPLE-feb4-11e6-85f0-736EXAMPLE75",
    "eventID": "eEXAMPLE-2d32-4619-bd00-657EXAMPLEe4",
    "readOnly": false,
    "resources": [
        {
            "accountId": "111122223333",
            "type": "AWS::S3Files::FileSystem",
            "ARN": "arn:aws:s3files:us-west-2:111122223333:file-system/fs-abcd123456789ef0"
        }
    ],
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "recipientAccountId": "111122223333",
    "eventCategory": "Management",
    "tlsDetails": {
        "tlsVersion": "TLSv1.3",
        "cipherSuite": "TLS_AES_128_GCM_SHA256",
        "clientProvidedHostHeader": "s3files.us-west-2.api.aws"
    }
}
```

### Example: CreateMountTarget
<a name="s3-files-logging-cloudtrail-example-createmt"></a>

The following example shows a CloudTrail log entry for the `CreateMountTarget` action:

```
{
    "eventVersion": "1.11",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "111122223333",
        "arn": "arn:aws:sts::111122223333:assumed-role/myRole/i-0123456789abcdef0",
        "accountId": "111122223333",
        "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
        "sessionContext": {
            "sessionIssuer": {
                "type": "Role",
                "principalId": "111122223333",
                "arn": "arn:aws:iam::111122223333:role/myRole",
                "accountId": "111122223333",
                "userName": "myRole"
            },
            "attributes": {
                "creationDate": "2026-03-20T13:09:56Z",
                "mfaAuthenticated": "false"
            },
            "ec2RoleDelivery": "2.0"
        }
    },
    "eventTime": "2026-03-20T18:05:14Z",
    "eventSource": "s3files.amazonaws.com",
    "eventName": "CreateMountTarget",
    "awsRegion": "us-west-2",
    "sourceIPAddress": "192.0.2.0",
    "userAgent": "aws-cli/2.0",
    "requestParameters": {
        "fileSystemId": "fs-abcd123456789ef0",
        "subnetId": "subnet-01234567890abcdef",
        "securityGroups": [
            "sg-c16d65b6"
        ]
    },
    "responseElements": {
        "availabilityZoneId": "usw2-az2",
        "ownerId": "111122223333",
        "mountTargetId": "fsmt-1234567",
        "fileSystemId": "fs-abcd123456789ef0",
        "subnetId": "subnet-01234567890abcdef",
        "ipv4Address": "192.0.2.0",
        "ipv6Address": "2001:db8::1",
        "networkInterfaceId": "eni-0123456789abcdef0",
        "vpcId": "vpc-01234567",
        "securityGroups": [
            "sg-c16d65b6"
        ],
        "status": "creating"
    },
    "requestID": "dEXAMPLE-feb4-11e6-85f0-736EXAMPLE75",
    "eventID": "eEXAMPLE-2d32-4619-bd00-657EXAMPLEe4",
    "readOnly": false,
    "resources": [
        {
            "accountId": "111122223333",
            "type": "AWS::S3Files::FileSystem",
            "ARN": "arn:aws:s3files:us-west-2:111122223333:file-system/fs-abcd123456789ef0"
        },
        {
            "accountId": "111122223333",
            "type": "AWS::S3Files::MountTarget",
            "ARN": "arn:aws:s3files:us-west-2:111122223333:mount-target/fsmt-1234567"
        },
        {
            "accountId": "111122223333",
            "type": "AWS::EC2::Subnet",
            "ARN": "arn:aws:ec2:us-west-2:111122223333:subnet/subnet-01234567890abcdef"
        },
        {
            "accountId": "111122223333",
            "type": "AWS::EC2::NetworkInterface",
            "ARN": "arn:aws:ec2:us-west-2:111122223333:network-interface/eni-0123456789abcdef0"
        }
    ],
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "recipientAccountId": "111122223333",
    "eventCategory": "Management",
    "tlsDetails": {
        "tlsVersion": "TLSv1.3",
        "cipherSuite": "TLS_AES_128_GCM_SHA256",
        "clientProvidedHostHeader": "s3files.us-west-2.api.aws"
    }
}
```

# Performance specifications
<a name="s3-files-performance"></a>

S3 Files automatically scales throughput and IOPS to match your workload without requiring you to provision or manage capacity. This page describes the performance characteristics of S3 Files.

## Performance summary
<a name="s3-files-performance-summary"></a>


| Performance measure | Value | 
| --- | --- |
| Aggregate read throughput per file system | Up to terabytes per second | 
| Aggregate write throughput per file system | 1–5 GiB/s | 
| Maximum read IOPS per S3 bucket with S3 Files | No limit (attach multiple file systems to the same bucket) | 
| Maximum write IOPS per S3 bucket with S3 Files | No limit (attach multiple file systems to the same bucket) | 
| Maximum read IOPS per file system | 250,000 | 
| Maximum write IOPS per file system | 50,000 | 
| Maximum per-client read throughput | 3 GiB/s | 

## How S3 Files delivers performance
<a name="s3-files-performance-how"></a>

S3 Files serves data from two storage tiers, and automatically routes each operation to the tier best suited for it.

**High-performance storage** – The low-latency storage layer within your file system where actively used file data and metadata reside. S3 Files automatically manages this storage, copying data onto it when you access files and removing data that has not been read within a configurable expiration window. You pay a storage rate for data residing on the high-performance storage.

**Direct from S3** – S3 Files streams file reads directly from your S3 bucket in two cases: when the file's data is not stored in the file system's high-performance storage, and for large reads (1 MiB or larger), even when the data also resides on the high-performance storage. The S3 bucket is optimized for high throughput, while the file system's high-performance storage layer is optimized for low-latency access. Streaming data directly from the S3 bucket provides high throughput for sequential reads, making it well suited for analytics, media processing, and other streaming workloads. S3 Files asynchronously imports data for small files (smaller than 128 KiB by default) to the high-performance storage for low-latency access on subsequent reads.

Since S3 Files automatically applies this two-tier model, you do not have to choose between latency and throughput. Small-file workloads get file system performance. Large-file workloads get S3 throughput. Mixed workloads get both.

## Read performance
<a name="s3-files-performance-read"></a>

Read throughput scales with the number of connected compute instances and the degree of parallelism within each instance. The maximum per-client read throughput is 3 GiB/s. S3 Files supports up to terabytes per second of aggregate read throughput and up to 250,000 read IOPS per file system.

## Write performance
<a name="s3-files-performance-write"></a>

Writes go to the high-performance storage and are durable immediately. Depending on the AWS Region, S3 Files supports 1–5 GiB/s of aggregate write throughput and up to 50,000 write IOPS per file system. Write performance scales elastically with workload activity.

When you modify a file in the file system, S3 Files waits approximately 60 seconds, aggregating any successive changes to the file in that time, before copying to your S3 bucket. This means that rapid successive writes to the same file are captured in a single S3 PUT request rather than generating a new object version for every individual change, reducing your S3 request costs and storage costs. If you continue to modify the file after S3 Files has copied your changes back to the S3 bucket, it will copy subsequent changes as needed.
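The batching behavior can be modeled as a fixed window: the first change to a file opens an approximately 60-second window, all changes inside that window are aggregated into one PUT, and a change after the window opens a new one. A toy simulation (timestamps in seconds; the window constant is the documented default, the function name is illustrative):

```python
def count_puts(write_times, window=60):
    """Count S3 PUT requests for one file: the first write opens a
    batching window, later writes inside the window coalesce into the
    same PUT, and a write after the window opens a new batch."""
    puts = 0
    batch_start = None
    for t in sorted(write_times):
        if batch_start is None or t - batch_start > window:
            puts += 1          # this write opens a new batch
            batch_start = t
    return puts

# Five rapid writes within a minute -> a single object version.
assert count_puts([0, 5, 10, 20, 30]) == 1
# A write two minutes later starts a second batch -> a second PUT.
assert count_puts([0, 10, 130]) == 2
```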

## First access latency
<a name="s3-files-performance-first-access"></a>

The first time you access a directory, S3 Files imports metadata for all files in that directory and, depending on your import configuration, data for small files. As a result, your initial access takes longer than subsequent operations. Once imported, all subsequent directory listings and file accesses return at low latency.
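A toy model of that first-touch behavior: listing a directory imports metadata for every entry, and data only for entries under the small-file threshold. The 128 KiB threshold is the documented default; the function and variable names are illustrative.

```python
SMALL = 128 * 1024  # documented default small-file threshold

def first_list(objects: dict):
    """Simulate first access to a directory: metadata is imported for
    every object; data is imported only for files under the threshold.
    `objects` maps file name -> size in bytes."""
    meta = set(objects)
    data = {name for name, size in objects.items() if size < SMALL}
    return meta, data

objects = {"a.txt": 4 * 1024, "b.bin": 5 * 1024 * 1024}
meta, data = first_list(objects)
assert meta == {"a.txt", "b.bin"}   # all metadata imported
assert data == {"a.txt"}            # only the small file's data
```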

## Synchronization performance
<a name="s3-files-performance-sync"></a>

S3 Files synchronizes changes between your file system and S3 bucket in the background.

**Importing changes from S3** – When another application adds or modifies an object in your S3 bucket, S3 Files reflects the change in your file system typically within seconds. S3 Files processes up to 2,400 object changes per second per file system, with import data throughput of up to 700 megabytes per second.

**Exporting changes to S3** – When you write a file through the file system, S3 Files batches your changes for approximately 60 seconds to consolidate rapid successive writes into a single S3 object version, reducing your S3 request and storage version costs. After the batching window, S3 Files copies the file to your S3 bucket. S3 Files exports up to 800 files per second per file system, with export data throughput of up to 2,700 megabytes per second.


| Operation metric | Value | Unit | 
| --- | --- | --- | 
| Import from S3 bucket IOPS | 2,400 | objects per second per file system | 
| Import from S3 bucket throughput | 700 | megabytes per second | 
| Export to S3 bucket IOPS | 800 | files per second per file system | 
| Export to S3 bucket throughput | 2,700 | megabytes per second | 

Amazon S3 uses a flat storage structure where objects are identified by their key names. While S3 Files lets you organize your data in directories, S3 has no native concept of directories. What appears as a directory in your file system is a common prefix shared by the keys of the objects within the S3 bucket. Additionally, S3 objects are immutable and do not support atomic renames. As a result, when you rename or move a file, S3 Files must write the data to a new object with the updated key and delete the original. When you rename or move a directory, S3 Files must repeat this process for every object that shares that prefix. Therefore, when you rename or move a directory containing tens of millions of files, your S3 request costs and the synchronization time increase significantly. For example, a directory rename of 100,000 files takes a few minutes to fully reflect in the S3 bucket, though the rename is instant on the file system. For more information, see [Understanding the impact of rename and move operations](s3-files-synchronization.md#s3-files-sync-rename-move).
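Because there are no directory objects, a directory rename maps to one rewrite per object under the prefix. The following toy model, with a dict standing in for the bucket (key names and the helper are illustrative), makes the per-object cost concrete:

```python
def rename_prefix(bucket: dict, old: str, new: str) -> int:
    """Rename a 'directory' in a flat key namespace: for every key
    under `old`, write the data under the new key and delete the
    original. Returns the number of objects rewritten."""
    moved = [k for k in bucket if k.startswith(old)]
    for key in moved:
        bucket[new + key[len(old):]] = bucket.pop(key)
    return len(moved)

bucket = {"logs/2025/a.txt": b"x", "logs/2025/b.txt": b"y", "data/c.txt": b"z"}
n = rename_prefix(bucket, "logs/", "archive/")
assert n == 2                       # two objects rewritten, not one rename
assert "archive/2025/a.txt" in bucket and "logs/2025/a.txt" not in bucket
```

The cost of the operation scales with the number of objects under the prefix, which is why renaming a very large directory is expensive in S3 requests even though it appears instant on the file system.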

If your workload generates changes faster than the synchronization rate, S3 Files queues the changes and processes them in order. You can monitor the count of pending exports using the `PendingExports` CloudWatch metric. For more information, see [Monitoring S3 Files with Amazon CloudWatch](s3-files-monitoring-cloudwatch.md).
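To see when a backlog forms, compare your change-generation rate against the export ceiling: the queue grows only by the excess. A rough steady-state sketch using the documented 800 files/s export rate (the function name is ours):

```python
def pending_after(seconds: int, gen_rate: float, export_rate: float = 800.0) -> int:
    """Approximate backlog of files awaiting export after `seconds`,
    when changes arrive at `gen_rate` files/s against the documented
    800 files/s per-file-system export ceiling."""
    backlog = max(0.0, gen_rate - export_rate) * seconds
    return int(backlog)

# Writing 1,000 files/s against an 800 files/s ceiling grows the
# queue by 200 files each second.
assert pending_after(60, gen_rate=1000) == 12000
# Below the ceiling, changes export as fast as they arrive.
assert pending_after(60, gen_rate=500) == 0
```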

## Monitoring performance
<a name="s3-files-performance-monitoring"></a>

You can monitor your file system's performance using Amazon CloudWatch. S3 Files publishes metrics including `DataReadBytes`, `DataWriteBytes`, `MetadataReadBytes`, and `MetadataWriteBytes`, which you can use to track throughput and IOPS over time. For more information, see [Monitoring S3 Files with Amazon CloudWatch](s3-files-monitoring-cloudwatch.md).

# Security for S3 Files
<a name="s3-files-security"></a>

Cloud security at AWS is the highest priority. As an AWS customer, you benefit from a data center and network architecture that is built to meet the requirements of the most security-sensitive organizations.

Security is a shared responsibility between AWS and you. The [shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/) describes this as security *of* the cloud and security *in* the cloud:

**Security of the cloud**  
AWS is responsible for protecting the infrastructure that runs AWS services in the AWS Cloud. AWS also provides you with services that you can use securely. Third-party auditors regularly test and verify the effectiveness of our security as part of the [AWS Compliance Programs](https://aws.amazon.com/compliance/programs/). To learn about the compliance programs that apply to Amazon S3 Files, see [AWS Services in Scope by Compliance Program](https://aws.amazon.com/compliance/services-in-scope/).

**Security in the cloud**  
Your responsibility is determined by the AWS service that you use. You are also responsible for other factors including the sensitivity of your data, your company's requirements, and applicable laws and regulations.

This documentation helps you understand how to apply the shared responsibility model when using Amazon S3 Files.

## Data Protection
<a name="s3-files-security-data-protection"></a>

The AWS [shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/) applies to data protection in S3 Files. As described in this model, AWS is responsible for protecting the global infrastructure that runs all of the AWS Cloud. You are responsible for maintaining control over your content that is hosted on this infrastructure. You are also responsible for the security configuration and management tasks for the AWS services that you use. For more information about data privacy, see the [Data Privacy FAQ](https://aws.amazon.com/compliance/data-privacy-faq). For information about data protection in Europe, see the [AWS Shared Responsibility Model and GDPR](https://aws.amazon.com/blogs/security/the-aws-shared-responsibility-model-and-gdpr/) blog post on the *AWS Security Blog*.

For data protection purposes, we recommend that you protect AWS account credentials and set up individual users with AWS IAM Identity Center or AWS Identity and Access Management (IAM). That way, each user is given only the permissions necessary to fulfill their job duties. We also recommend that you secure your data in the following ways:
+ Use multi-factor authentication (MFA) with each account.
+ Use SSL/TLS to communicate with AWS resources. We require TLS 1.2 and recommend TLS 1.3.
+ Set up API and user activity logging with AWS CloudTrail. For information about using CloudTrail trails to capture AWS activities, see [Working with CloudTrail trails](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-trails.html) in the *AWS CloudTrail User Guide*.
+ Use AWS encryption solutions, along with all default security controls within AWS services.
+ Use advanced managed security services such as Amazon Macie, which assists in discovering and securing sensitive data that is stored in Amazon S3.
+ If you require FIPS 140-3 validated cryptographic modules when accessing AWS through a command line interface or an API, use a FIPS endpoint. For more information about the available FIPS endpoints, see [Federal Information Processing Standard (FIPS) 140-3](https://aws.amazon.com/compliance/fips/).

We strongly recommend that you never put confidential or sensitive information, such as your customers' email addresses, into tags or free-form text fields such as a **Name** field. This includes when you work with S3 Files or other AWS services using the console, API, AWS CLI, or AWS SDKs. Any data that you enter into tags or free-form text fields used for names may be used for billing or diagnostic logs. If you provide a URL to an external server, we strongly recommend that you do not include credentials information in the URL to validate your request to that server.

**Topics**
+ [Data Protection](#s3-files-security-data-protection)
+ [Encryption](s3-files-encryption.md)
+ [How S3 Files works with IAM](s3-files-security-iam.md)

# Encryption
<a name="s3-files-encryption"></a>

S3 Files provides comprehensive encryption capabilities to protect your data both at rest and in transit.

## Encryption at rest
<a name="s3-files-encryption-at-rest"></a>

Your S3 bucket is encrypted using Amazon S3's encryption mechanisms. For information on encryption of data in S3, see [Protecting data with encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingEncryption.html).

S3 Files encrypts data at rest in the S3 file system using server-side encryption. Server-side encryption is the encryption of data at its destination by the application or service that receives it. In S3 file systems, data and metadata are encrypted by default before being written to storage and are automatically decrypted when read. These processes are handled transparently by S3 Files, so you don't need to modify your applications. All data at rest in the file system is encrypted using AWS Key Management Service (KMS) keys using one of the following methods:
+ (Default) Server-side encryption with AWS owned KMS keys (SSE-KMS)
+ Server-side encryption with Customer managed KMS keys (SSE-KMS-CMK)

There are additional charges for using AWS KMS keys. For more information, see [AWS KMS key concepts](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html) in the *AWS Key Management Service Developer Guide* and [AWS KMS pricing](https://aws.amazon.com/kms/pricing/).

### Server-side encryption with AWS owned KMS keys (SSE-KMS)
<a name="s3-files-encryption-aws-owned-key"></a>

This is the default key for encrypting data at rest in your S3 file system. AWS owned keys are a collection of KMS keys that an AWS service owns and manages. S3 Files owns and manages encryption of your data and metadata at rest in your S3 file system when you use an AWS owned key. For more details on AWS owned keys, visit [AWS KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html).

### Server-side encryption with customer managed AWS KMS keys (SSE-KMS-CMK)
<a name="s3-files-encryption-sse-kms"></a>

While creating your file system, you can choose to configure an AWS Key Management Service (AWS KMS) key that you manage. When you use SSE-KMS encryption with an S3 file system the AWS KMS keys must be in the same Region as the file system.

## S3 Files key policies for AWS KMS
<a name="s3-files-encryption-key-policies"></a>

Key policies are the primary way to control access to customer managed keys. For more information on key policies, see [Key policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) in the *AWS Key Management Service Developer Guide*. The following list describes all the AWS KMS–related permissions that are supported by S3 Files for encrypting file systems at rest:

kms:Encrypt  
(Optional) Encrypts plaintext into ciphertext. This permission is included in the default key policy.

kms:Decrypt  
(Required) Decrypts ciphertext. Ciphertext is plaintext that has been previously encrypted. This permission is included in the default key policy.

kms:ReEncrypt*  
(Optional) Encrypts data on the server side with a new customer managed key, without exposing the plaintext of the data on the client side. The data is first decrypted and then re-encrypted. This permission is included in the default key policy.

kms:GenerateDataKeyWithoutPlaintext  
(Required) Returns a data encryption key encrypted under a customer managed key. This permission is included in the default key policy under kms:GenerateDataKey*.

kms:CreateGrant  
(Required) Adds a grant to a key to specify who can use the key and under what conditions. Grants are alternate permission mechanisms to key policies. For more information on grants, see [Grants in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/grants.html) in the *AWS Key Management Service Developer Guide*. This permission is included in the default key policy.

kms:DescribeKey  
(Required) Provides detailed information about the specified customer managed key. This permission is included in the default key policy.

kms:ListAliases  
(Optional) Lists all of the key aliases in the account. When you use the console to create an encrypted file system, this permission populates the Select KMS key list. We recommend using this permission to provide the best user experience. This permission is included in the default key policy.

## Key states and their effects
<a name="s3-files-encryption-key-states"></a>

The state of your KMS key directly affects access to your encrypted file system:

Enabled  
Normal operation - full read and write access to the file system.

Disabled  
File system becomes inaccessible after some time. Can be re-enabled.

Pending deletion  
File system becomes inaccessible. Deletion can be canceled during the waiting period. Note that after you cancel key deletion, you must re-enable the key to restore access.

Deleted  
File system permanently inaccessible. This action cannot be reversed.

**Warning**  
If you disable or delete the KMS key used for your file system, or revoke S3 Files access to the key, your file system will become inaccessible. This can result in data loss if you don't have backups. Always ensure you have proper backup procedures in place before making changes to encryption keys.

## Encryption in transit
<a name="s3-files-encryption-in-transit"></a>

S3 Files requires encryption of data in transit using Transport Layer Security (TLS). When you mount your file system using the mount helper, all data traveling between your client and the file system is encrypted using TLS. The mount helper initializes an efs-proxy process to establish a secure TLS connection with your file system. The mount helper also creates a process called amazon-efs-mount-watchdog, which monitors the health of mounts and is started automatically the first time an S3 file system is mounted. It ensures that each mount's efs-proxy process is running, and stops the process when the file system is unmounted. If the process is terminated unexpectedly, the watchdog process restarts it.

The following describes how TLS encryption in transit works:

1. A secure TLS connection is established between your client and the file system

1. All NFS traffic is routed through this encrypted connection

1. Data is encrypted before transmission and decrypted upon receipt

Encryption of data in transit changes your NFS client setup. When you inspect your actively mounted file systems, you see one mounted to 127.0.0.1, or localhost, as in the following example.

```
$ mount | column -t
127.0.0.1:/  on  /home/ec2-user/s3files        type  nfs4         (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=20127,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1,local_lock=none,addr=127.0.0.1)
```

You mount your file system using the mount helper, which always encrypts data in transit using TLS. Therefore, while mounting, the mount helper reconfigures your NFS client to mount to a local port.

# How S3 Files works with IAM
<a name="s3-files-security-iam"></a>

This page describes how AWS Identity and Access Management (IAM) works with S3 Files and how you can use IAM policies to control access to your file systems.

S3 Files uses IAM for two distinct types of access control:
+ **API access** — Controls who can create, manage, and delete S3 Files resources such as file systems, mount targets, and access points. You control this access using identity-based policies attached to IAM users, groups, or roles.
+ **Client access** — Controls what clients (your mounted compute resources) can do with the file system once they connect, such as reading, writing, or accessing files as the root user. You control this access using a combination of resource-based policies, identity-based policies, access points, and POSIX permissions.

Using IAM, you can permit clients to perform specific actions on a file system, including read-only, write, and root access. An "allow" permission on an action in either an IAM identity policy or a file system resource policy allows access for that action. The permission does not need to be granted in both an identity and a resource policy.

Your S3 bucket policies on your linked S3 bucket also govern access from your compute resource and your file system to your S3 bucket. You must also make sure that the bucket policies of your source bucket don't deny access from your compute resource or file system. For more details, see [Bucket policies for Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-policies.html).

## Identity-based policies
<a name="s3-files-security-iam-identity-based"></a>

Identity-based policies are JSON policies that you attach to IAM users, groups, or roles. You can provide these permissions by writing custom policies or by attaching an AWS managed policy. For more information about available managed policies for both API access and client access, see [AWS managed policies for Amazon S3 Files](s3-files-security-iam-awsmanpol.md).

S3 Files also optimizes read performance by allowing clients to read file data directly from the source S3 bucket. When you mount an S3 file system on your compute resource, you must add an inline policy to the IAM role of your compute resource that grants permissions to read objects from the specified S3 bucket. The mount helper uses these permissions to read the S3 data. For more details on this policy, see [IAM role for attaching your file system to AWS compute resources](s3-files-prereq-policies.md#s3-files-prereq-iam-compute-role).

## Resource-based policies
<a name="s3-files-security-iam-resource-based"></a>

A file system policy is an IAM resource-based policy that you attach directly to a file system to control client access. You can use file system policies to grant or deny permissions for clients to perform operations such as mounting, writing, and root access.

A file system either has an empty (default) file system policy or exactly one explicit policy. S3 file system policies have a 20,000 character limit. For information on creating and managing file system policies, see [Creating file system policies](s3-files-file-system-policies-creating.md).

## S3 Files actions for clients
<a name="s3-files-security-iam-client-actions"></a>

You can specify the following actions in a file system policy to control client access:


| Action | Description | 
| --- | --- | 
| s3files:ClientMount | Provides read-only access to a file system. | 
| s3files:ClientWrite | Provides write permissions on a file system. | 
| s3files:ClientRootAccess | Provides use of the root user when accessing a file system. | 

## S3 Files condition keys for clients
<a name="s3-files-security-iam-condition-keys"></a>

You can use the following condition keys in the `Condition` element of a file system policy to further refine access control:


| Condition key | Description | Operator | 
| --- | --- | --- | 
| s3files:AccessPointArn | ARN of the S3 Files access point to which the client is connecting. | String | 

## File system policy examples
<a name="s3-files-security-iam-policy-examples"></a>

### Example: Grant read-only access
<a name="s3-files-security-iam-policy-example-readonly"></a>

The following file system policy grants only `ClientMount` (read-only) permissions to the `ReadOnly` IAM role. Replace *111122223333* with your AWS account ID.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/ReadOnly"
            },
            "Action": [
                "s3files:ClientMount"
            ]
        }
    ]
}
```

### Example: Grant access to an S3 Files access point
<a name="s3-files-security-iam-policy-example-accesspoint"></a>

The following file system policy uses a condition element to grant a role full client access to the file system only when the client mounts through the specified access point. Replace the access point ARN and account ID with your values. For more information, see [Creating access points for an S3 file system](s3-files-access-points-creating.md).

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::555555555555:role/S3FilesAccessPointFullAccess"
            },
            "Action": [
                "s3files:Client*"
            ],
            "Condition": {
                "StringEquals": {
                    "s3files:AccessPointArn": "arn:partition:s3files:region:account-id:file-system/fs-1234567890/access-point/fsap-0987654321"
                }
            }
        }
    ]
}
```

## POSIX permissions
<a name="s3-files-security-iam-posix"></a>

After IAM authorization succeeds, S3 Files enforces standard POSIX (Unix-style) permissions at the file and directory level. POSIX permissions control access based on the user ID (UID), group ID (GID), and permission bits (read, write, execute) associated with each file and directory. Access points can enforce a specific POSIX user identity for all requests, simplifying access management for shared datasets. For more information, see [Creating access points for an S3 file system](s3-files-access-points-creating.md).
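After IAM authorization, the classic POSIX rule applies: exactly one permission class is consulted, owner if the UIDs match, else group if the GIDs match, else other. A minimal sketch of that rule (the function is ours, for illustration only):

```python
import stat

def may_write(mode: int, uid: int, gid: int, file_uid: int, file_gid: int) -> bool:
    """Classic POSIX write check: exactly one permission class applies,
    owner if UIDs match, else group if GIDs match, else other."""
    if uid == file_uid:
        return bool(mode & stat.S_IWUSR)
    if gid == file_gid:
        return bool(mode & stat.S_IWGRP)
    return bool(mode & stat.S_IWOTH)

# rw-r--r-- (0o644): the owner may write; group members and others may not.
assert may_write(0o644, uid=1000, gid=1000, file_uid=1000, file_gid=1000)
assert not may_write(0o644, uid=2000, gid=1000, file_uid=1000, file_gid=1000)
```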

## Security groups
<a name="s3-files-security-iam-security-groups"></a>

Security groups act as a network-level firewall that controls traffic between your compute resources and the file system's mount targets. For details on configuring security groups to get started on S3 Files, see [Security groups](s3-files-prereq-policies.md#s3-files-prereq-security-groups).

# AWS managed policies for Amazon S3 Files
<a name="s3-files-security-iam-awsmanpol"></a>

An AWS managed policy is a standalone policy that is created and administered by AWS. AWS managed policies are designed to provide permissions for many common use cases so that you can start assigning permissions to users, groups, and roles.

Keep in mind that AWS managed policies might not grant least-privilege permissions for your specific use cases because they're available for all AWS customers to use. We recommend that you reduce permissions further by defining [customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#customer-managed-policies) that are specific to your use cases.

You cannot change the permissions defined in AWS managed policies. If AWS updates the permissions defined in an AWS managed policy, the update affects all principal identities (users, groups, and roles) that the policy is attached to. AWS is most likely to update an AWS managed policy when a new AWS service is launched or new API operations become available for existing services.

For more information, see [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) in the *IAM User Guide*.

## AWS managed policy: AmazonS3FilesFullAccess
<a name="s3-files-security-iam-awsmanpol-amazons3filesfullaccess"></a>

You can attach the `AmazonS3FilesFullAccess` policy to your IAM identities. This policy grants full access to Amazon S3 Files, including permissions to create and manage file systems, mount targets, and access points. For more information about this policy, see [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3FilesFullAccess.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3FilesFullAccess.html) in the AWS Managed Policy Reference.

## AWS managed policy: AmazonS3FilesReadOnlyAccess
<a name="s3-files-security-iam-awsmanpol-amazons3filesreadonlyaccess"></a>

You can attach the `AmazonS3FilesReadOnlyAccess` policy to your IAM identities. This policy grants read-only access to Amazon S3 Files, including permissions to view file systems, mount targets, access points, and related configurations. For more information about this policy, see [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3FilesReadOnlyAccess.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3FilesReadOnlyAccess.html) in the AWS Managed Policy Reference.

## AWS managed policy: AmazonS3FilesClientFullAccess
<a name="s3-files-security-iam-awsmanpol-amazons3filesclientfullaccess"></a>

You can attach the `AmazonS3FilesClientFullAccess` policy to your IAM identities. This policy grants full client access to S3 Files file systems, including the ability to mount, read, write, and access files as the root user. For more information about this policy, see [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3FilesClientFullAccess.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3FilesClientFullAccess.html) in the AWS Managed Policy Reference.

## AWS managed policy: AmazonS3FilesClientReadWriteAccess
<a name="s3-files-security-iam-awsmanpol-amazons3filesclientreadwriteaccess"></a>

You can attach the `AmazonS3FilesClientReadWriteAccess` policy to your IAM identities. This policy grants read and write client access to S3 Files file systems, including the ability to mount, read, and write. This policy does not grant root access. For more information about this policy, see [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3FilesClientReadWriteAccess.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3FilesClientReadWriteAccess.html) in the AWS Managed Policy Reference.

## AWS managed policy: AmazonS3FilesClientReadOnlyAccess
<a name="s3-files-security-iam-awsmanpol-amazons3filesclientreadonlyaccess"></a>

You can attach the `AmazonS3FilesClientReadOnlyAccess` policy to your IAM identities. This policy grants read-only client access to S3 Files file systems, including the ability to mount and read from the file system. For more information about this policy, see [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3FilesClientReadOnlyAccess.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3FilesClientReadOnlyAccess.html) in the AWS Managed Policy Reference.

## AWS managed policy: AmazonS3FilesCSIDriverPolicy
<a name="s3-files-security-iam-awsmanpol-amazons3filescsidriverpolicy"></a>

You can attach the `AmazonS3FilesCSIDriverPolicy` policy to your IAM identities. This policy grants permissions for the Amazon EFS Container Storage Interface (CSI) driver to manage S3 Files access points on behalf of Amazon EKS clusters. For more information about this policy, see [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3FilesCSIDriverPolicy.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3FilesCSIDriverPolicy.html) in the AWS Managed Policy Reference.

## AWS managed policy: AmazonElasticFileSystemUtils
<a name="s3-files-security-iam-awsmanpol-amazonelasticfilesystemutils"></a>

You can attach the `AmazonElasticFileSystemUtils` policy to your IAM identities. This policy grants permissions for the S3 Files client utilities (amazon-efs-utils) to perform operations such as describing mount targets, publishing CloudWatch metrics and logs, and communicating with AWS Systems Manager. For more information about this policy, see [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonElasticFileSystemUtils.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonElasticFileSystemUtils.html) in the AWS Managed Policy Reference.

## Amazon S3 Files updates to AWS managed policies
<a name="s3-files-security-iam-awsmanpol-updates"></a>

View details about updates to AWS managed policies for Amazon S3 Files since S3 Files began tracking these changes.


| Change | Description | Date | 
| --- | --- | --- | 
|  `AmazonElasticFileSystemUtils` — Updated  |  Added Amazon CloudWatch PutMetricData permissions to support publishing client connectivity metrics.  | April 7, 2026 | 
|  `AmazonS3FilesCSIDriverPolicy` — Added  |  New managed policy that grants permissions for the Amazon EFS CSI driver to manage S3 Files access points on behalf of Amazon EKS clusters.  | April 7, 2026 | 
|  `AmazonS3FilesClientReadOnlyAccess` — Added  |  New managed policy that grants read-only client access to S3 Files file systems.  | April 7, 2026 | 
|  `AmazonS3FilesClientReadWriteAccess` — Added  |  New managed policy that grants read and write client access to S3 Files file systems.  | April 7, 2026 | 
|  `AmazonS3FilesClientFullAccess` — Added  |  New managed policy that grants full client access to S3 Files file systems, including root access.  | April 7, 2026 | 
|  `AmazonS3FilesReadOnlyAccess` — Added  |  New managed policy that grants read-only access to S3 Files resources.  | April 7, 2026 | 
|  `AmazonS3FilesFullAccess` — Added  |  New managed policy that grants full access to S3 Files resources.  | April 7, 2026 | 
|  S3 Files started tracking changes  |  Amazon S3 Files started tracking changes for its AWS managed policies.  | April 7, 2026 | 

# How S3 Files is metered
<a name="s3-files-metering"></a>

S3 Files pricing is based on two dimensions: the amount of data stored on your file system's high-performance storage, and the file system operations that your applications and the synchronization process perform. This page explains how each dimension is metered so you can understand and optimize your costs.

For current pricing, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

## How file system storage is metered
<a name="s3-files-metering-storage"></a>

When you work with files through your S3 file system, S3 Files stores the data you actively use from your S3 bucket onto the file system's high-performance storage. You pay for the amount of data residing on the file system's high-performance storage, measured in GB-month. This includes data that has been copied from your S3 bucket, data you have written through the file system, and the metadata for your files and directories.

If a file in your file system has not been read within a configurable window (1 to 365 days, default 30 days) and its changes have already been synchronized to your S3 bucket, S3 Files removes that file's data from the file system's high-performance storage automatically. This keeps your storage costs proportional to your active working data set rather than the total size of your S3 bucket. Your data remains safely stored in your S3 bucket. S3 Files only removes the copy from the file system's high-performance storage. The next time you read that file, S3 Files retrieves the latest version of the corresponding object from the S3 bucket and copies it back onto the file system's high-performance storage. For more information, see [Understanding how synchronization works](s3-files-synchronization.md).
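The eviction rule has two conditions: the file has not been read within the window, and its changes are already synchronized to S3. A toy sketch of that rule (the class and field names are ours, not part of any S3 Files API):

```python
from dataclasses import dataclass

@dataclass
class CachedFile:
    last_read_day: int   # day the file was last read
    synchronized: bool   # changes already exported to the S3 bucket

def evict_candidates(files: dict, today: int, window_days: int = 30):
    """Return names whose cached copy can be removed from the
    high-performance storage: idle past the window AND fully synced.
    30 days is the documented default window."""
    return sorted(
        name for name, f in files.items()
        if today - f.last_read_day > window_days and f.synchronized
    )

files = {
    "a.csv": CachedFile(last_read_day=0, synchronized=True),    # idle and synced
    "b.csv": CachedFile(last_read_day=0, synchronized=False),   # idle but unsynced
    "c.csv": CachedFile(last_read_day=40, synchronized=True),   # recently read
}
assert evict_candidates(files, today=45) == ["a.csv"]
```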

## How file system operations are metered
<a name="s3-files-metering-operations"></a>

S3 Files meters every file system operation as either a read or a write. Each operation has a minimum metered size.

**Data reads** such as reading a file's contents are metered at the size of the data read, with a minimum of 32 KiB per read operation. When a read is served directly from your S3 bucket as a performance optimization (see below), it is not metered as a data read; it is metered only as a 4 KiB metadata read.

**Data writes** such as writing or appending to a file are metered at the size of the data written, with a minimum of 32 KiB per write operation.

**Metadata operations** such as listing a directory, viewing file attributes, creating or deleting files and directories, renaming, and changing permissions are metered as a 4 KiB metadata read operation. The commit operation (triggered by `fsync` or by closing a file after writing) is the only metadata operation metered as a metadata write, at 4 KiB. A metadata read operation is charged the same as a data read operation, and a metadata write operation is charged the same as a data write operation.

All metered sizes are rounded up to the next 1 KiB boundary.
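
Putting the minimum size and rounding rules together, the metered size of a data read or write can be sketched in a few lines of shell. The `metered_kib` function is a hypothetical helper for illustration, not part of any AWS tooling:

```
# Metered size for a data read/write: round the transfer up to the
# next 1 KiB boundary, then apply the 32 KiB per-operation minimum.
metered_kib() {
  local bytes=$1
  local kib=$(( (bytes + 1023) / 1024 ))   # round up to 1 KiB
  (( kib < 32 )) && kib=32                 # 32 KiB minimum
  echo "$kib"
}

metered_kib 100       # tiny read   -> 32 (minimum applies)
metered_kib 40000     # ~39 KiB     -> 40 (rounded up)
metered_kib 1048576   # 1 MiB read  -> 1024
```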

## How reads are metered when served directly from Amazon S3
<a name="s3-files-metering-s3-reads"></a>

S3 Files streams file reads directly from your S3 bucket in two cases: when the file's data is not stored in the file system's high-performance storage, and for large reads >= 1 MiB, even when the data also resides on the file system's high-performance storage. The S3 bucket is optimized for high throughput while the file system's high-performance storage layer is optimized for low-latency access. S3 Files asynchronously imports data for small files (< 128 KiB by default) to the file system's high-performance storage for low latency access on subsequent reads.

In such cases, you pay for S3 GET requests instead of file system data reads. S3 Files meters only a 4 KiB metadata read operation for such reads.

## How synchronization is metered
<a name="s3-files-metering-sync"></a>

S3 Files keeps your file system and the linked S3 bucket synchronized automatically. These synchronization operations are metered as file system operations, in addition to the standard S3 request charges that S3 Files incurs on your behalf.

**Importing data onto the file system:** When S3 Files copies data from your S3 bucket onto the file system's high performance storage, the operation is metered as a file system write. This includes the data that is copied when you first access a directory, when you read a file whose data is not on the file system's high performance storage, and when S3 Files reflects changes made directly to your S3 bucket. The metered size is the amount of data written to the file system's high performance storage.

**Exporting changes to your S3 bucket:** When S3 Files copies your file system changes back to your S3 bucket, the operation is metered as a file system read. Only the data that is read from the file system counts toward this charge. If the file that you changed contains data that was never copied onto the file system's high performance storage, that part of the data is read from your S3 bucket at S3 GET request pricing and does not incur a file system read charge. For example, if you append data to a file, S3 Files uses multipart uploads to avoid importing the entire object into the file system's high performance storage before appending data to it. This optimizes your file system storage cost.

**Rename and move operations:** S3 has no native concept of directories. What appears as a directory in your file system is a common prefix shared by the keys of the objects within the S3 bucket. Additionally, S3 objects are immutable and do not support atomic renames. As a result, when you rename or move a file, S3 Files must write the data to a new object with the updated key (metered as an S3 PUT request) and delete the original. The synchronization of a rename is also metered as a file system read for any data read from the file system. If the file's data was never copied onto the file system's high performance storage, only a 4 KiB metadata read operation is metered. When you rename or move a directory, S3 Files repeats this process (and this metering) for every object that shares that prefix. For more information, see [Understanding the impact of rename and move operations](s3-files-synchronization.md#s3-files-sync-rename-move).

**Data expiration:** When S3 Files removes unused data from the file system, no file system operation charges apply.
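
The request arithmetic for directory renames can be sketched as follows. The `rename_requests` function is a hypothetical helper that counts only S3 PUT and DELETE requests, ignoring metadata operations and multipart uploads:

```
# Renaming a directory copies and deletes every object under the prefix:
# roughly one S3 PUT plus one S3 DELETE per object.
rename_requests() {
  local objects=$1
  echo $(( objects * 2 ))
}

rename_requests 10000000   # a 10-million-file directory -> 20000000 requests
```

This is why the best practices below recommend structuring your data so that directories you expect to rename contain fewer files.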

## Metering examples
<a name="s3-files-metering-examples"></a>

**Listing a large directory for the first time**

When you first list a directory, S3 Files imports metadata for all files in that directory. Each file's metadata import is metered as a 4 KiB write. Depending on your import configuration, S3 Files may also prefetch and copy data for small files in that directory onto the file system's high performance storage to optimize performance. Each file's data import is metered as a write at the file's size (32 KiB minimum). You can control which files have their data imported by configuring your import rules. For more information, see [Customizing synchronization for S3 Files](s3-files-synchronization-customizing.md).

**Reading a small file that is not in the file system's high performance storage**

S3 Files streams the data directly from the S3 bucket to the client and asynchronously imports it into the file system's high performance storage so that future reads are faster. Because the read is served directly from S3, you pay for the S3 GET request and S3 Files meters only a 4 KiB metadata read. The asynchronous import of data into the file system's high performance storage is metered as a write at the size of the data transferred. The same process applies when you read a file whose data has expired from the file system; the expiration itself does not incur any additional file system operation charges.

**Writing a file**

Your write is metered as a file system write at the size of the data written (32 KiB minimum). When you modify a file, S3 Files waits up to 60 seconds after your last write, aggregating any successive changes to the file in that window, before copying the file to your S3 bucket. Rapid successive writes to the same file are therefore captured in a single S3 PUT request rather than generating a new object version for every individual change, reducing your S3 request and storage costs. The synchronization is metered as a file system read for data read from the file system's high performance storage, plus a standard S3 PUT request.

# S3 Files best practices
<a name="s3-files-best-practices"></a>

This page describes the recommended best practices for working with S3 file systems.

## Performance and cost optimization
<a name="s3-files-best-practices-performance"></a>
+ **Parallelize your workloads** – S3 Files is designed to support highly parallel workloads. Distributing reads across multiple files and multiple compute instances helps maximize aggregate throughput. You can also create multiple file systems scoped to different prefixes within the same bucket (instead of one file system over the entire bucket) to scale horizontally and improve aggregate throughput.
+ **Scope your file system to the smallest prefix your workload needs to minimize impact of renames** – S3 has no native concept of directories. When you rename or move a directory, S3 Files must write the data to a new object with the updated key and delete the original for every file in that directory. Renaming directories with tens of millions of files can significantly increase S3 request costs and synchronization time. Scope your file system to your active dataset, or structure your data so that directories you expect to rename contain fewer files. For more information, see [Understanding the impact of rename and move operations](s3-files-synchronization.md#s3-files-sync-rename-move).
+ **Use large I/O sizes** – S3 Files meters each read and write operation at a minimum of 32 KiB. Using larger I/O sizes (1 MiB or more) amortizes per-operation overhead and is more cost effective than many small reads or writes. When using the mount helper, the default NFS read and write buffer sizes are set to 1 MiB for optimal performance.
+ **Tune your `sizeLessThan` value in import configuration to match your file sizes** – By default, S3 Files caches data for files smaller than 128 KiB when you first access a directory. Files larger than this threshold are read directly from S3. If your workload performs small, latency-sensitive reads on larger files, increase the `sizeLessThan` threshold to match the file sizes you need on the file system's high performance storage for low-latency access. For more information, see [Customizing synchronization for S3 Files](s3-files-synchronization-customizing.md).
+ **Set expiration windows to match your workload lifecycle** – Data that has not been read within the expiration window is automatically removed from the file system. For short-lived workloads such as batch jobs or training runs, use a shorter expiration (1–7 days) to minimize storage costs. For workloads that revisit the same data over weeks, use a longer expiration (30–90 days) to continue benefiting from the low latency. For more information, see [Customizing synchronization for S3 Files](s3-files-synchronization-customizing.md).
+ **Use prefix-scoped rules for mixed workloads** – If your bucket contains both frequently accessed and infrequently accessed data, create separate import rules for each prefix. This lets you import data aggressively for hot prefixes while keeping cold prefixes metadata-only. For more information, see [Customizing synchronization for S3 Files](s3-files-synchronization-customizing.md).
+ **Create a mount target in every Availability Zone** – We recommend creating one mount target in each Availability Zone you operate in so that you can reduce cross-AZ data transfer costs and improve performance. This ensures that your compute resources always have a local network path to the file system, improving both availability and latency. When you create a file system using the AWS Management Console, S3 Files automatically creates one mount target in every Availability Zone in your selected VPC.
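
As a concrete illustration of the buffer-size recommendation above, the mount helper command below makes the 1 MiB read and write buffers explicit. This is a sketch: the file system ID and mount point are placeholders, the option names follow standard NFS mount options, and 1 MiB is already the default when you use the mount helper.

```
# Illustrative mount with explicit 1 MiB NFS buffers;
# fs-12345678 and /mnt/s3files are placeholders.
sudo mount -t s3files -o rsize=1048576,wsize=1048576 \
  fs-12345678:/ /mnt/s3files
```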

## Synchronization
<a name="s3-files-best-practices-sync"></a>
+ **Understand the S3 Files consistency model** – When a file in the file system is modified at the same time as its corresponding object in the S3 bucket, S3 Files treats the S3 bucket as the source of truth and moves the file to the lost and found directory. To avoid conflicts, designate one path (file system or S3) as the primary writer.
+ **Monitor synchronization health** – Use CloudWatch metrics to track the status of synchronization between your file system and S3 bucket. A growing `PendingExports` indicates that your workload is generating changes faster than the synchronization rate, which means synchronization will take longer to complete. A non-zero `ExportFailures` CloudWatch metric indicates files that could not be exported and require action. For more information, see [Troubleshooting S3 Files](s3-files-troubleshooting.md).

## Access control
<a name="s3-files-best-practices-access"></a>
+ **Follow the principle of least privilege** – Grant only the minimum permissions required for each IAM role and file system policy. For example, if a compute resource only needs to read data from the file system, attach the `AmazonS3FilesClientReadOnlyAccess` managed policy instead of `AmazonS3FilesClientFullAccess`. Additionally, consider creating your file system scoped to a specific prefix rather than the entire bucket, so that clients can only access data within that prefix.
+ **Do not modify the S3 Files IAM role** – Do not modify or delete the IAM role that S3 Files assumes to synchronize with your S3 bucket. Changing or removing this role can break synchronization between your file system and S3 bucket.
+ **Do not modify the S3 Files EventBridge rule** – S3 Files creates an EventBridge rule (prefixed with DO-NOT-DELETE-S3-Files) to detect changes in your S3 bucket. Do not disable, modify, or delete this rule. Removing it prevents S3 Files from detecting new or changed objects in your bucket, causing your file system to become stale.
+ **Consider restricting access to logs written by `efs-utils`** – `efs-utils` writes S3 object key names directly to its logs, which it stores in the directory `/var/log/amazon/efs`. If your S3 key names contain sensitive information, restrict access to this directory via POSIX permissions. For example, run `sudo chmod 700 /var/log/amazon/efs`.

## Monitoring
<a name="s3-files-best-practices-monitoring"></a>
+ **Set alarms on synchronization failures** – Create CloudWatch alarms on `ImportFailures` and `ExportFailures` to be notified when files fail to synchronize. Failed exports may indicate permission issues, encryption key problems, or path length limits. For more information, see [Troubleshooting S3 Files](s3-files-troubleshooting.md).
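
A sketch of such an alarm using the AWS CLI follows. The metric namespace (`AWS/S3Files`), the `FileSystemId` dimension name, the file system ID, and the SNS topic ARN are all assumptions for illustration; verify the actual names in your CloudWatch console before using this.

```
# Hypothetical alarm on ExportFailures; namespace, dimension, and SNS
# topic are placeholders -- verify them against your account.
aws cloudwatch put-metric-alarm \
  --alarm-name s3files-export-failures \
  --namespace AWS/S3Files \
  --metric-name ExportFailures \
  --dimensions Name=FileSystemId,Value=fs-12345678 \
  --statistic Sum --period 300 --evaluation-periods 1 \
  --threshold 0 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:my-alerts-topic
```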

# Unsupported features, limits, and quotas
<a name="s3-files-quotas"></a>

This page describes the limitations and quotas when using S3 Files.

## Unsupported S3 features
<a name="s3-files-unsupported-s3-features"></a>
+ **Archival storage classes** – Objects in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes cannot be accessed through the file system. Similarly, objects in the S3 Intelligent-Tiering Archive Access or Deep Archive Access tiers cannot be accessed through the file system. You must first restore these objects using the S3 API.
+ **S3 Access Control Lists (ACLs)** – S3 ACLs are not preserved after changes are made through the file system. We recommend that you keep ACLs disabled and use policies to control access to all objects in your bucket, regardless of who uploaded the objects to your bucket. For more information, see [Managing access with ACLs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/acls.html).
+ **Custom S3 object metadata** – Custom user-defined metadata on S3 objects is not preserved after changes are made through the file system.

## File system limitations
<a name="s3-files-filesystem-limitations"></a>
+ **Hard links** – S3 Files does not support hard links. Each file in the file system corresponds to a single S3 object key.
+ **Path component size** – S3 Files does not support objects where any single directory or file name in the path exceeds 255 bytes. A path component is a directory or file name derived from an S3 object key. For example, the key `dir1/dir2/file.txt` has three path components: `dir1`, `dir2`, and `file.txt`.
+ **S3 key size limit** – Files or directories whose full path name exceeds the 1,024-byte S3 object key size limit cannot be exported to S3.
+ **Symlink targets** – If an S3 object is a symlink, its target must be a valid path that is not empty and does not exceed 4,080 bytes.
+ **POSIX permissions metadata size** – Files or directories where the POSIX permission metadata exceeds 2 KB cannot be exported to S3.

## Quotas
<a name="s3-files-quotas-resource"></a>

The following quotas can be increased by contacting AWS Support.


| Resource | Default quota | 
| --- | --- | 
| File systems per AWS account | 1,000 | 
| Access points per file system | 10,000 | 

To request an increase, open the [AWS Support Center](https://console.aws.amazon.com/support/home), choose **Create Case**, then choose **Service Limit Increase**.

The following quotas cannot be changed.


| Resource | Quota | 
| --- | --- | 
| Connections per file system | 25,000 | 
| Mount targets per file system per Availability Zone | 1 | 
| Security groups per mount target | 5 | 
| Tags per file system | 50 | 
| VPCs per file system | 1 | 

There are also quotas specific to individual file systems.


| Resource | Quota | 
| --- | --- | 
| Maximum file size | 52,673,613,135,872 bytes (47.9 TiB) | 
| Maximum directory depth | 1,000 levels | 
| Maximum file name length | 255 bytes | 
| Maximum symlink target length | 4,080 bytes | 
| Maximum S3 object key length | 1,024 bytes | 
| Maximum open files per client | 32,768 | 
| Maximum active user accounts per client | 128 | 
| Maximum locks per file | 512 across all connected instances | 
| Maximum locks per mount | 8,192 across up to 256 file-process pairs | 
| File system policy size limit | 20,000 characters | 

## Unsupported NFS features
<a name="s3-files-unsupported-nfs"></a>

S3 Files supports NFSv4.1 and NFSv4.2, with the following exceptions:
+ pNFS
+ Client delegation or callbacks
+ Mandatory locking (all locks are advisory)
+ Deny share
+ Access control lists (ACLs)
+ Kerberos-based security
+ NFSv4.1 data retention
+ SetUID on directories
+ Block devices, character devices, attribute directories, and named attributes
+ The `nconnect` mount option

# Troubleshooting S3 Files
<a name="s3-files-troubleshooting"></a>

This page helps you diagnose and resolve common issues with S3 Files.
+ [Mount command fails](#s3-files-troubleshooting-mount-fails)
+ [Permission denied on file operations](#s3-files-troubleshooting-permission-denied)
+ [Intelligent read routing is not working](#s3-files-troubleshooting-read-routing)
+ [File system consistently returns NFS server error](#s3-files-troubleshooting-encrypted-fs)
+ [Missing object in S3 bucket after file system write](#s3-files-troubleshooting-missing-object)
+ [Files appearing in the lost and found directory](#s3-files-troubleshooting-lost-found)
+ [Synchronization falling behind](#s3-files-troubleshooting-sync-behind)
+ [Enabling client debug logs](#s3-files-troubleshooting-debug-logs)

## Mount command fails
<a name="s3-files-troubleshooting-mount-fails"></a>

The `mount -t s3files` command fails with an error.

**Common causes and actions:**
+ **"mount.s3files: command not found"** – The S3 Files client (amazon-efs-utils) is not installed or is below version 3.0.0. Install or upgrade the client. For more information, see [Prerequisites for S3 Files](s3-files-prereq-policies.md).
+ **"Failed to resolve file system DNS name"** – There is no mount target in the Availability Zone where your EC2 instance is running. Create a mount target in that Availability Zone, or launch your instance in an Availability Zone that has a mount target. For more information, see [Creating mount targets](s3-files-mount-targets-creating.md).
+ **Connection timed out** – The security group configuration is not allowing NFS traffic. Verify that the mount target's security group allows inbound TCP on port 2049 from your instance's security group, and that your instance's security group allows outbound TCP on port 2049 to the mount target's security group. For more information, see [Prerequisites for S3 Files](s3-files-prereq-policies.md).
+ **"Access denied" during mount** – The IAM role attached to your compute resource does not have the required S3 Files permissions. Verify that the role has the `AmazonS3FilesClientFullAccess` or `AmazonS3FilesClientReadOnlyAccess` managed policy attached, or at minimum the `s3files:ClientMount` permission. For more information, see [Prerequisites for S3 Files](s3-files-prereq-policies.md).
+ **botocore not installed** – The mount helper requires botocore to interact with AWS services. Install botocore following the instructions in the amazon-efs-utils README on GitHub.

## Permission denied on file operations
<a name="s3-files-troubleshooting-permission-denied"></a>

You can mount the file system but receive "Permission denied" or "Operation not permitted" errors when reading, writing, or accessing files.

**Common causes and actions:**
+ **Missing write permission** – If you can read but not write, verify that the IAM role attached to your compute resource includes the `s3files:ClientWrite` permission, or attach the `AmazonS3FilesClientReadWriteAccess` or `AmazonS3FilesClientFullAccess` managed policy. For more information, see [AWS managed policies for Amazon S3 Files](s3-files-security-iam-awsmanpol.md).
+ **Missing root access** – If you receive permission errors when accessing files owned by root (UID 0), your IAM role may not have the `s3files:ClientRootAccess` permission. Without this permission, all operations are performed as the NFS anonymous user (typically nfsnobody), which may not have access to the files. Attach the `AmazonS3FilesClientFullAccess` managed policy or add `s3files:ClientRootAccess` to your policy.
+ **File system policy denying access** – If you have attached a file system policy, verify that it does not deny the actions your clients need. An "allow" in either the identity-based policy or the file system policy is sufficient for access. For more information, see [How S3 Files works with IAM](s3-files-security-iam.md).
+ **POSIX permission mismatch** – S3 Files enforces standard POSIX permissions (owner, group, others) on files and directories. If your application runs as a user that does not match the file's owner or group, access may be denied even if IAM permissions are correct. Use an access point to enforce a specific UID/GID for all requests. For more information, see [Creating access points for an S3 file system](s3-files-access-points-creating.md).

## Intelligent read routing is not working
<a name="s3-files-troubleshooting-read-routing"></a>

S3 Files performs intelligent read routing: it automatically routes read requests to the storage layer best suited to them, while maintaining full file system semantics, including consistency, locking, and POSIX permissions. Small, random reads of actively used files are served from the high-performance storage for low latency. Large sequential reads and reads of data not on the file system are served directly from your S3 bucket for high throughput, with no file system data charge.

Intelligent read routing may not be working if one of the client connectivity metrics (`NFSConnectionAccessible`, `S3BucketAccessible`, and `S3BucketReachable`) shows 0, or if you are not seeing the expected read throughput.

**Common causes and actions:**
+ **Missing S3 inline policy on compute role** – The IAM role attached to your compute resource must include an inline policy granting `s3:GetObject` and `s3:GetObjectVersion` on the linked S3 bucket. Without this policy, the mount helper cannot read directly from S3 and all reads go through the file system. For more information, see [Prerequisites for S3 Files](s3-files-prereq-policies.md).
+ **S3 bucket not reachable** – Check the `S3BucketReachable` metric. If it shows 0, verify that your compute resource has network access to S3 (for example, through a VPC endpoint or NAT gateway).
+ **File has been modified** – Reads are only served directly from S3 when the file has not been modified through the file system. If you have written to the file and the changes have not yet been synchronized to S3, reads go through the file system until synchronization completes.

## File system consistently returns NFS server error
<a name="s3-files-troubleshooting-encrypted-fs"></a>

An encrypted file system consistently returns NFS server errors. These errors can occur when S3 Files cannot retrieve your KMS key from AWS KMS for one of the following reasons:
+ The key was disabled.
+ The key was deleted.
+ Permission for S3 Files to use the key was revoked.
+ AWS KMS is temporarily unavailable.

**Action to take**

First, confirm that the AWS KMS key is enabled. You can view your keys in the AWS KMS console. For more information, see [Viewing Keys](https://docs.aws.amazon.com/kms/latest/developerguide/viewing-keys.html) in the *AWS Key Management Service Developer Guide*.

If the key is not enabled, enable it. For more information, see [Enabling and Disabling Keys](https://docs.aws.amazon.com/kms/latest/developerguide/enabling-keys.html) in the *AWS Key Management Service Developer Guide*.

If the key is pending deletion, cancel the deletion and re-enable the key. For more information, see [Scheduling and Canceling Key Deletion](https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys.html) in the *AWS Key Management Service Developer Guide*.

If the key is enabled and you are still experiencing issues, contact AWS Support.

## Missing object in S3 bucket after file system write
<a name="s3-files-troubleshooting-missing-object"></a>

You wrote a file through the file system and expected it to appear as an object in your S3 bucket, but the object is not there. S3 Files batches changes for approximately 60 seconds before copying them to S3. If the object still does not appear after that, the export may have failed. In that case, you see the `ExportFailures` CloudWatch metric increase.

**Action to take**

Check the file's export status using extended attributes:

```
getfattr -n "user.s3files.status;$(date -u +%s)" missing-file.txt --only-values
```

The timestamp in the attribute name ensures you get the latest status. Example output:

```
S3Key: s3://bucket/prefix/missing-file.txt
ExportError: PathTooLong
```

`ExportError` is not displayed if there is no export failure. `S3Key` is empty if an S3 object was never linked to the file.

The following table lists all possible `ExportError` values:


| Error | Cause | 
| --- | --- | 
| S3AccessDenied | The IAM role that S3 Files assumes does not have sufficient permissions to write to the S3 bucket. For more information, see [Prerequisites for S3 Files](s3-files-prereq-policies.md). | 
| S3BucketNotFound | The source S3 bucket no longer exists or has been renamed. Verify it exists in the expected AWS Region and account. | 
| InternalError | There was an internal system error. | 
| S3UserMetadataTooLarge | S3 user metadata size limit exceeded. See [Unsupported features, limits, and quotas](s3-files-quotas.md) for information on these limits. | 
| FileSizeExceedsS3Limit | File size exceeds S3 object size limit. See [Unsupported features, limits, and quotas](s3-files-quotas.md) for information on these limits. | 
| EncryptionKeyInaccessible | The encryption key used by the S3 bucket is inaccessible to S3 Files. Grant S3 Files access to your encryption key. For more information, see [Encryption](s3-files-encryption.md). | 
| RoleAssumptionFailed | Could not assume the role. Check your trust policies. For more information, see [Prerequisites for S3 Files](s3-files-prereq-policies.md). | 
| KeyTooLongToBreakCycle | S3 Files could not resolve a circular dependency (for example, due to renaming two files to each other's names) because the file path exceeds the S3 key length limit. Shorten the directory path to resolve this error. | 
| PathTooLong | Your file path exceeds the S3 key length limit. See [Unsupported features, limits, and quotas](s3-files-quotas.md) for information on these limits. | 
| DependencyExportFailed | A parent or a dependency has a non-retryable export failure. Check the status of the parent or any dependencies using `getfattr`. | 
| S3ObjectArchived | S3 object is archived (S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive) and cannot be read. Restore the object first using the S3 APIs. | 

S3 Files automatically retries failed exports. `ExportError` is shown only for non-retryable errors.

## Files appearing in the lost and found directory
<a name="s3-files-troubleshooting-lost-found"></a>

Files have appeared in the `.s3files-lost+found-file-system-id` directory in your file system's root directory, and you see the `LostAndFoundFiles` CloudWatch metric increase. This occurs when there is a synchronization conflict: the same file is modified through the file system while the corresponding S3 object changes before S3 Files synchronizes the file system changes back to S3. S3 Files treats the S3 bucket as the source of truth, moves the conflicting file to the lost and found directory, and imports the latest version from the S3 bucket into the file system.

**Identifying files in the lost and found directory**

When S3 Files moves a file to the lost and found directory, it prepends the file name with a hexadecimal identifier to distinguish multiple versions of the same file that may be moved over time. File names longer than 100 characters are truncated to make room for this identifier. The file's original directory path is not preserved in the lost and found directory.

**Action to take**

Get the file's original path and the corresponding S3 object key:

```
getfattr -n "user.s3files.status;$(date -u +%s)" .s3files-lost+found-fs-12345678/abcdef1234_report.csv --only-values
```

Example output:

```
S3Key: s3://bucket/prefix/report.csv
FilePath: /data/report.csv
```


| Field | Description | 
| --- | --- | 
| S3Key | Full S3 path of the object that caused the conflict, or empty if the object was deleted in the S3 bucket. | 
| FilePath | Relative path of the file before the conflict. | 

You can then either keep the latest version from your S3 bucket and delete the file from the lost and found directory, or copy the file from the lost and found directory back to its original path to overwrite the S3 version.
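
For example, to keep the file system version, you can copy it back into place (a sketch with placeholder paths; substitute your mount point and the prefixed file name reported by `getfattr`):

```
# Copy the conflicting version back to its original path (it is exported
# to S3 on the next synchronization), then remove it from the lost and
# found directory. All paths here are placeholders.
cp /mnt/s3files/.s3files-lost+found-fs-12345678/abcdef1234_report.csv \
   /mnt/s3files/data/report.csv
rm /mnt/s3files/.s3files-lost+found-fs-12345678/abcdef1234_report.csv
```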

**Note**  
Files in the lost and found directory remain there indefinitely and count toward your file system storage costs. Delete files from the lost and found directory when they are no longer needed.

## Synchronization falling behind
<a name="s3-files-troubleshooting-sync-behind"></a>

The `PendingExports` CloudWatch metric is growing, indicating that your workload is generating changes faster than S3 Files can synchronize them to S3.

S3 Files exports up to 800 files per second per file system. Consider reducing the rate of file modifications or distributing work across multiple file systems. Monitor the `PendingExports` metric over time. If it stabilizes or decreases, S3 Files is catching up. If it continues to grow, contact AWS Support.
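
As a back-of-envelope check, you can estimate how long the backlog will take to drain. This sketch assumes the documented 800 files/second export rate and a steady rate of new changes; `catchup_seconds` is a hypothetical helper, not AWS tooling:

```
# Estimated seconds to drain the PendingExports backlog, or "never"
# if changes are generated at or above the export rate.
catchup_seconds() {
  local backlog=$1 gen_rate=$2 export_rate=800
  if (( gen_rate >= export_rate )); then
    echo "never"
    return
  fi
  echo $(( backlog / (export_rate - gen_rate) ))
}

catchup_seconds 1200000 500   # 1.2 M pending, 500 files/s of new changes -> 4000
```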

## Enabling client debug logs
<a name="s3-files-troubleshooting-debug-logs"></a>

If you are troubleshooting mount, connectivity, or read bypass issues, you can enable debug-level logging on the S3 Files client to capture more detail.

**Mount helper and watchdog logs**

Edit `/etc/amazon/efs/s3files-utils.conf` and change the logging level from INFO to DEBUG:

```
[DEFAULT]
logging_level = DEBUG
```

Unmount and remount the file system for the change to take effect:

```
sudo umount /mnt/s3files
sudo mount -t s3files file-system-id:/ /mnt/s3files
```

Logs are written to `/var/log/amazon/efs/`. The mount helper log is `mount.log`.

**Proxy (efs-proxy) logs**

The proxy handles NFS traffic and S3 read bypass. To enable debug logging for the proxy, edit `/etc/amazon/efs/s3files-utils.conf`:

```
[proxy]
proxy_logging_level = DEBUG
```

Unmount and remount for the change to take effect. Proxy logs are written to `/var/log/amazon/efs/`.

**TLS tunnel (stunnel) logs**

TLS tunnel logs are disabled by default. To enable them, edit `/etc/amazon/efs/s3files-utils.conf` and set the following:

```
[mount]
stunnel_debug_enabled = true
```

To save all stunnel logs for a file system to a single file, also uncomment the `stunnel_logs_file` line:

```
stunnel_logs_file = /var/log/amazon/efs/{fs_id}.stunnel.log
```

**Log size limits**

Log files are rotated automatically. You can configure the maximum size and number of rotated files in `s3files-utils.conf`:

```
[DEFAULT]
logging_max_bytes = 1048576
logging_file_count = 10
```

The default is 1 MB per log file with 10 rotated files, for a maximum of 10 MB per log type.

**Sharing logs with AWS Support**

When contacting AWS Support, collect the client logs and configuration into a single archive:

```
sudo tar -czf /tmp/s3files-support-logs.tar.gz \
  /var/log/amazon/efs/ \
  /etc/amazon/efs/s3files-utils.conf
```

Include `/tmp/s3files-support-logs.tar.gz` with your support case.