


# Use application data storage for your cluster
<a name="storage"></a>

You can use a range of AWS storage services with Amazon EKS to meet the storage needs of your applications. Through AWS-supported Container Storage Interface (CSI) drivers, you can use Amazon EBS, Amazon S3, Amazon S3 Files, Amazon EFS, Amazon FSx, and Amazon File Cache for applications running on Amazon EKS. To manage backups of your Amazon EKS cluster, see [AWS Backup support for Amazon EKS](https://docs.aws.amazon.com/aws-backup/latest/devguide/working-with-supported-services.html#working-with-eks).

This chapter covers storage options for Amazon EKS clusters.

**Topics**
+ [Use Kubernetes volume storage with Amazon EBS](ebs-csi.md)
+ [Use elastic file system storage with Amazon EFS](efs-csi.md)
+ [Use Amazon S3 file system storage with the Amazon EFS CSI driver](s3files-csi.md)
+ [Use high-performance app storage with Amazon FSx for Lustre](fsx-csi.md)
+ [Use high-performance app storage with FSx for NetApp ONTAP](fsx-ontap.md)
+ [Use data storage with Amazon FSx for OpenZFS](fsx-openzfs-csi.md)
+ [Minimize latency with Amazon File Cache](file-cache-csi.md)
+ [Access Amazon S3 objects with Mountpoint for Amazon S3 CSI driver](s3-csi.md)
+ [Enable snapshot functionality for CSI volumes](csi-snapshot-controller.md)

# Use Kubernetes volume storage with Amazon EBS
<a name="ebs-csi"></a>

**Note**  
 **New:** Amazon EKS Auto Mode automates routine tasks for block storage. Learn how to [Deploy a sample stateful workload to EKS Auto Mode](sample-storage-workload.md).

The [Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/) manages the lifecycle of Amazon EBS volumes as storage for the Kubernetes Volumes that you create. The Amazon EBS CSI driver makes Amazon EBS volumes for these types of Kubernetes volumes: generic [ephemeral volumes](https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/) and [persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).
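As a sketch of the ephemeral case, the following hypothetical Pod spec requests a generic ephemeral volume that the driver provisions with the Pod and deletes when the Pod terminates. The storage class name `ebs-sc` and all object names are illustrative assumptions, not defaults.

```
# Illustrative only: assumes the Amazon EBS CSI driver is installed and a
# StorageClass named "ebs-sc" (hypothetical) uses the ebs.csi.aws.com provisioner.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-scratch-volume
spec:
  containers:
    - name: app
      image: public.ecr.aws/amazonlinux/amazonlinux:2023
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /scratch
          name: scratch-volume
  volumes:
    - name: scratch-volume
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: ebs-sc
            resources:
              requests:
                storage: 4Gi
```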

## Considerations
<a name="ebs-csi-considerations"></a>
+ You do not need to install the Amazon EBS CSI controller on EKS Auto Mode clusters.
+ You can’t mount Amazon EBS volumes to Fargate Pods.
+ You can run the Amazon EBS CSI controller on Fargate nodes, but the Amazon EBS CSI node `DaemonSet` can only run on Amazon EC2 instances.
+ Amazon EBS volumes and the Amazon EBS CSI driver are not compatible with Amazon EKS Hybrid Nodes.
+ Support will be provided for the latest add-on version and one prior version. Fixes for bugs or vulnerabilities found in the latest version will be backported to the previous release as a new minor version.
+ EKS Auto Mode requires storage classes to use `ebs.csi.eks.amazonaws.com` as the provisioner. The standard Amazon EBS CSI driver (`ebs.csi.aws.com`) manages its own volumes separately. To use existing volumes with EKS Auto Mode, migrate them using volume snapshots to a storage class that uses the Auto Mode provisioner.
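As a minimal sketch, a storage class targeting the EKS Auto Mode provisioner might look like the following. The class name and parameters are illustrative assumptions, not defaults.

```
# Sketch for EKS Auto Mode clusters only; "auto-ebs-sc" is a hypothetical name.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: auto-ebs-sc
provisioner: ebs.csi.eks.amazonaws.com   # EKS Auto Mode block storage provisioner
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  encrypted: "true"
```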

**Important**  
To use the snapshot functionality of the Amazon EBS CSI driver, you must first install the CSI snapshot controller. For more information, see [Enable snapshot functionality for CSI volumes](csi-snapshot-controller.md).
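After the snapshot controller is installed, snapshots are requested through a `VolumeSnapshotClass`. A minimal sketch for the EBS driver might look like the following; the class name is an illustrative assumption.

```
# Sketch: requires the CSI snapshot CRDs and controller to be installed first.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: ebs-snapshot-class   # hypothetical name
driver: ebs.csi.aws.com
deletionPolicy: Delete
```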

## Prerequisites
<a name="ebs-csi-prereqs"></a>
+ An existing cluster. To see the required platform version, run the following command.

  ```
  aws eks describe-addon-versions --addon-name aws-ebs-csi-driver
  ```
+ The Amazon EBS CSI driver needs AWS Identity and Access Management (IAM) permissions.
  +  AWS suggests using EKS Pod Identities. For more information, see [Overview of setting up EKS Pod Identities](pod-identities.md#pod-id-setup-overview).
  + For information about IAM Roles for Service Accounts, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).

## Step 1: Create an IAM role
<a name="csi-iam-role"></a>

The Amazon EBS CSI plugin requires IAM permissions to make calls to AWS APIs on your behalf. If you don’t complete these steps, attempting to install the add-on and running `kubectl describe pvc` shows a `failed to provision volume with StorageClass` message along with a `could not create volume in EC2: UnauthorizedOperation` error. For more information, see [Set up driver permissions](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/install.md#set-up-driver-permissions) on GitHub.

**Note**  
Pods will have access to the permissions that are assigned to the IAM role unless you block access to IMDS. For more information, see [Secure Amazon EKS clusters with best practices](security-best-practices.md).

The following procedure shows you how to create an IAM role and attach the AWS managed policy to it. To implement this procedure, you can use one of these tools:
+  [`eksctl`](#eksctl_store_app_data) 
+  [AWS Management Console](#console_store_app_data) 
+  [AWS CLI](#awscli_store_app_data) 

**Note**  
You can create a customer managed policy with further scoped-down permissions. Review [AmazonEBSCSIDriverPolicyV2](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEBSCSIDriverPolicyV2.html) and create a custom IAM policy with reduced permissions. If you are migrating from `AmazonEBSCSIDriverPolicy`, see [EBS CSI Driver policy migration](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues/2918).

**Note**  
The specific steps in this procedure are written for using the driver as an Amazon EKS add-on. Different steps are needed to use the driver as a self-managed add-on. For more information, see [Set up driver permissions](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/install.md#set-up-driver-permissions) on GitHub.

### `eksctl`
<a name="eksctl_store_app_data"></a>

1. Create an IAM role and attach a policy. AWS maintains an AWS managed policy or you can create your own custom policy. You can create an IAM role and attach the AWS managed policy with the following command. Replace *my-cluster* with the name of your cluster. The command deploys an AWS CloudFormation stack that creates an IAM role and attaches the IAM policy to it.

   ```
   eksctl create iamserviceaccount \
           --name ebs-csi-controller-sa \
           --namespace kube-system \
           --cluster my-cluster \
           --role-name AmazonEKS_EBS_CSI_DriverRole \
           --role-only \
           --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicyV2 \
           --approve
   ```

1. You can skip this step if you do not use a custom [KMS key](https://aws.amazon.com/kms/). If you use one for encryption on your Amazon EBS volumes, customize the IAM role as needed. For example, do the following:

   1. Copy and paste the following code into a new `kms-key-for-encryption-on-ebs.json` file. Replace *custom-key-arn* with the custom [KMS key ARN](https://docs.aws.amazon.com/service-authorization/latest/reference/list_awskeymanagementservice.html#awskeymanagementservice-key).

      ```
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "kms:CreateGrant",
              "kms:ListGrants",
              "kms:RevokeGrant"
            ],
            "Resource": ["custom-key-arn"],
            "Condition": {
              "Bool": {
                "kms:GrantIsForAWSResource": "true"
              }
            }
          },
          {
            "Effect": "Allow",
            "Action": [
              "kms:Encrypt",
              "kms:Decrypt",
              "kms:ReEncrypt*",
              "kms:GenerateDataKey*",
              "kms:DescribeKey"
            ],
            "Resource": ["custom-key-arn"]
          }
        ]
      }
      ```

   1. Create the policy. You can change `KMS_Key_For_Encryption_On_EBS_Policy` to a different name. However, if you do, make sure to change it in later steps, too.

      ```
      aws iam create-policy \
            --policy-name KMS_Key_For_Encryption_On_EBS_Policy \
            --policy-document file://kms-key-for-encryption-on-ebs.json
      ```

   1. Attach the IAM policy to the role with the following command. Replace *111122223333* with your account ID.

      ```
      aws iam attach-role-policy \
            --policy-arn arn:aws:iam::111122223333:policy/KMS_Key_For_Encryption_On_EBS_Policy \
            --role-name AmazonEKS_EBS_CSI_DriverRole
      ```

### AWS Management Console
<a name="console_store_app_data"></a>

1. Open the IAM console at https://console.aws.amazon.com/iam/.

1. In the left navigation pane, choose **Roles**.

1. On the **Roles** page, choose **Create role**.

1. On the **Select trusted entity** page, do the following:

   1. In the **Trusted entity type** section, choose **Web identity**.

   1. For **Identity provider**, choose the **OpenID Connect provider URL** for your cluster (as shown under **Overview** in Amazon EKS).

   1. For **Audience**, choose `sts.amazonaws.com`.

   1. Choose **Next**.

1. On the **Add permissions** page, do the following:

   1. In the **Filter policies** box, enter `AmazonEBSCSIDriverPolicyV2`.

   1. Select the check box to the left of the `AmazonEBSCSIDriverPolicyV2` returned in the search.

   1. Choose **Next**.

1. On the **Name, review, and create** page, do the following:

   1. For **Role name**, enter a unique name for your role, such as `AmazonEKS_EBS_CSI_DriverRole`.

   1. Under **Add tags (Optional)**, add metadata to the role by attaching tags as key-value pairs. For more information about using tags in IAM, see [Tagging IAM resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) in the *IAM User Guide*.

   1. Choose **Create role**.

1. After the role is created, choose the role in the console to open it for editing.

1. Choose the **Trust relationships** tab, and then choose **Edit trust policy**.

1. Find the line that looks similar to the following line:

   ```
   "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
   ```

   Add a comma to the end of the previous line, and then add the following line after the previous line. Replace *region-code* with the AWS Region that your cluster is in. Replace *EXAMPLED539D4633E53DE1B71EXAMPLE* with your cluster’s OIDC provider ID.

   ```
   "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa"
   ```

1. Choose **Update policy** to finish.

1. If you use a custom [KMS key](https://aws.amazon.com/kms/) for encryption on your Amazon EBS volumes, customize the IAM role as needed. For example, do the following:

   1. In the left navigation pane, choose **Policies**.

   1. On the **Policies** page, choose **Create Policy**.

   1. On the **Create policy** page, choose the **JSON** tab.

   1. Copy and paste the following code into the editor, replacing *custom-key-arn* with the custom [KMS key ARN](https://docs.aws.amazon.com/service-authorization/latest/reference/list_awskeymanagementservice.html#awskeymanagementservice-key).

      ```
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "kms:CreateGrant",
              "kms:ListGrants",
              "kms:RevokeGrant"
            ],
            "Resource": ["custom-key-arn"],
            "Condition": {
              "Bool": {
                "kms:GrantIsForAWSResource": "true"
              }
            }
          },
          {
            "Effect": "Allow",
            "Action": [
              "kms:Encrypt",
              "kms:Decrypt",
              "kms:ReEncrypt*",
              "kms:GenerateDataKey*",
              "kms:DescribeKey"
            ],
            "Resource": ["custom-key-arn"]
          }
        ]
      }
      ```

   1. Choose **Next: Tags**.

   1. On the **Add tags (Optional)** page, choose **Next: Review**.

   1. For **Name**, enter a unique name for your policy (for example, `KMS_Key_For_Encryption_On_EBS_Policy`).

   1. Choose **Create policy**.

   1. In the left navigation pane, choose **Roles**.

   1. Choose the **AmazonEKS_EBS_CSI_DriverRole** role in the console to open it for editing.

   1. From the **Add permissions** dropdown list, choose **Attach policies**.

   1. In the **Filter policies** box, enter `KMS_Key_For_Encryption_On_EBS_Policy`.

   1. Select the check box to the left of the `KMS_Key_For_Encryption_On_EBS_Policy` policy that was returned in the search.

   1. Choose **Attach policies**.

### AWS CLI
<a name="awscli_store_app_data"></a>

1. View your cluster’s OIDC provider URL. Replace *my-cluster* with your cluster name. If the output from the command is `None`, review the **Prerequisites**.

   ```
   aws eks describe-cluster --name my-cluster --query "cluster.identity.oidc.issuer" --output text
   ```

   An example output is as follows.

   ```
   https://oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE
   ```

1. Create the IAM role, granting the `AssumeRoleWithWebIdentity` action.

   1. Copy the following contents to a file that’s named `aws-ebs-csi-driver-trust-policy.json`. Replace *111122223333* with your account ID. Replace *EXAMPLED539D4633E53DE1B71EXAMPLE* and *region-code* with the values returned in the previous step.

      ```
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
              "StringEquals": {
                "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com",
                "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa"
              }
            }
          }
        ]
      }
      ```

   1. Create the role. You can change `AmazonEKS_EBS_CSI_DriverRole` to a different name. If you change it, make sure to change it in later steps.

      ```
      aws iam create-role \
            --role-name AmazonEKS_EBS_CSI_DriverRole \
            --assume-role-policy-document file://"aws-ebs-csi-driver-trust-policy.json"
      ```

1. Attach a policy. AWS maintains an AWS managed policy or you can create your own custom policy. Attach the AWS managed policy to the role with the following command.

   ```
   aws iam attach-role-policy \
         --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicyV2 \
         --role-name AmazonEKS_EBS_CSI_DriverRole
   ```

1. If you use a custom [KMS key](https://aws.amazon.com/kms/) for encryption on your Amazon EBS volumes, customize the IAM role as needed. For example, do the following:

   1. Copy and paste the following code into a new `kms-key-for-encryption-on-ebs.json` file. Replace *custom-key-arn* with the custom [KMS key ARN](https://docs.aws.amazon.com/service-authorization/latest/reference/list_awskeymanagementservice.html#awskeymanagementservice-key).

      ```
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "kms:CreateGrant",
              "kms:ListGrants",
              "kms:RevokeGrant"
            ],
            "Resource": ["custom-key-arn"],
            "Condition": {
              "Bool": {
                "kms:GrantIsForAWSResource": "true"
              }
            }
          },
          {
            "Effect": "Allow",
            "Action": [
              "kms:Encrypt",
              "kms:Decrypt",
              "kms:ReEncrypt*",
              "kms:GenerateDataKey*",
              "kms:DescribeKey"
            ],
            "Resource": ["custom-key-arn"]
          }
        ]
      }
      ```

   1. Create the policy. You can change `KMS_Key_For_Encryption_On_EBS_Policy` to a different name. However, if you do, make sure to change it in later steps, too.

      ```
      aws iam create-policy \
            --policy-name KMS_Key_For_Encryption_On_EBS_Policy \
            --policy-document file://kms-key-for-encryption-on-ebs.json
      ```

   1. Attach the IAM policy to the role with the following command. Replace *111122223333* with your account ID.

      ```
      aws iam attach-role-policy \
            --policy-arn arn:aws:iam::111122223333:policy/KMS_Key_For_Encryption_On_EBS_Policy \
            --role-name AmazonEKS_EBS_CSI_DriverRole
      ```
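The AWS CLI steps above can be sketched end to end as a small script that derives the OIDC provider path from the issuer URL, writes the trust policy, and validates the JSON locally before calling IAM. The account ID, Region, and provider ID below are placeholders, and the final IAM calls are shown commented out because they require AWS credentials.

```
# Placeholders: substitute your own account ID and the issuer URL returned by
# "aws eks describe-cluster ... --query cluster.identity.oidc.issuer".
account_id="111122223333"
issuer="https://oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
oidc_provider="${issuer#https://}"   # strip the scheme to get the provider path

cat > aws-ebs-csi-driver-trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${account_id}:oidc-provider/${oidc_provider}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${oidc_provider}:aud": "sts.amazonaws.com",
          "${oidc_provider}:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa"
        }
      }
    }
  ]
}
EOF

# Validate the generated JSON locally before creating the role.
python3 -m json.tool aws-ebs-csi-driver-trust-policy.json > /dev/null && echo "trust policy OK"

# Then (requires AWS credentials):
# aws iam create-role --role-name AmazonEKS_EBS_CSI_DriverRole \
#   --assume-role-policy-document file://aws-ebs-csi-driver-trust-policy.json
```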

Now that you have created the Amazon EBS CSI driver IAM role, you can continue to the next section. When you deploy the add-on with this IAM role, it creates a service account named `ebs-csi-controller-sa` and is configured to use it. The service account is bound to a Kubernetes `ClusterRole` that’s assigned the required Kubernetes permissions.

## Step 2: Get the Amazon EBS CSI driver
<a name="managing-ebs-csi"></a>

We recommend that you install the Amazon EBS CSI driver through the Amazon EKS add-on to improve security and reduce the amount of work. To add an Amazon EKS add-on to your cluster, see [Create an Amazon EKS add-on](creating-an-add-on.md). For more information about add-ons, see [Amazon EKS add-ons](eks-add-ons.md).

**Important**  
Before adding the Amazon EBS driver as an Amazon EKS add-on, confirm that you don’t have a self-managed version of the driver installed on your cluster. If so, see [Uninstalling a self-managed Amazon EBS CSI driver](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/install.md#uninstalling-the-ebs-csi-driver) on GitHub.

**Note**  
By default, the RBAC role used by the EBS CSI has permissions to mutate nodes to support its taint removal feature. Due to limitations of Kubernetes RBAC, this also allows it to mutate any other Node in the cluster. The Helm chart has a parameter (`node.serviceAccount.disableMutation`) that disables mutating Node RBAC permissions for the ebs-csi-node service account. When enabled, driver features such as taint removal will not function.
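If you install via the Helm chart and want to drop those Node-mutation permissions, the values fragment might look like the following sketch, with the caveat from the note above that taint removal stops working.

```
# Helm values sketch for the aws-ebs-csi-driver chart: removes the
# ebs-csi-node service account's RBAC permission to mutate Nodes.
node:
  serviceAccount:
    disableMutation: true
```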

Alternatively, if you want a self-managed installation of the Amazon EBS CSI driver, see [Installation](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/install.md) on GitHub.

## Step 3: Deploy a sample application
<a name="ebs-sample-app"></a>

You can deploy a variety of sample apps and modify them as needed. For more information, see [Kubernetes Examples](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/tree/master/examples/kubernetes) on GitHub.
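As a minimal starting point before exploring the repository examples, a dynamically provisioned claim might be sketched like this. The claim name, size, and storage class name `ebs-sc` are illustrative assumptions; a storage class backed by `ebs.csi.aws.com` is assumed to exist.

```
# Sketch: assumes a StorageClass "ebs-sc" (hypothetical) backed by ebs.csi.aws.com.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 8Gi
```

A Pod binds the volume by referencing `ebs-claim` under `spec.volumes` as a `persistentVolumeClaim`.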

# Use elastic file system storage with Amazon EFS
<a name="efs-csi"></a>

[Amazon Elastic File System](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) (Amazon EFS) provides serverless, fully elastic file storage so that you can share file data without provisioning or managing storage capacity and performance. The [Amazon EFS Container Storage Interface (CSI) driver](https://github.com/kubernetes-sigs/aws-efs-csi-driver) allows Kubernetes clusters running on AWS to mount Amazon EFS file systems as persistent volumes. This topic shows you how to deploy the Amazon EFS CSI driver to your Amazon EKS cluster.

## Considerations
<a name="efs-csi-considerations"></a>
+ The Amazon EFS CSI driver isn’t compatible with Windows-based container images.
+ You can’t use [dynamic provisioning](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/efs/dynamic_provisioning/README.md) for persistent volumes with Fargate nodes, but you can use [static provisioning](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/efs/static_provisioning/README.md).
+ [Dynamic provisioning](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/efs/dynamic_provisioning/README.md) requires version [1.2](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/CHANGELOG-1.x.md#v12) or later of the driver. You can use [static provisioning](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/efs/static_provisioning/README.md) for persistent volumes using version `1.1` of the driver on any supported Amazon EKS cluster version (see [Amazon EKS supported versions](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html)).
+ Version [1.3.2](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/CHANGELOG-1.x.md#v132) or later of this driver supports the Arm64 architecture, including Amazon EC2 Graviton-based instances.
+ Version [1.4.2](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/CHANGELOG-1.x.md#v142) or later of this driver supports using FIPS for mounting file systems.
+ Take note of the resource quotas for Amazon EFS. For more information, see [Amazon EFS quotas](https://docs.aws.amazon.com/efs/latest/ug/limits.html).
+ Starting in version [2.0.0](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/CHANGELOG-2.x.md#v200), this driver switched from using `stunnel` to `efs-proxy` for TLS connections. When `efs-proxy` is used, it will open a number of threads equal to one plus the number of cores for the node it’s running on.
+ The Amazon EFS CSI driver isn’t compatible with Amazon EKS Hybrid Nodes.

## Prerequisites
<a name="efs-csi-prereqs"></a>
+ The Amazon EFS CSI driver needs AWS Identity and Access Management (IAM) permissions.
  +  AWS suggests using EKS Pod Identities. For more information, see [Overview of setting up EKS Pod Identities](pod-identities.md#pod-id-setup-overview).
  + For information about IAM roles for service accounts and setting up an IAM OpenID Connect (OIDC) provider for your cluster, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).
+ Version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.
+ The `kubectl` command line tool is installed on your device or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is `1.29`, you can use `kubectl` version `1.28`, `1.29`, or `1.30` with it. To install or upgrade `kubectl`, see [Set up `kubectl` and `eksctl`](install-kubectl.md).

**Note**  
A Pod running on Fargate automatically mounts an Amazon EFS file system, without needing manual driver installation steps.

## Step 1: Create an IAM role
<a name="efs-create-iam-resources"></a>

The Amazon EFS CSI driver requires IAM permissions to interact with your file system. Create an IAM role and attach the `arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy` managed policy to it.

**Note**  
If you want to use both Amazon EFS and Amazon S3 file system storage, you must attach both the `AmazonEFSCSIDriverPolicy` and the `AmazonS3FilesCSIDriverPolicy` managed policies to your IAM role. For more information about Amazon S3 file system storage, see [Use Amazon S3 file system storage with the Amazon EFS CSI driver](s3files-csi.md).

To implement this procedure, you can use one of these tools:
+  [`eksctl`](#eksctl_efs_store_app_data) 
+  [AWS Management Console](#console_efs_store_app_data) 
+  [AWS CLI](#awscli_efs_store_app_data) 

**Note**  
The specific steps in this procedure are written for using the driver as an Amazon EKS add-on. For details on self-managed installations, see [Set up driver permission](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/docs/install.md#set-up-driver-permission) on GitHub.

### `eksctl`
<a name="eksctl_efs_store_app_data"></a>

#### If using Pod Identities
<a name="efs-eksctl-pod-identities"></a>

Run the following commands to create an IAM role and Pod Identity association with `eksctl`. Replace `my-cluster` with your cluster name. You can also replace `AmazonEKS_EFS_CSI_DriverRole` with a different name.

```
export cluster_name=my-cluster
export role_name=AmazonEKS_EFS_CSI_DriverRole
eksctl create podidentityassociation \
    --service-account-name efs-csi-controller-sa \
    --namespace kube-system \
    --cluster $cluster_name \
    --role-name $role_name \
    --permission-policy-arns arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy
```

#### If using IAM roles for service accounts
<a name="efs-eksctl-irsa"></a>

Run the following commands to create an IAM role with `eksctl`. Replace `my-cluster` with your cluster name. You can also replace `AmazonEKS_EFS_CSI_DriverRole` with a different name.

```
export cluster_name=my-cluster
export role_name=AmazonEKS_EFS_CSI_DriverRole
eksctl create iamserviceaccount \
    --name efs-csi-controller-sa \
    --namespace kube-system \
    --cluster $cluster_name \
    --role-name $role_name \
    --role-only \
    --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy \
    --approve
TRUST_POLICY=$(aws iam get-role --output json --role-name $role_name --query 'Role.AssumeRolePolicyDocument' | \
    sed -e 's/efs-csi-controller-sa/efs-csi-*/' -e 's/StringEquals/StringLike/')
aws iam update-assume-role-policy --role-name $role_name --policy-document "$TRUST_POLICY"
```
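The `sed` expression above widens the trust policy so that any service account matching `efs-csi-*` can assume the role. Run against a sample condition line (illustrative only, not a real role document), it behaves like this:

```
# Demonstrates the rewrite applied to the role's trust policy above:
# the exact service account match becomes a StringLike wildcard.
echo '"StringEquals": { "oidc:sub": "system:serviceaccount:kube-system:efs-csi-controller-sa" }' \
  | sed -e 's/efs-csi-controller-sa/efs-csi-*/' -e 's/StringEquals/StringLike/'
# → "StringLike": { "oidc:sub": "system:serviceaccount:kube-system:efs-csi-*" }
```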

### AWS Management Console
<a name="console_efs_store_app_data"></a>

Use the following steps to create an IAM role with the AWS Management Console.

1. Open the IAM console at https://console.aws.amazon.com/iam/.

1. In the left navigation pane, choose **Roles**.

1. On the **Roles** page, choose **Create role**.

1. On the **Select trusted entity** page, do the following:

   1. If using EKS Pod Identities:

      1. In the **Trusted entity type** section, choose **AWS service**.

      1. In the **Service or use case** drop down, choose **EKS**.

      1. In the **Use case** section, choose **EKS - Pod Identity**.

      1. Choose **Next**.

   1. If using IAM roles for service accounts:

      1. In the **Trusted entity type** section, choose **Web identity**.

      1. For **Identity provider**, choose the **OpenID Connect provider URL** for your cluster (as shown under **Overview** in Amazon EKS).

      1. For **Audience**, choose `sts.amazonaws.com`.

      1. Choose **Next**.

1. On the **Add permissions** page, do the following:

   1. In the **Filter policies** box, enter `AmazonEFSCSIDriverPolicy`.

   1. Select the check box to the left of the `AmazonEFSCSIDriverPolicy` returned in the search.

   1. Choose **Next**.

1. On the **Name, review, and create** page, do the following:

   1. For **Role name**, enter a unique name for your role, such as `AmazonEKS_EFS_CSI_DriverRole`.

   1. Under **Add tags (Optional)**, add metadata to the role by attaching tags as key-value pairs. For more information about using tags in IAM, see [Tagging IAM resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) in the *IAM User Guide*.

   1. Choose **Create role**.

1. After the role is created:

   1. If using EKS Pod Identities:

      1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

      1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to configure the EKS Pod Identity association for.

      1. Choose the **Access** tab.

      1. In **Pod Identity associations**, choose **Create**.

      1. Choose the **IAM role** dropdown and select your newly created role.

      1. Choose the **Kubernetes namespace** field and input `kube-system`.

      1. Choose the **Kubernetes service account** field and input `efs-csi-controller-sa`.

      1. Choose **Create**.

      1. For more information on creating Pod Identity associations, see [Create a Pod Identity association (AWS Console)](pod-id-association.md#pod-id-association-create).

   1. If using IAM roles for service accounts:

      1. Choose the role to open it for editing.

      1. Choose the **Trust relationships** tab, and then choose **Edit trust policy**.

      1. Find the line that looks similar to the following line:

         ```
         "oidc.eks.<region-code>.amazonaws.com/id/<EXAMPLED539D4633E53DE1B71EXAMPLE>:aud": "sts.amazonaws.com"
         ```

         Add the following line above the previous line. Replace `<region-code>` with the AWS Region that your cluster is in. Replace `<EXAMPLED539D4633E53DE1B71EXAMPLE>` with your cluster’s OIDC provider ID.

         ```
         "oidc.eks.<region-code>.amazonaws.com/id/<EXAMPLED539D4633E53DE1B71EXAMPLE>:sub": "system:serviceaccount:kube-system:efs-csi-*",
         ```

      1. Modify the `Condition` operator from `"StringEquals"` to `"StringLike"`.

      1. Choose **Update policy** to finish.

### AWS CLI
<a name="awscli_efs_store_app_data"></a>

Use the following commands to create an IAM role with the AWS CLI.

#### If using Pod Identities
<a name="efs-cli-pod-identities"></a>

1. Create the IAM role that grants the `AssumeRole` and `TagSession` actions to the `pods.eks.amazonaws.com` service.

   1. Copy the following contents to a file named `aws-efs-csi-driver-trust-policy-pod-identity.json`.

      ```
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
                  "Effect": "Allow",
                  "Principal": {
                      "Service": "pods.eks.amazonaws.com"
                  },
                  "Action": [
                      "sts:AssumeRole",
                      "sts:TagSession"
                  ]
              }
          ]
      }
      ```

   1. Create the role. Replace `my-cluster` with your cluster name. You can also replace `AmazonEKS_EFS_CSI_DriverRole` with a different name.

      ```
      export cluster_name=my-cluster
      export role_name=AmazonEKS_EFS_CSI_DriverRole
      aws iam create-role \
        --role-name $role_name \
        --assume-role-policy-document file://"aws-efs-csi-driver-trust-policy-pod-identity.json"
      ```

1. Attach the required AWS managed policy to the role with the following command.

   ```
   aws iam attach-role-policy \
     --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy \
     --role-name $role_name
   ```

1. Run the following command to create the Pod Identity association. Replace `<111122223333>` with your account ID.

   ```
   aws eks create-pod-identity-association --cluster-name $cluster_name --role-arn arn:aws:iam::<111122223333>:role/$role_name --namespace kube-system --service-account efs-csi-controller-sa
   ```

1. For more information on creating Pod Identity associations, see [Create a Pod Identity association (AWS Console)](pod-id-association.md#pod-id-association-create).

#### If using IAM roles for service accounts
<a name="efs-cli-irsa"></a>

1. View your cluster’s OIDC provider URL. Replace `my-cluster` with your cluster name. You can also replace `AmazonEKS_EFS_CSI_DriverRole` with a different name.

   ```
   export cluster_name=my-cluster
   export role_name=AmazonEKS_EFS_CSI_DriverRole
   aws eks describe-cluster --name $cluster_name --query "cluster.identity.oidc.issuer" --output text
   ```

   An example output is as follows.

   ```
   https://oidc.eks.<region-code>.amazonaws.com/id/<EXAMPLED539D4633E53DE1B71EXAMPLE>
   ```

   If the output from the command is `None`, review the **Prerequisites**.
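
   The OIDC provider ID is the last path segment of the issuer URL. As a convenience, you can extract it with shell parameter expansion — a minimal sketch, using a sample issuer URL in place of your actual command output:

   ```
   # Sample issuer URL; substitute the describe-cluster output from above
   issuer="https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"

   # "##*/" removes everything through the final "/", leaving the provider ID
   oidc_id="${issuer##*/}"
   echo "$oidc_id"
   ```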

1. Create the IAM role that grants the `AssumeRoleWithWebIdentity` action.

   1. Copy the following contents to a file named `aws-efs-csi-driver-trust-policy.json`. Replace `<111122223333>` with your account ID. Replace `<EXAMPLED539D4633E53DE1B71EXAMPLE>` and `<region-code>` with the values returned in the previous step.

      ```
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "Federated": "arn:aws:iam::<111122223333>:oidc-provider/oidc.eks.<region-code>.amazonaws.com/id/<EXAMPLED539D4633E53DE1B71EXAMPLE>"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
              "StringLike": {
                "oidc.eks.<region-code>.amazonaws.com/id/<EXAMPLED539D4633E53DE1B71EXAMPLE>:sub": "system:serviceaccount:kube-system:efs-csi-*",
                "oidc.eks.<region-code>.amazonaws.com/id/<EXAMPLED539D4633E53DE1B71EXAMPLE>:aud": "sts.amazonaws.com"
              }
            }
          }
        ]
      }
      ```

   1. Create the role.

      ```
      aws iam create-role \
        --role-name $role_name \
        --assume-role-policy-document file://"aws-efs-csi-driver-trust-policy.json"
      ```

1. Attach the required AWS managed policy to the role with the following command.

   ```
   aws iam attach-role-policy \
     --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy \
     --role-name $role_name
   ```

## Step 2: Get the Amazon EFS CSI driver
<a name="efs-install-driver"></a>

We recommend that you install the Amazon EFS CSI driver through the Amazon EKS add-on. To add an Amazon EKS add-on to your cluster, see [Create an Amazon EKS add-on](creating-an-add-on.md). For more information about add-ons, see [Amazon EKS add-ons](eks-add-ons.md). If you’re unable to use the Amazon EKS add-on, we encourage you to submit an issue explaining why to the [Containers roadmap GitHub repository](https://github.com/aws/containers-roadmap/issues).

**Important**  
Before adding the Amazon EFS driver as an Amazon EKS add-on, confirm that you don’t have a self-managed version of the driver installed on your cluster. If so, see [Uninstalling the Amazon EFS CSI Driver](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/docs/install.md#uninstalling-the-amazon-efs-csi-driver) on GitHub.

Alternatively, if you want a self-managed installation of the Amazon EFS CSI driver, see [Installation](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/docs/install.md) on GitHub.

## Step 3: Create an Amazon EFS file system
<a name="efs-create-filesystem"></a>

To create an Amazon EFS file system, see [Create an Amazon EFS file system for Amazon EKS](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/docs/efs-create-filesystem.md) on GitHub.

## Step 4: Deploy a sample application
<a name="efs-sample-app"></a>

You can deploy a variety of sample apps and modify them as needed. For more information, see [Examples](https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/examples/kubernetes) on GitHub.
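
As one concrete illustration, dynamic provisioning with the EFS CSI driver typically pairs a StorageClass that uses EFS access points with a `ReadWriteMany` claim. The following is a minimal sketch based on the driver's dynamic provisioning example on GitHub; `fs-0123456789abcdef0` is a placeholder that you replace with your file system ID:

```
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap             # provision an EFS access point per volume
  fileSystemId: fs-0123456789abcdef0   # placeholder: your EFS file system ID
  directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
```

A Pod that mounts `efs-claim` can then read and write the same file system from any node in the cluster.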

# Use Amazon S3 file system storage with the Amazon EFS CSI driver
<a name="s3files-csi"></a>

S3 Files is a shared file system that connects any AWS compute directly with your data in Amazon S3. It provides fast, direct access to all of your S3 data as files with full file system semantics and low-latency performance, without your data ever leaving S3. That means file-based applications, agents, and teams can access and work with S3 data as a file system using the tools they already depend on. The [Amazon EFS Container Storage Interface (CSI) driver](https://github.com/kubernetes-sigs/aws-efs-csi-driver) allows Kubernetes clusters running on AWS to mount Amazon S3 file systems as persistent volumes starting with version [3.0.0](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/CHANGELOG-3.x.md#v300). This topic shows you how to use the Amazon EFS CSI driver to manage Amazon S3 file systems on your Amazon EKS cluster.

## Considerations
<a name="s3files-csi-considerations"></a>
+ The Amazon EFS CSI driver isn’t compatible with Windows-based container images.
+ EKS Fargate doesn’t support S3 Files.
+ The Amazon EFS CSI driver isn’t compatible with Amazon EKS Hybrid Nodes.
+ Amazon S3 Files support in the Amazon EFS CSI driver starts with version [3.0.0](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/CHANGELOG-3.x.md#v300).

## Prerequisites
<a name="s3files-csi-prereqs"></a>
+ The Amazon EFS CSI driver needs AWS Identity and Access Management (IAM) permissions.
  +  AWS suggests using EKS Pod Identities. For more information, see [Overview of setting up EKS Pod Identities](pod-identities.md#pod-id-setup-overview).
  + For information about IAM roles for service accounts and setting up an IAM OpenID Connect (OIDC) provider for your cluster, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).
+ Version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.
+ The `kubectl` command line tool is installed on your device or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is `1.29`, you can use `kubectl` version `1.28`, `1.29`, or `1.30` with it. To install or upgrade `kubectl`, see [Set up `kubectl` and `eksctl`](install-kubectl.md).
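
The `aws --version` pipeline in the prerequisites isolates the version number by splitting on `/` and then on the first space. A quick sketch with a representative output string (the version shown is only an example, not a required release):

```
# Representative `aws --version` output; the actual string varies by install
sample='aws-cli/2.15.30 Python/3.11.8 Linux/6.1.0 exe/x86_64'

# Same pipeline as in the prerequisites: field 2 after "/", then field 1 before " "
echo "$sample" | cut -d / -f2 | cut -d ' ' -f1
```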

## Step 1: Create IAM roles
<a name="s3files-create-iam-resources"></a>

The Amazon EFS CSI driver requires IAM permissions to interact with your file system. The EFS CSI driver uses two service accounts with separate IAM roles:
+  `efs-csi-controller-sa` — used by the controller, requires `AmazonS3FilesCSIDriverPolicy`.
+  `efs-csi-node-sa` — used by the node DaemonSet, requires:
  +  `AmazonS3ReadOnlyAccess` — enables streaming reads directly from your S3 bucket for higher throughput.
  +  `AmazonElasticFileSystemsUtils` — enables publishing efs-utils logs to Amazon CloudWatch for visibility into mount operations and easier troubleshooting.

**Note**  
If you want to use both Amazon S3 file system and Amazon EFS storage, you must attach both the `AmazonS3FilesCSIDriverPolicy` and the `AmazonEFSCSIDriverPolicy` managed policies to the controller role. For more information about Amazon EFS storage, see [Use elastic file system storage with Amazon EFS](efs-csi.md).

To implement this procedure, you can use one of these tools:
+  [`eksctl`](#eksctl_s3files_store_app_data) 
+  [AWS Management Console](#console_s3files_store_app_data) 
+  [AWS CLI](#awscli_s3files_store_app_data) 

**Note**  
The specific steps in this procedure are written for using the driver as an Amazon EKS add-on. For details on self-managed installations, see [Set up driver permission](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/docs/install.md#set-up-driver-permission) on GitHub.

### `eksctl`
<a name="eksctl_s3files_store_app_data"></a>

#### If using Pod Identities
<a name="s3files-eksctl-pod-identities"></a>

Run the following commands to create IAM roles and Pod Identity associations with `eksctl`. Replace *my-cluster* with your cluster name.

```
export cluster_name=my-cluster

# Create the controller role
eksctl create podidentityassociation \
    --service-account-name efs-csi-controller-sa \
    --namespace kube-system \
    --cluster $cluster_name \
    --role-name AmazonEKS_EFS_CSI_ControllerRole \
    --permission-policy-arns arn:aws:iam::aws:policy/service-role/AmazonS3FilesCSIDriverPolicy

# Create the node role
eksctl create podidentityassociation \
    --service-account-name efs-csi-node-sa \
    --namespace kube-system \
    --cluster $cluster_name \
    --role-name AmazonEKS_EFS_CSI_NodeRole \
    --permission-policy-arns arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess,arn:aws:iam::aws:policy/AmazonElasticFileSystemsUtils
```

#### If using IAM roles for service accounts
<a name="s3files-eksctl-irsa"></a>

Run the following commands to create IAM roles with `eksctl`. Replace *my-cluster* with your cluster name and *region-code* with your AWS Region code.

```
export cluster_name=my-cluster
export region_code=region-code

# Create the controller role
export controller_role_name=AmazonEKS_EFS_CSI_ControllerRole
eksctl create iamserviceaccount \
    --name efs-csi-controller-sa \
    --namespace kube-system \
    --cluster $cluster_name \
    --role-name $controller_role_name \
    --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonS3FilesCSIDriverPolicy \
    --approve \
    --region $region_code

# Create the node role
export node_role_name=AmazonEKS_EFS_CSI_NodeRole
eksctl create iamserviceaccount \
    --name efs-csi-node-sa \
    --namespace kube-system \
    --cluster $cluster_name \
    --role-name $node_role_name \
    --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
    --attach-policy-arn arn:aws:iam::aws:policy/AmazonElasticFileSystemsUtils \
    --approve \
    --region $region_code
```

### AWS Management Console
<a name="console_s3files_store_app_data"></a>

Use the following steps to create the IAM roles with the AWS Management Console.

1. Open the IAM console at https://console.aws.amazon.com/iam/.

1. In the left navigation pane, choose **Roles**.

1. On the **Roles** page, choose **Create role**.

1. On the **Select trusted entity** page, do the following:

   1. If using EKS Pod Identities:

      1. In the **Trusted entity type** section, choose **AWS service**.

      1. In the **Service or use case** dropdown, choose **EKS**.

      1. In the **Use case** section, choose **EKS - Pod Identity**.

      1. Choose **Next**.

   1. If using IAM roles for service accounts:

      1. In the **Trusted entity type** section, choose **Web identity**.

      1. For **Identity provider**, choose the **OpenID Connect provider URL** for your cluster (as shown under **Overview** in Amazon EKS).

      1. For **Audience**, choose `sts.amazonaws.com`.

      1. Choose **Next**.

1. On the **Add permissions** page, do the following:

   1. In the **Filter policies** box, enter `AmazonS3FilesCSIDriverPolicy`.

   1. Select the check box to the left of the policy returned in the search.

   1. Choose **Next**.

1. On the **Name, review, and create** page, do the following:

   1. For **Role name**, enter a unique name for your role, such as `AmazonEKS_EFS_CSI_ControllerRole`.

   1. Under **Add tags (Optional)**, add metadata to the role by attaching tags as key-value pairs. For more information about using tags in IAM, see [Tagging IAM resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) in the *IAM User Guide*.

   1. Choose **Create role**.

1. After the role is created:

   1. If using EKS Pod Identities:

      1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

      1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to configure the EKS Pod Identity association for.

      1. Choose the **Access** tab.

      1. In **Pod Identity associations**, choose **Create**.

      1. Choose the **IAM role** dropdown and select your newly created role.

      1. Choose the **Kubernetes namespace** field and input `kube-system`.

      1. Choose the **Kubernetes service account** field and input `efs-csi-controller-sa`.

      1. Choose **Create**.

      1. For more information on creating Pod Identity associations, see [Create a Pod Identity association (AWS Console)](pod-id-association.md#pod-id-association-create).

      1. Repeat the above steps to create a second role for the node service account. On the **Add permissions** page, attach `AmazonS3ReadOnlyAccess` and `AmazonElasticFileSystemsUtils` instead. Then create a Pod Identity association with `efs-csi-node-sa` for the **Kubernetes service account** field.

   1. If using IAM roles for service accounts:

      1. Choose the role to open it for editing.

      1. Choose the **Trust relationships** tab, and then choose **Edit trust policy**.

      1. Find the line that looks similar to the following line:

         ```
         "oidc.eks.region-code.amazonaws.com/id/<EXAMPLED539D4633E53DE1B71EXAMPLE>:aud": "sts.amazonaws.com"
         ```

         Add the following line above the previous line. Replace `<region-code>` with the AWS Region that your cluster is in. Replace `<EXAMPLED539D4633E53DE1B71EXAMPLE>` with your cluster’s OIDC provider ID.

         ```
         "oidc.eks.<region-code>.amazonaws.com/id/<EXAMPLED539D4633E53DE1B71EXAMPLE>:sub": "system:serviceaccount:kube-system:efs-csi-controller-sa",
         ```

      1. Choose **Update policy** to finish.

      1. Repeat the above steps to create a second role for the node service account. On the **Add permissions** page, attach `AmazonS3ReadOnlyAccess` and `AmazonElasticFileSystemsUtils` instead. In the trust policy, use `efs-csi-node-sa` for the `:sub` condition value.

### AWS CLI
<a name="awscli_s3files_store_app_data"></a>

Run the following commands to create IAM roles with AWS CLI.

#### If using Pod Identities
<a name="s3files-cli-pod-identities"></a>

1. Create the IAM role that grants the `AssumeRole` and `TagSession` actions to the `pods.eks.amazonaws.com` service.

   1. Copy the following contents to a file named `aws-efs-csi-driver-trust-policy-pod-identity.json`.

      ```
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
                  "Effect": "Allow",
                  "Principal": {
                      "Service": "pods.eks.amazonaws.com"
                  },
                  "Action": [
                      "sts:AssumeRole",
                      "sts:TagSession"
                  ]
              }
          ]
      }
      ```

   1. Create the role. Replace *my-cluster* with your cluster name.

      ```
      export cluster_name=my-cluster
      export controller_role_name=AmazonEKS_EFS_CSI_ControllerRole
      aws iam create-role \
        --role-name $controller_role_name \
        --assume-role-policy-document file://"aws-efs-csi-driver-trust-policy-pod-identity.json"
      ```

1. Attach the required AWS managed policy to the controller role.

   ```
   aws iam attach-role-policy \
     --policy-arn arn:aws:iam::aws:policy/service-role/AmazonS3FilesCSIDriverPolicy \
     --role-name $controller_role_name
   ```

1. Create the node IAM role using the same trust policy.

   ```
   export node_role_name=AmazonEKS_EFS_CSI_NodeRole
   aws iam create-role \
     --role-name $node_role_name \
     --assume-role-policy-document file://"aws-efs-csi-driver-trust-policy-pod-identity.json"
   ```

1. Attach the required AWS managed policies to the node role.

   ```
   aws iam attach-role-policy \
     --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
     --role-name $node_role_name
   
   aws iam attach-role-policy \
     --policy-arn arn:aws:iam::aws:policy/AmazonElasticFileSystemsUtils \
     --role-name $node_role_name
   ```

1. Run the following commands to create the Pod Identity associations. Replace `<111122223333>` with your account ID.

   ```
   aws eks create-pod-identity-association --cluster-name $cluster_name --role-arn arn:aws:iam::<111122223333>:role/$controller_role_name --namespace kube-system --service-account efs-csi-controller-sa
   aws eks create-pod-identity-association --cluster-name $cluster_name --role-arn arn:aws:iam::<111122223333>:role/$node_role_name --namespace kube-system --service-account efs-csi-node-sa
   ```

1. For more information on creating Pod Identity associations, see [Create a Pod Identity association (AWS Console)](pod-id-association.md#pod-id-association-create).

#### If using IAM roles for service accounts
<a name="s3files-cli-irsa"></a>

1. View your cluster’s OIDC provider URL. Replace *my-cluster* with your cluster name.

   ```
   export cluster_name=my-cluster
   aws eks describe-cluster --name $cluster_name --query "cluster.identity.oidc.issuer" --output text
   ```

   An example output is as follows.

   ```
   https://oidc.eks.<region-code>.amazonaws.com/id/<EXAMPLED539D4633E53DE1B71EXAMPLE>
   ```

   If the output from the command is `None`, review the **Prerequisites**.

1. Create the IAM role for the controller service account.

   1. Copy the following contents to a file named `controller-trust-policy.json`. Replace `<111122223333>` with your account ID. Replace `<EXAMPLED539D4633E53DE1B71EXAMPLE>` and `<region-code>` with the values returned in the previous step.

      ```
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                      "Federated": "arn:aws:iam::<111122223333>:oidc-provider/oidc.eks.<region-code>.amazonaws.com/id/<EXAMPLED539D4633E53DE1B71EXAMPLE>"
                  },
                  "Action": "sts:AssumeRoleWithWebIdentity",
                  "Condition": {
                      "StringEquals": {
                          "oidc.eks.<region-code>.amazonaws.com/id/<EXAMPLED539D4633E53DE1B71EXAMPLE>:aud": "sts.amazonaws.com",
                          "oidc.eks.<region-code>.amazonaws.com/id/<EXAMPLED539D4633E53DE1B71EXAMPLE>:sub": "system:serviceaccount:kube-system:efs-csi-controller-sa"
                      }
                  }
              }
          ]
      }
      ```

   1. Create the role.

      ```
      export controller_role_name=AmazonEKS_EFS_CSI_ControllerRole
      aws iam create-role \
        --role-name $controller_role_name \
        --assume-role-policy-document file://"controller-trust-policy.json"
      ```

1. Attach the required AWS managed policy to the controller role.

   ```
   aws iam attach-role-policy \
     --policy-arn arn:aws:iam::aws:policy/service-role/AmazonS3FilesCSIDriverPolicy \
     --role-name $controller_role_name
   ```

1. Create the IAM role for the node service account.

   1. Copy the following contents to a file named `node-trust-policy.json`. Replace `<111122223333>` with your account ID. Replace `<EXAMPLED539D4633E53DE1B71EXAMPLE>` and `<region-code>` with the values returned in step 1.

      ```
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                      "Federated": "arn:aws:iam::<111122223333>:oidc-provider/oidc.eks.<region-code>.amazonaws.com/id/<EXAMPLED539D4633E53DE1B71EXAMPLE>"
                  },
                  "Action": "sts:AssumeRoleWithWebIdentity",
                  "Condition": {
                      "StringEquals": {
                          "oidc.eks.<region-code>.amazonaws.com/id/<EXAMPLED539D4633E53DE1B71EXAMPLE>:sub": "system:serviceaccount:kube-system:efs-csi-node-sa",
                          "oidc.eks.<region-code>.amazonaws.com/id/<EXAMPLED539D4633E53DE1B71EXAMPLE>:aud": "sts.amazonaws.com"
                      }
                  }
              }
          ]
      }
      ```

   1. Create the role.

      ```
      export node_role_name=AmazonEKS_EFS_CSI_NodeRole
      aws iam create-role \
        --role-name $node_role_name \
        --assume-role-policy-document file://"node-trust-policy.json"
      ```

1. Attach the required AWS managed policies to the node role.

   ```
   aws iam attach-role-policy \
     --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
     --role-name $node_role_name
   
   aws iam attach-role-policy \
     --policy-arn arn:aws:iam::aws:policy/AmazonElasticFileSystemsUtils \
     --role-name $node_role_name
   ```

**Note**  
The `AmazonS3ReadOnlyAccess` policy grants read access to all S3 buckets. To constrain access to specific buckets, you can detach it and replace it with a tag-based inline policy. See [Amazon EFS CSI driver IAM policy documentation](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/docs/iam-policy-create.md) on GitHub for details.
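
As an illustration, one common variant scopes the policy by bucket ARN rather than by tags. The following is a minimal sketch of such an inline policy; `amzn-s3-demo-bucket` is a placeholder bucket name, and you should limit the actions to what your workload actually reads:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}
```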

## Step 2: Get the Amazon EFS CSI driver
<a name="s3files-install-driver"></a>

We recommend that you install the Amazon EFS CSI driver through the Amazon EKS add-on. To add an Amazon EKS add-on to your cluster, see [Create an Amazon EKS add-on](creating-an-add-on.md). For more information about add-ons, see [Amazon EKS add-ons](eks-add-ons.md). If you’re unable to use the Amazon EKS add-on, we encourage you to submit an issue explaining why to the [Containers roadmap GitHub repository](https://github.com/aws/containers-roadmap/issues).

**Important**  
Before adding the Amazon EFS driver as an Amazon EKS add-on, confirm that you don’t have a self-managed version of the driver installed on your cluster. If so, see [Uninstalling the Amazon EFS CSI Driver](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/docs/install.md#uninstalling-the-amazon-efs-csi-driver) on GitHub.

Alternatively, if you want a self-managed installation of the Amazon EFS CSI driver, see [Installation](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/docs/install.md) on GitHub.

## Step 3: Create an Amazon S3 file system
<a name="s3files-create-filesystem"></a>

To create an Amazon S3 file system, see [Create an Amazon S3 file system for Amazon EKS](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/docs/s3files-create-filesystem.md) on GitHub.

## Step 4: Deploy a sample application
<a name="s3files-sample-app"></a>

You can deploy a variety of sample apps and modify them as needed. For more information, see [Examples](https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/examples/kubernetes) on GitHub.

# Use high-performance app storage with Amazon FSx for Lustre
<a name="fsx-csi"></a>

The [Amazon FSx for Lustre Container Storage Interface (CSI) driver](https://github.com/kubernetes-sigs/aws-fsx-csi-driver) provides a CSI interface that allows Amazon EKS clusters to manage the lifecycle of Amazon FSx for Lustre file systems. For more information, see the [Amazon FSx for Lustre User Guide](https://docs.aws.amazon.com/fsx/latest/LustreGuide/what-is.html).

For details on how to deploy the Amazon FSx for Lustre CSI driver to your Amazon EKS cluster and verify that it works, see [Deploy the FSx for Lustre driver](fsx-csi-create.md).

# Deploy the FSx for Lustre driver
<a name="fsx-csi-create"></a>

This topic shows you how to deploy the [FSx for Lustre CSI driver](fsx-csi.md) to your Amazon EKS cluster and verify that it works. We recommend using the latest version of the driver. For available versions, see [CSI Specification Compatibility Matrix](https://github.com/kubernetes-sigs/aws-fsx-csi-driver/blob/master/docs/README.md#csi-specification-compatibility-matrix) on GitHub.

**Note**  
The driver isn’t supported on Fargate.

For detailed descriptions of the available parameters and complete examples that demonstrate the driver’s features, see the [FSx for Lustre Container Storage Interface (CSI) driver](https://github.com/kubernetes-sigs/aws-fsx-csi-driver) project on GitHub.

## Prerequisites
<a name="fsx-csi-prereqs"></a>
+ An existing cluster.
+ The Amazon FSx CSI Driver EKS add-on supports authentication through either EKS Pod Identity or IAM Roles for Service Accounts (IRSA). To use EKS Pod Identity, install the Pod Identity agent before or after deploying the FSx CSI Driver add-on. For more information, see [Set up the Amazon EKS Pod Identity Agent](pod-id-agent-setup.md). To use IRSA instead, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).
+ Version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.
+ Version `0.215.0` or later of the `eksctl` command line tool installed on your device or AWS CloudShell. To install or update `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.
+ The `kubectl` command line tool is installed on your device or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is `1.29`, you can use `kubectl` version `1.28`, `1.29`, or `1.30` with it. To install or upgrade `kubectl`, see [Set up `kubectl` and `eksctl`](install-kubectl.md).

## Step 1: Create an IAM role
<a name="fsx-create-iam-role"></a>

The Amazon FSx CSI plugin requires IAM permissions to make calls to AWS APIs on your behalf.

**Note**  
Pods will have access to the permissions that are assigned to the IAM role unless you block access to IMDS. For more information, see [Secure Amazon EKS clusters with best practices](security-best-practices.md).

The following procedure shows you how to create an IAM role and attach the AWS managed policy to it.

1. Create an IAM role and attach the AWS managed policy with the following command. Replace `my-cluster` with the name of the cluster you want to use. The command deploys an AWS CloudFormation stack that creates an IAM role and attaches the IAM policy to it.

   ```
   eksctl create iamserviceaccount \
       --name fsx-csi-controller-sa \
       --namespace kube-system \
       --cluster my-cluster \
       --role-name AmazonEKS_FSx_CSI_DriverRole \
       --role-only \
       --attach-policy-arn arn:aws:iam::aws:policy/AmazonFSxFullAccess \
       --approve
   ```

   You’ll see several lines of output as the service account is created. The last lines of output are similar to the following.

   ```
   [ℹ]  1 task: {
       2 sequential sub-tasks: {
           create IAM role for serviceaccount "kube-system/fsx-csi-controller-sa",
           create serviceaccount "kube-system/fsx-csi-controller-sa",
       } }
   [ℹ]  building iamserviceaccount stack "eksctl-my-cluster-addon-iamserviceaccount-kube-system-fsx-csi-controller-sa"
   [ℹ]  deploying stack "eksctl-my-cluster-addon-iamserviceaccount-kube-system-fsx-csi-controller-sa"
   [ℹ]  waiting for CloudFormation stack "eksctl-my-cluster-addon-iamserviceaccount-kube-system-fsx-csi-controller-sa"
   [ℹ]  created serviceaccount "kube-system/fsx-csi-controller-sa"
   ```

   Note the name of the AWS CloudFormation stack that was deployed. In the previous example output, the stack is named `eksctl-my-cluster-addon-iamserviceaccount-kube-system-fsx-csi-controller-sa`.
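
   In this example the stack name is assembled from the cluster name, namespace, and service account, so you can reconstruct it rather than searching the output — a sketch using the names from this procedure:

   ```
   cluster=my-cluster
   namespace=kube-system
   service_account=fsx-csi-controller-sa

   # Stack name pattern observed in the example output above
   echo "eksctl-${cluster}-addon-iamserviceaccount-${namespace}-${service_account}"
   ```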

Now that you have created the Amazon FSx CSI driver IAM role, you can continue to the next section. When you deploy the add-on with this IAM role, the add-on creates a service account named `fsx-csi-controller-sa` and is configured to use it. The service account is bound to a Kubernetes `clusterrole` that’s assigned the required Kubernetes permissions.

## Step 2: Install the Amazon FSx CSI driver
<a name="fsx-csi-deploy-driver"></a>

We recommend that you install the Amazon FSx CSI driver through the Amazon EKS add-on to improve security and reduce the amount of work. To add an Amazon EKS add-on to your cluster, see [Create an Amazon EKS add-on](creating-an-add-on.md). For more information about add-ons, see [Amazon EKS add-ons](eks-add-ons.md).

**Important**  
A pre-existing Amazon FSx CSI driver installation in the cluster can cause the add-on installation to fail. If you attempt to install the Amazon EKS add-on while a non-EKS FSx CSI driver exists, the installation fails due to resource conflicts. Use the `OVERWRITE` resolve-conflicts option during installation to resolve this issue.  

```
aws eks create-addon --addon-name aws-fsx-csi-driver --cluster-name my-cluster --resolve-conflicts OVERWRITE
```

Alternatively, if you want a self-managed installation of the Amazon FSx CSI driver, see [Installation](https://github.com/kubernetes-sigs/aws-fsx-csi-driver/blob/master/docs/install.md) on GitHub.

## Step 3: Deploy a storage class, persistent volume claim, and sample app
<a name="fsx-csi-deploy-storage-class"></a>

This procedure uses manifests from the [FSx for Lustre Container Storage Interface (CSI) driver](https://github.com/kubernetes-sigs/aws-fsx-csi-driver) GitHub repository to consume a dynamically provisioned FSx for Lustre volume.

1. Note the security group for your cluster. You can see it in the AWS Management Console under the **Networking** section or by using the following AWS CLI command. Replace `my-cluster` with the name of the cluster you want to use.

   ```
   aws eks describe-cluster --name my-cluster --query cluster.resourcesVpcConfig.clusterSecurityGroupId
   ```

1. Create a security group for your Amazon FSx file system according to the criteria shown in [Amazon VPC Security Groups](https://docs.aws.amazon.com/fsx/latest/LustreGuide/limit-access-security-groups.html#fsx-vpc-security-groups) in the Amazon FSx for Lustre User Guide. For the **VPC**, select the VPC of your cluster as shown under the **Networking** section. For "the security groups associated with your Lustre clients", use your cluster security group. You can leave the outbound rules alone to allow **All traffic**.

1. Download the storage class manifest with the following command.

   ```
   curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-fsx-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/storageclass.yaml
   ```

1. Edit the parameters section of the `storageclass.yaml` file. Replace every example value with your own values.

   ```
   parameters:
     subnetId: subnet-0eabfaa81fb22bcaf
     securityGroupIds: sg-068000ccf82dfba88
     deploymentType: PERSISTENT_1
     automaticBackupRetentionDays: "1"
     dailyAutomaticBackupStartTime: "00:00"
     copyTagsToBackups: "true"
     perUnitStorageThroughput: "200"
     dataCompressionType: "NONE"
     weeklyMaintenanceStartTime: "7:09:00"
     fileSystemTypeVersion: "2.12"
   ```
   +  ** `subnetId` ** – The subnet ID that the Amazon FSx for Lustre file system should be created in. Amazon FSx for Lustre isn’t supported in all Availability Zones. Open the Amazon FSx for Lustre console at https://console.aws.amazon.com/fsx/ to confirm that the subnet that you want to use is in a supported Availability Zone. The subnet can include your nodes, or can be a different subnet or VPC:
     + You can check for the node subnets in the AWS Management Console by selecting the node group under the **Compute** section.
     + If the subnet that you specify isn’t the same subnet that you have nodes in, then your VPCs must be [connected](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/amazon-vpc-to-amazon-vpc-connectivity-options.html), and you must ensure that you have the necessary ports open in your security groups.
   +  ** `securityGroupIds` ** – The ID of the security group you created for the file system.
   +  ** `deploymentType` (optional)** – The file system deployment type. Valid values are `SCRATCH_1`, `SCRATCH_2`, `PERSISTENT_1`, and `PERSISTENT_2`. For more information about deployment types, see [Create your Amazon FSx for Lustre file system](https://docs.aws.amazon.com/fsx/latest/LustreGuide/getting-started-step1.html).
   +  **other parameters (optional)** – For information about the other parameters, see [Edit StorageClass](https://github.com/kubernetes-sigs/aws-fsx-csi-driver/tree/master/examples/kubernetes/dynamic_provisioning#edit-storageclass) on GitHub.

1. Create the storage class manifest.

   ```
   kubectl apply -f storageclass.yaml
   ```

   An example output is as follows.

   ```
   storageclass.storage.k8s.io/fsx-sc created
   ```

1. Download the persistent volume claim manifest.

   ```
   curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-fsx-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/claim.yaml
   ```

1. (Optional) Edit the `claim.yaml` file. Change `1200Gi` to one of the following increment values, based on your storage requirements and the `deploymentType` that you selected in a previous step.

   ```
   storage: 1200Gi
   ```
   +  `SCRATCH_2`, `PERSISTENT_1`, and `PERSISTENT_2` – `1.2 TiB`, `2.4 TiB`, or increments of 2.4 TiB over 2.4 TiB.
   +  `SCRATCH_1` – `1.2 TiB`, `2.4 TiB`, `3.6 TiB`, or increments of 3.6 TiB over 3.6 TiB.
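   As a rough aid for picking a value, the valid sizes above can be computed with a short shell sketch. This is an illustrative helper, not an AWS tool; the function name is hypothetical.

   ```
   # Round a requested size (GiB) up to the nearest valid FSx for Lustre
   # capacity: 1200, then the increment (2400 for SCRATCH_2/PERSISTENT_1/
   # PERSISTENT_2, 3600 for SCRATCH_1), then multiples of that increment.
   valid_fsx_capacity() {
     local requested_gib=$1 increment_gib=$2
     if [ "$requested_gib" -le 1200 ]; then
       echo 1200
     elif [ "$requested_gib" -le "$increment_gib" ]; then
       echo "$increment_gib"
     else
       # Round up to the next multiple of the increment.
       echo $(( ( (requested_gib + increment_gib - 1) / increment_gib ) * increment_gib ))
     fi
   }

   valid_fsx_capacity 1000 2400   # → 1200
   valid_fsx_capacity 3000 2400   # → 4800
   valid_fsx_capacity 4000 3600   # → 7200
   ```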

1. Create the persistent volume claim.

   ```
   kubectl apply -f claim.yaml
   ```

   An example output is as follows.

   ```
   persistentvolumeclaim/fsx-claim created
   ```

1. Confirm that the file system is provisioned.

   ```
   kubectl describe pvc
   ```

   An example output is as follows.

   ```
   Name:          fsx-claim
   Namespace:     default
   StorageClass:  fsx-sc
   Status:        Bound
   [...]
   ```
**Note**  
The `Status` may show as `Pending` for 5-10 minutes before changing to `Bound`. Don’t continue to the next step until the `Status` is `Bound`. If the `Status` shows `Pending` for more than 10 minutes, refer to the warning messages in the `Events` output of `kubectl describe pvc` to troubleshoot the problem.

1. Deploy the sample application.

   ```
   kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-fsx-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/pod.yaml
   ```

1. Verify that the sample application is running.

   ```
   kubectl get pods
   ```

   An example output is as follows.

   ```
   NAME      READY   STATUS              RESTARTS   AGE
   fsx-app   1/1     Running             0          8s
   ```

1. Verify that the file system is mounted correctly by the application.

   ```
   kubectl exec -ti fsx-app -- df -h
   ```

   An example output is as follows.

   ```
   Filesystem                   Size  Used Avail Use% Mounted on
   overlay                       80G  4.0G   77G   5% /
   tmpfs                         64M     0   64M   0% /dev
   tmpfs                        3.8G     0  3.8G   0% /sys/fs/cgroup
   192.0.2.0@tcp:/abcdef01      1.1T  7.8M  1.1T   1% /data
   /dev/nvme0n1p1                80G  4.0G   77G   5% /etc/hosts
   shm                           64M     0   64M   0% /dev/shm
   tmpfs                        6.9G   12K  6.9G   1% /run/secrets/kubernetes.io/serviceaccount
   tmpfs                        3.8G     0  3.8G   0% /proc/acpi
   tmpfs                        3.8G     0  3.8G   0% /sys/firmware
   ```

1. Verify that data was written to the FSx for Lustre file system by the sample app.

   ```
   kubectl exec -it fsx-app -- ls /data
   ```

   An example output is as follows.

   ```
   out.txt
   ```

   This example output shows that the sample app successfully wrote the `out.txt` file to the file system.

**Note**  
Before deleting the cluster, make sure to delete the FSx for Lustre file system. For more information, see [Clean up resources](https://docs.aws.amazon.com/fsx/latest/LustreGuide/getting-started-step4.html) in the *FSx for Lustre User Guide*.

## Performance tuning for FSx for Lustre
<a name="_performance_tuning_for_fsx_for_lustre"></a>

When using FSx for Lustre with Amazon EKS, you can optimize performance by applying Lustre tunings during node initialization. The recommended approach is to use launch template user data to ensure consistent configuration across all nodes.

These tunings include:
+ Network and RPC optimizations
+ Lustre module management
+ Lock LRU (least recently used) tunings
+ Client cache control settings
+ RPC controls for OST and MDC

For detailed instructions on implementing these performance tunings:
+ For optimizing performance for standard nodes (non-EFA), see [Optimize Amazon FSx for Lustre performance on nodes (non-EFA)](fsx-csi-tuning-non-efa.md) for a complete script that can be added to your launch template user data.
+ For optimizing performance for EFA-enabled nodes, see [Optimize Amazon FSx for Lustre performance on nodes (EFA)](fsx-csi-tuning-efa.md).

# Optimize Amazon FSx for Lustre performance on nodes (EFA)
<a name="fsx-csi-tuning-efa"></a>

This topic describes how to set up Elastic Fabric Adapter (EFA) tuning with Amazon EKS and Amazon FSx for Lustre.

**Note**  
For information on creating and deploying the FSx for Lustre CSI driver, see [Deploy the FSx for Lustre driver](fsx-csi-create.md).
For optimizing standard nodes without EFA, see [Optimize Amazon FSx for Lustre performance on nodes (non-EFA)](fsx-csi-tuning-non-efa.md).

## Step 1. Create EKS cluster
<a name="create-eks-cluster"></a>

Create a cluster using the provided configuration file:

```
# Create cluster using efa-cluster.yaml
eksctl create cluster -f efa-cluster.yaml
```

Example `efa-cluster.yaml`:

```
#efa-cluster.yaml

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: csi-fsx
  region: us-east-1
  version: "1.30"

iam:
  withOIDC: true

availabilityZones: ["us-east-1a", "us-east-1d"]

managedNodeGroups:
  - name: my-efa-ng
    instanceType: c6gn.16xlarge
    minSize: 1
    desiredCapacity: 1
    maxSize: 1
    availabilityZones: ["us-east-1a"]
    volumeSize: 300
    privateNetworking: true
    amiFamily: Ubuntu2204
    efaEnabled: true
    preBootstrapCommands:
      - |
        #!/bin/bash
        eth_intf="$(/sbin/ip -br -4 a sh | grep $(hostname -i)/ | awk '{print $1}')"
        efa_version=$(modinfo efa | awk '/^version:/ {print $2}' | sed 's/[^0-9.]//g')
        min_efa_version="2.12.1"

        if [[ "$(printf '%s\n' "$min_efa_version" "$efa_version" | sort -V | head -n1)" != "$min_efa_version" ]]; then
            sudo curl -O https://efa-installer.amazonaws.com/aws-efa-installer-1.37.0.tar.gz
            tar -xf aws-efa-installer-1.37.0.tar.gz && cd aws-efa-installer
            echo "Installing EFA driver"
            sudo apt-get update && sudo apt-get upgrade -y
            sudo apt-get install -y pciutils environment-modules libnl-3-dev libnl-route-3-200 libnl-route-3-dev dkms
            sudo ./efa_installer.sh -y
            modinfo efa
        else
            echo "Using EFA driver version $efa_version"
        fi

        echo "Installing Lustre client"
        wget -O - https://fsx-lustre-client-repo-public-keys.s3.amazonaws.com/fsx-ubuntu-public-key.asc | gpg --dearmor | sudo tee /usr/share/keyrings/fsx-ubuntu-public-key.gpg > /dev/null
        echo "deb [signed-by=/usr/share/keyrings/fsx-ubuntu-public-key.gpg] https://fsx-lustre-client-repo.s3.amazonaws.com/ubuntu jammy main" | sudo tee /etc/apt/sources.list.d/fsxlustreclientrepo.list > /dev/null
        sudo apt update | tail
        sudo apt install -y lustre-client-modules-$(uname -r) amazon-ec2-utils | tail
        modinfo lustre

        echo "Loading Lustre/EFA modules..."
        sudo /sbin/modprobe lnet
        sudo /sbin/modprobe kefalnd ipif_name="$eth_intf"
        sudo /sbin/modprobe ksocklnd
        sudo lnetctl lnet configure

        echo "Configuring TCP interface..."
        sudo lnetctl net del --net tcp 2> /dev/null
        sudo lnetctl net add --net tcp --if $eth_intf

        # For P5 instance type which supports 32 network cards,
        # by default add 8 EFA interfaces selecting every 4th device (1 per PCI bus)
        echo "Configuring EFA interface(s)..."
        instance_type="$(ec2-metadata --instance-type | awk '{ print $2 }')"
        num_efa_devices="$(ls -1 /sys/class/infiniband | wc -l)"
        echo "Found $num_efa_devices available EFA device(s)"

        if [[ "$instance_type" == "p5.48xlarge" || "$instance_type" == "p5e.48xlarge" ]]; then
            for intf in $(ls -1 /sys/class/infiniband | awk 'NR % 4 == 1'); do
                sudo lnetctl net add --net efa --if $intf --peer-credits 32
            done
        else
        # Other instances: Configure 2 EFA interfaces by default if the instance supports multiple network cards,
        # or 1 interface for single network card instances
        # Can be modified to add more interfaces if instance type supports it
            sudo lnetctl net add --net efa --if $(ls -1 /sys/class/infiniband | head -n1) --peer-credits 32
            if [[ $num_efa_devices -gt 1 ]]; then
               sudo lnetctl net add --net efa --if $(ls -1 /sys/class/infiniband | tail -n1) --peer-credits 32
            fi
        fi

        echo "Setting discovery and UDSP rule"
        sudo lnetctl set discovery 1
        sudo lnetctl udsp add --src efa --priority 0
        sudo /sbin/modprobe lustre

        sudo lnetctl net show
        echo "Added $(sudo lnetctl net show | grep -c '@efa') EFA interface(s)"
```
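The driver-version check in the user data above relies on a common `sort -V` idiom: if the minimum version is still the smallest of the pair after a version sort, the installed driver is at least that version. A standalone sketch of that logic (the function name is illustrative):

```
# Returns success (0) when $2 (current) is at least $1 (minimum),
# using GNU sort's version ordering — the same idiom as the user data above.
version_at_least() {
  local min="$1" cur="$2"
  [ "$(printf '%s\n' "$min" "$cur" | sort -V | head -n1)" = "$min" ]
}

version_at_least 2.12.1 2.13.0 && echo "driver is new enough"
version_at_least 2.12.1 2.9.9  || echo "driver needs upgrading"
```

Note that the user data inverts this test: it installs the EFA driver only when the check fails.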

## Step 2. Create node group
<a name="create-node-group"></a>

Create an EFA-enabled node group:

```
# Create node group using efa-ng.yaml
eksctl create nodegroup -f efa-ng.yaml
```

**Important**  
Adjust these values for your environment in the section `# 5. Mount FSx filesystem` of the user data script.

```
FSX_DNS="<your-fsx-filesystem-dns>" # Needs to be adjusted.
MOUNT_NAME="<your-mount-name>" # Needs to be adjusted.
MOUNT_POINT="</your/mount/point>" # Needs to be adjusted.
```

Example `efa-ng.yaml`:

```
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: final-efa
  region: us-east-1

managedNodeGroups:
  - name: ng-1
    instanceType: c6gn.16xlarge
    minSize: 1
    desiredCapacity: 1
    maxSize: 1
    availabilityZones: ["us-east-1a"]
    volumeSize: 300
    privateNetworking: true
    amiFamily: Ubuntu2204
    efaEnabled: true
    preBootstrapCommands:
      - |
        #!/bin/bash
        exec 1> >(logger -s -t $(basename $0)) 2>&1

        #########################################################################################
        #                                    Configuration Section                              #
        #########################################################################################

        # File System Configuration
        FSX_DNS="<your-fsx-filesystem-dns>" # Needs to be adjusted.
        MOUNT_NAME="<your-mount-name>" # Needs to be adjusted.
        MOUNT_POINT="</your/mount/point>" # Needs to be adjusted.

        # Lustre Tuning Parameters
        LUSTRE_LRU_MAX_AGE=600000
        LUSTRE_MAX_CACHED_MB=64
        LUSTRE_OST_MAX_RPC=32
        LUSTRE_MDC_MAX_RPC=64
        LUSTRE_MDC_MOD_RPC=50

        # File paths
        FUNCTIONS_SCRIPT="/usr/local/bin/lustre_functions.sh"
        TUNINGS_SCRIPT="/usr/local/bin/apply_lustre_tunings.sh"
        SERVICE_FILE="/etc/systemd/system/lustre-tunings.service"

        #EFA
        MIN_EFA_VERSION="2.12.1"

        # Function to check if a command was successful
        check_success() {
            if [ $? -eq 0 ]; then
                echo "SUCCESS: $1"
            else
                echo "FAILED: $1"
                return 1
            fi
        }

        echo "********Starting FSx for Lustre configuration********"

        # 1. Install Lustre client
        if grep -q '^ID=ubuntu' /etc/os-release; then
            echo "Detected Ubuntu, proceeding with Lustre setup..."
            # Add Lustre repository
            wget -O - https://fsx-lustre-client-repo-public-keys.s3.amazonaws.com/fsx-ubuntu-public-key.asc | sudo gpg --dearmor | sudo tee /usr/share/keyrings/fsx-ubuntu-public-key.gpg > /dev/null

            echo "deb [signed-by=/usr/share/keyrings/fsx-ubuntu-public-key.gpg] https://fsx-lustre-client-repo.s3.amazonaws.com/ubuntu jammy main" | sudo tee /etc/apt/sources.list.d/fsxlustreclientrepo.list

            sudo apt-get update
            sudo apt-get install -y lustre-client-modules-$(uname -r)
            sudo apt-get install -y lustre-client
        else
            echo "Not Ubuntu, exiting"
            exit 1
        fi

        check_success "Install Lustre client"

        # Ensure Lustre tools are in the PATH
        export PATH=$PATH:/usr/sbin

        # 2. Apply network and RPC tunings
        echo "********Applying network and RPC tunings********"
        if ! grep -q "options ptlrpc ptlrpcd_per_cpt_max" /etc/modprobe.d/modprobe.conf; then
            echo "options ptlrpc ptlrpcd_per_cpt_max=64" | sudo tee -a /etc/modprobe.d/modprobe.conf
            check_success "Set ptlrpcd_per_cpt_max"
        else
            echo "ptlrpcd_per_cpt_max already set in modprobe.conf"
        fi

        if ! grep -q "options ksocklnd credits" /etc/modprobe.d/modprobe.conf; then
            echo "options ksocklnd credits=2560" | sudo tee -a /etc/modprobe.d/modprobe.conf
            check_success "Set ksocklnd credits"
        else
            echo "ksocklnd credits already set in modprobe.conf"
        fi

        # 3. Load Lustre modules
        manage_lustre_modules() {
            echo "Checking for existing Lustre modules..."
            if lsmod | grep -q lustre; then
                echo "Existing Lustre modules found."

                # Check for mounted Lustre filesystems
                echo "Checking for mounted Lustre filesystems..."
                if mount | grep -q "type lustre"; then
                    echo "Found mounted Lustre filesystems. Attempting to unmount..."
                    mounted_fs=$(mount | grep "type lustre" | awk '{print $3}')
                    for fs in $mounted_fs; do
                        echo "Unmounting $fs"
                        sudo umount $fs
                        check_success "Unmount filesystem $fs"
                    done
                else
                    echo "No Lustre filesystems mounted."
                fi

                # After unmounting, try to remove modules
                echo "Attempting to remove Lustre modules..."
                sudo lustre_rmmod
                if [ $? -eq 0 ]; then
                    echo "SUCCESS: Removed existing Lustre modules"
                else
                    echo "WARNING: Could not remove Lustre modules. They may still be in use."
                    echo "Please check for any remaining Lustre processes or mounts."
                    return 1
                fi
            else
                echo "No existing Lustre modules found."
            fi

            echo "Loading Lustre modules..."
            sudo modprobe lustre
            check_success "Load Lustre modules" || exit 1

            echo "Checking loaded Lustre modules:"
            lsmod | grep lustre
        }

        # Managing Lustre kernel modules
        echo "********Managing Lustre kernel modules********"
        manage_lustre_modules

        # 4. Initializing Lustre networking
        echo "********Initializing Lustre networking********"
        sudo lctl network up
        check_success "Initialize Lustre networking" || exit 1

        # 4.5 EFA Setup and Configuration
        setup_efa() {
            echo "********Starting EFA Setup********"

            # Get interface and version information
            eth_intf="$(/sbin/ip -br -4 a sh | grep $(hostname -i)/ | awk '{print $1}')"
            efa_version=$(modinfo efa | awk '/^version:/ {print $2}' | sed 's/[^0-9.]//g')
            min_efa_version=$MIN_EFA_VERSION

            # Install or verify EFA driver
            if [[ "$(printf '%s\n' "$min_efa_version" "$efa_version" | sort -V | head -n1)" != "$min_efa_version" ]]; then
                echo "Installing EFA driver..."
                sudo curl -O https://efa-installer.amazonaws.com/aws-efa-installer-1.37.0.tar.gz
                tar -xf aws-efa-installer-1.37.0.tar.gz && cd aws-efa-installer

                # Install dependencies
                sudo apt-get update && sudo apt-get upgrade -y
                sudo apt-get install -y pciutils environment-modules libnl-3-dev libnl-route-3-200 libnl-route-3-dev dkms

                # Install EFA
                sudo ./efa_installer.sh -y
                modinfo efa
            else
                echo "Using existing EFA driver version $efa_version"
            fi
        }

        configure_efa_network() {
            echo "********Configuring EFA Network********"

            # Load required modules
            echo "Loading network modules..."
            sudo /sbin/modprobe lnet
            sudo /sbin/modprobe kefalnd ipif_name="$eth_intf"
            sudo /sbin/modprobe ksocklnd

            # Initialize LNet
            echo "Initializing LNet..."
            sudo lnetctl lnet configure

            # Configure TCP interface
            echo "Configuring TCP interface..."
            sudo lnetctl net del --net tcp 2> /dev/null
            sudo lnetctl net add --net tcp --if $eth_intf

            # For P5 instance type which supports 32 network cards,
            # by default add 8 EFA interfaces selecting every 4th device (1 per PCI bus)
            echo "Configuring EFA interface(s)..."
            instance_type="$(ec2-metadata --instance-type | awk '{ print $2 }')"
            num_efa_devices="$(ls -1 /sys/class/infiniband | wc -l)"
            echo "Found $num_efa_devices available EFA device(s)"

            if [[ "$instance_type" == "p5.48xlarge" || "$instance_type" == "p5e.48xlarge" ]]; then
                # P5 instance configuration
                for intf in $(ls -1 /sys/class/infiniband | awk 'NR % 4 == 1'); do
                    sudo lnetctl net add --net efa --if $intf --peer-credits 32
                done
            else
                # Standard configuration
                # Other instances: Configure 2 EFA interfaces by default if the instance supports multiple network cards,
                # or 1 interface for single network card instances
                # Can be modified to add more interfaces if instance type supports it
                sudo lnetctl net add --net efa --if $(ls -1 /sys/class/infiniband | head -n1) --peer-credits 32
                if [[ $num_efa_devices -gt 1 ]]; then
                    sudo lnetctl net add --net efa --if $(ls -1 /sys/class/infiniband | tail -n1) --peer-credits 32
                fi
            fi

            # Configure discovery and UDSP
            echo "Setting up discovery and UDSP rules..."
            sudo lnetctl set discovery 1
            sudo lnetctl udsp add --src efa --priority 0
            sudo /sbin/modprobe lustre

            # Verify configuration
            echo "Verifying EFA network configuration..."
            sudo lnetctl net show
            echo "Added $(sudo lnetctl net show | grep -c '@efa') EFA interface(s)"
        }

        # Main execution
        setup_efa
        configure_efa_network

        # 5. Mount FSx filesystem
        if [ ! -z "$FSX_DNS" ] && [ ! -z "$MOUNT_NAME" ]; then
            echo "********Creating mount point********"
            sudo mkdir -p $MOUNT_POINT
            check_success "Create mount point"

            echo "Mounting FSx filesystem..."
            sudo mount -t lustre ${FSX_DNS}@tcp:/${MOUNT_NAME} ${MOUNT_POINT}
            check_success "Mount FSx filesystem"
        else
            echo "Skipping FSx mount as DNS or mount name is not provided"
        fi

        # 6. Applying Lustre performance tunings
        echo "********Applying Lustre performance tunings********"

        # Get number of CPUs
        NUM_CPUS=$(nproc)

        # Calculate LRU size (100 * number of CPUs)
        LRU_SIZE=$((100 * NUM_CPUS))

        #Apply LRU tunings
        echo "Apply LRU tunings"
        sudo lctl set_param ldlm.namespaces.*.lru_max_age=${LUSTRE_LRU_MAX_AGE}
        check_success "Set lru_max_age"
        sudo lctl set_param ldlm.namespaces.*.lru_size=$LRU_SIZE
        check_success "Set lru_size"

        # Client Cache Control
        sudo lctl set_param llite.*.max_cached_mb=${LUSTRE_MAX_CACHED_MB}
        check_success "Set max_cached_mb"

        # RPC Controls
        sudo lctl set_param osc.*OST*.max_rpcs_in_flight=${LUSTRE_OST_MAX_RPC}
        check_success "Set OST max_rpcs_in_flight"

        sudo lctl set_param mdc.*.max_rpcs_in_flight=${LUSTRE_MDC_MAX_RPC}
        check_success "Set MDC max_rpcs_in_flight"

        sudo lctl set_param mdc.*.max_mod_rpcs_in_flight=${LUSTRE_MDC_MOD_RPC}
        check_success "Set MDC max_mod_rpcs_in_flight"

        # 7. Verify all tunings
        echo "********Verifying all tunings********"

        # Function to verify parameter value
        verify_param() {
            local param=$1
            local expected=$2
            local actual=$3

            if [ "$actual" == "$expected" ]; then
                echo "SUCCESS: $param is correctly set to $expected"
            else
                echo "WARNING: $param is set to $actual (expected $expected)"
            fi
        }

        echo "Verifying all parameters:"

        # LRU tunings
        actual_lru_max_age=$(lctl get_param -n ldlm.namespaces.*.lru_max_age | head -1)
        verify_param "lru_max_age" "600000" "$actual_lru_max_age"

        actual_lru_size=$(lctl get_param -n ldlm.namespaces.*.lru_size | head -1)
        verify_param "lru_size" "$LRU_SIZE" "$actual_lru_size"

        # Client Cache
        actual_max_cached_mb=$(lctl get_param -n llite.*.max_cached_mb | grep "max_cached_mb:" | awk '{print $2}')
        verify_param "max_cached_mb" "64" "$actual_max_cached_mb"

        # RPC Controls
        actual_ost_rpcs=$(lctl get_param -n osc.*OST*.max_rpcs_in_flight | head -1)
        verify_param "OST max_rpcs_in_flight" "32" "$actual_ost_rpcs"

        actual_mdc_rpcs=$(lctl get_param -n mdc.*.max_rpcs_in_flight | head -1)
        verify_param "MDC max_rpcs_in_flight" "64" "$actual_mdc_rpcs"

        actual_mdc_mod_rpcs=$(lctl get_param -n mdc.*.max_mod_rpcs_in_flight | head -1)
        verify_param "MDC max_mod_rpcs_in_flight" "50" "$actual_mdc_mod_rpcs"

        # Network and RPC configurations from modprobe.conf
        actual_ptlrpc=$(grep "ptlrpc ptlrpcd_per_cpt_max" /etc/modprobe.d/modprobe.conf | awk '{print $3}')
        verify_param "ptlrpcd_per_cpt_max" "ptlrpcd_per_cpt_max=64" "$actual_ptlrpc"

        actual_ksocklnd=$(grep "ksocklnd credits" /etc/modprobe.d/modprobe.conf | awk '{print $3}')
        verify_param "ksocklnd credits" "credits=2560" "$actual_ksocklnd"

        # 8. Setup persistence
        setup_persistence() {
            # Create functions file
            cat << EOF > $FUNCTIONS_SCRIPT
        #!/bin/bash

        apply_lustre_tunings() {
            local NUM_CPUS=\$(nproc)
            local LRU_SIZE=\$((100 * NUM_CPUS))

            echo "Applying Lustre performance tunings..."
            lctl set_param ldlm.namespaces.*.lru_max_age=$LUSTRE_LRU_MAX_AGE
            lctl set_param ldlm.namespaces.*.lru_size=\$LRU_SIZE
            lctl set_param llite.*.max_cached_mb=$LUSTRE_MAX_CACHED_MB
            lctl set_param osc.*OST*.max_rpcs_in_flight=$LUSTRE_OST_MAX_RPC
            lctl set_param mdc.*.max_rpcs_in_flight=$LUSTRE_MDC_MAX_RPC
            lctl set_param mdc.*.max_mod_rpcs_in_flight=$LUSTRE_MDC_MOD_RPC
        }
        EOF

            # Create tuning script
            cat << EOF > $TUNINGS_SCRIPT
        #!/bin/bash
        exec 1> >(logger -s -t \$(basename \$0)) 2>&1

        source $FUNCTIONS_SCRIPT

        # Function to check if Lustre is mounted
        is_lustre_mounted() {
            mount | grep -q "type lustre"
        }

        # Function to mount Lustre
        mount_lustre() {
            echo "Mounting Lustre filesystem..."
            mkdir -p $MOUNT_POINT
            mount -t lustre ${FSX_DNS}@tcp:/${MOUNT_NAME} $MOUNT_POINT
            return \$?
        }

        # Main execution
        # Try to mount if not already mounted
        if ! is_lustre_mounted; then
            echo "Lustre filesystem not mounted, attempting to mount..."
            mount_lustre
        fi

        # Wait for successful mount (up to 5 minutes)
        for i in {1..30}; do
            if is_lustre_mounted; then
                echo "Lustre filesystem mounted, applying tunings..."
                apply_lustre_tunings
                exit 0
            fi
            echo "Waiting for Lustre filesystem to be mounted... (attempt $i/30)"
            sleep 10
        done

        echo "Timeout waiting for Lustre filesystem mount"
        exit 1
        EOF

        # Create systemd service

        # Create systemd directory if it doesn't exist
        sudo mkdir -p /etc/systemd/system/

            # Create service file directly for Ubuntu
            cat << EOF > $SERVICE_FILE
        [Unit]
        Description=Apply Lustre Performance Tunings
        After=network.target remote-fs.target

        [Service]
        Type=oneshot
        ExecStart=/bin/bash -c 'source $FUNCTIONS_SCRIPT && $TUNINGS_SCRIPT'
        RemainAfterExit=yes

        [Install]
        WantedBy=multi-user.target
        EOF


            # Make scripts executable and enable service
            sudo chmod +x $FUNCTIONS_SCRIPT
            sudo chmod +x $TUNINGS_SCRIPT
            sudo systemctl enable lustre-tunings.service
            sudo systemctl start lustre-tunings.service
        }

        echo "********Setting up persistent tuning********"
        setup_persistence

        echo "FSx for Lustre configuration completed."
```
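The lock LRU sizing in the script scales with the node's CPU count (100 locks per CPU). As a quick sanity check of that arithmetic, isolated from the rest of the script (the function name is illustrative):

```
# Reproduce the lru_size calculation from the user data script:
# 100 locks per CPU on the node.
compute_lru_size() {
  local num_cpus=$1
  echo $(( 100 * num_cpus ))
}

compute_lru_size "$(nproc)"   # e.g. 6400 on a 64-vCPU c6gn.16xlarge
```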

## (Optional) Step 3. Verify EFA setup
<a name="verify-efa-setup"></a>

SSH into node:

```
# Get the node's internal IP address from the EKS console or the AWS CLI
ssh -i /path/to/your-key.pem ubuntu@<node-internal-ip>
```

Verify EFA configuration:

```
sudo lnetctl net show
```

Check setup logs:

```
sudo cat /var/log/cloud-init-output.log
```

Here’s example expected output for `lnetctl net show`:

```
net:
    - net type: tcp
      ...
    - net type: efa
      local NI(s):
        - nid: xxx.xxx.xxx.xxx@efa
          status: up
```
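To script this check, you can count the `@efa` NIDs in the `lnetctl net show` output, as the bootstrap script does. The sketch below runs against a captured sample; on a node, pipe the live command output in instead.

```
# Count configured EFA interfaces in `lnetctl net show` output.
count_efa_nids() {
  grep -c '@efa'
}

# Captured sample standing in for live `lnetctl net show` output.
sample_output='net:
    - net type: tcp
      local NI(s):
        - nid: 10.0.1.5@tcp
          status: up
    - net type: efa
      local NI(s):
        - nid: 10.0.1.5@efa
          status: up'

printf '%s\n' "$sample_output" | count_efa_nids   # → 1
```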

## Example deployments
<a name="example-deployments"></a>

### a. Create claim.yaml
<a name="_a_create_claim_yaml"></a>

```
#claim.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fsx-claim-efa
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 4800Gi
  volumeName: fsx-pv
```

Apply the claim:

```
kubectl apply -f claim.yaml
```

### b. Create pv.yaml
<a name="_b_create_pv_yaml"></a>

Update the `<replaceable-placeholders>`:

```
#pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: fsx-pv
spec:
  capacity:
    storage: 4800Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  mountOptions:
    - flock
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: fsx.csi.aws.com
    volumeHandle: fs-<1234567890abcdef0>
    volumeAttributes:
      dnsname: fs-<1234567890abcdef0>.fsx.us-east-1.amazonaws.com
      mountname: <abcdef01>
```

Apply the persistent volume:

```
kubectl apply -f pv.yaml
```

### c. Create pod.yaml
<a name="_c_create_pod_yaml"></a>

```
#pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: fsx-efa-app
spec:
  containers:
  - name: app
    image: amazonlinux:2
    command: ["/bin/sh"]
    args: ["-c", "while true; do dd if=/dev/urandom bs=100M count=20 > /data/test_file; sleep 10; done"]
    resources:
      requests:
        vpc.amazonaws.com/efa: 1
      limits:
        vpc.amazonaws.com/efa: 1
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: fsx-claim-efa
```

Apply the Pod:

```
kubectl apply -f pod.yaml
```

## Additional verification commands
<a name="verification-commands"></a>

Verify Pod mounts and writes to filesystem:

```
kubectl exec -ti fsx-efa-app -- df -h | grep data
# Expected output:
# <192.0.2.0>@tcp:/<abcdef01>  4.5T  1.2G  4.5T   1% /data

kubectl exec -ti fsx-efa-app -- ls /data
# Expected output:
# test_file
```

SSH onto the node to verify traffic is going over EFA:

```
sudo lnetctl net show -v
```

The expected output will show EFA interfaces with traffic statistics.
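To dig further, `lnetctl` can also display aggregate LNet message counters, which is useful for confirming that the `efa` network type is actually carrying traffic. The exact fields shown vary by Lustre version:

```
sudo lnetctl stats show
```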

## Related information
<a name="_related_information"></a>
+  [Deploy the FSx for Lustre driver](fsx-csi-create.md) 
+  [Optimize Amazon FSx for Lustre performance on nodes (non-EFA)](fsx-csi-tuning-non-efa.md) 
+  [Amazon FSx for Lustre Performance](https://docs.aws.amazon.com/fsx/latest/LustreGuide/performance.html) 
+  [Elastic Fabric Adapter](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa.html) 

# Optimize Amazon FSx for Lustre performance on nodes (non-EFA)
<a name="fsx-csi-tuning-non-efa"></a>

You can optimize Amazon FSx for Lustre performance by applying tuning parameters during node initialization using launch template user data.

**Note**  
For information on creating and deploying the FSx for Lustre CSI driver, see [Deploy the FSx for Lustre driver](fsx-csi-create.md). For optimizing performance with EFA-enabled nodes, see [Optimize Amazon FSx for Lustre performance on nodes (EFA)](fsx-csi-tuning-efa.md).

## Why use launch template user data?
<a name="_why_use_launch_template_user_data"></a>
+ Applies tunings automatically during node initialization.
+ Ensures consistent configuration across all nodes.
+ Eliminates the need for manual node configuration.

## Example script overview
<a name="_example_script_overview"></a>

The example script defined in this topic performs these operations:

### `# 1. Install Lustre client`
<a name="_1_install_lustre_client"></a>
+ Automatically detects your Amazon Linux (AL) OS version.
+ Installs the appropriate Lustre client package.

### `# 2. Apply network and RPC tunings`
<a name="_2_apply_network_and_rpc_tunings"></a>
+ Sets `ptlrpcd_per_cpt_max=64` for parallel RPC processing.
+ Configures `ksocklnd credits=2560` to optimize network buffers.

### `# 3. Load Lustre modules`
<a name="_3_load_lustre_modules"></a>
+ Safely removes existing Lustre modules if present.
+ Handles unmounting of existing filesystems.
+ Loads fresh Lustre modules.

### `# 4. Lustre Network Initialization`
<a name="_4_lustre_network_initialization"></a>
+ Initializes Lustre networking configuration.
+ Sets up required network parameters.

### `# 5. Mount FSx filesystem`
<a name="_5_mount_fsx_filesystem"></a>
+ You must adjust the values for your environment in this section.

### `# 6. Apply tunings`
<a name="_6_apply_tunings"></a>
+ Lock LRU (least recently used) tunings:
  +  `lru_max_age=600000` 
  +  `lru_size` calculated based on CPU count
+ Client Cache Control: `max_cached_mb=64` 
+ RPC Controls:
  + OST `max_rpcs_in_flight=32` 
  + MDC `max_rpcs_in_flight=64` 
  + MDC `max_mod_rpcs_in_flight=50` 
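
The `lru_size` value isn’t fixed; the script derives it from the node’s CPU count. Here’s a minimal sketch of that calculation (it assumes only that `nproc` is available):

```
# Derive lru_size the same way the script does: 100 locks per CPU.
NUM_CPUS=$(nproc)
LRU_SIZE=$((100 * NUM_CPUS))
echo "lru_size=${LRU_SIZE}"
```

On a 16-vCPU node this yields `lru_size=1600`, which the script then applies with `lctl set_param ldlm.namespaces.*.lru_size=$LRU_SIZE`.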

### `# 7. Verify tunings`
<a name="_7_verify_tunings"></a>
+ Verifies all applied tunings.
+ Reports success or warning for each parameter.

### `# 8. Setup persistence`
<a name="_8_setup_persistence"></a>
+ You must adjust the values for your environment in this section as well.
+ Automatically detects your OS version to determine which `systemd` service to apply.
+ On boot, `systemd` starts the `lustre-tunings` service (due to `WantedBy=multi-user.target`).
+ The service runs `apply_lustre_tunings.sh`, which:
  + Checks if the filesystem is mounted.
  + Mounts the filesystem if it isn't mounted.
  + Waits for a successful mount (up to five minutes).
  + Applies the tuning parameters after a successful mount.
+ Settings remain active until reboot.
+ The service exits after the script completes, and `systemd` marks it as "active (exited)".
+ The process repeats on the next reboot.
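
After a node boots, you can confirm that the persistence service ran and inspect its logs:

```
sudo systemctl status lustre-tunings.service
sudo journalctl -u lustre-tunings.service --no-pager
```

A healthy run shows the service as "active (exited)" with the tuning script’s log lines in the journal output.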

## Create a launch template
<a name="_create_a_launch_template"></a>

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.

1. Choose **Launch Templates**.

1. Choose **Create launch template**.

1. In **Advanced details**, locate the **User data** section.

1. Paste the script below, updating anything as needed.
**Important**  
Adjust these values for your environment in both section `# 5. Mount FSx filesystem` and the `setup_persistence()` function of `apply_lustre_tunings.sh` in section `# 8. Setup persistence`:  

   ```
   FSX_DNS="<your-fsx-filesystem-dns>" # Needs to be adjusted.
   MOUNT_NAME="<your-mount-name>" # Needs to be adjusted.
   MOUNT_POINT="</your/mount/point>" # Needs to be adjusted.
   ```

   ```
   MIME-Version: 1.0
   Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

   --==MYBOUNDARY==
   Content-Type: text/x-shellscript; charset="us-ascii"

   #!/bin/bash
   exec 1> >(logger -s -t $(basename $0)) 2>&1
   # Function definitions
   check_success() {
       if [ $? -eq 0 ]; then
           echo "SUCCESS: $1"
       else
           echo "FAILED: $1"
           return 1
       fi
   }
   apply_tunings() {
       local NUM_CPUS=$(nproc)
       local LRU_SIZE=$((100 * NUM_CPUS))
       local params=(
           "ldlm.namespaces.*.lru_max_age=600000"
           "ldlm.namespaces.*.lru_size=$LRU_SIZE"
           "llite.*.max_cached_mb=64"
           "osc.*OST*.max_rpcs_in_flight=32"
           "mdc.*.max_rpcs_in_flight=64"
           "mdc.*.max_mod_rpcs_in_flight=50"
       )
       for param in "${params[@]}"; do
           lctl set_param $param
           check_success "Set ${param%%=*}"
       done
   }
   verify_param() {
       local param=$1
       local expected=$2
       local actual=$3
   
       if [ "$actual" == "$expected" ]; then
           echo "SUCCESS: $param is correctly set to $expected"
       else
           echo "WARNING: $param is set to $actual (expected $expected)"
       fi
   }
   verify_tunings() {
       local NUM_CPUS=$(nproc)
       local LRU_SIZE=$((100 * NUM_CPUS))
       local params=(
           "ldlm.namespaces.*.lru_max_age:600000"
           "ldlm.namespaces.*.lru_size:$LRU_SIZE"
           "llite.*.max_cached_mb:64"
           "osc.*OST*.max_rpcs_in_flight:32"
           "mdc.*.max_rpcs_in_flight:64"
           "mdc.*.max_mod_rpcs_in_flight:50"
       )
       echo "Verifying all parameters:"
       for param in "${params[@]}"; do
           name="${param%%:*}"
           expected="${param#*:}"
           actual=$(lctl get_param -n $name | head -1)
           verify_param "${name##*.}" "$expected" "$actual"
       done
   }
   setup_persistence() {
       # Create functions file
       cat << 'EOF' > /usr/local/bin/lustre_functions.sh
   #!/bin/bash
   apply_lustre_tunings() {
       local NUM_CPUS=$(nproc)
       local LRU_SIZE=$((100 * NUM_CPUS))
   
       echo "Applying Lustre performance tunings..."
       lctl set_param ldlm.namespaces.*.lru_max_age=600000
       lctl set_param ldlm.namespaces.*.lru_size=$LRU_SIZE
       lctl set_param llite.*.max_cached_mb=64
       lctl set_param osc.*OST*.max_rpcs_in_flight=32
       lctl set_param mdc.*.max_rpcs_in_flight=64
       lctl set_param mdc.*.max_mod_rpcs_in_flight=50
   }
   EOF
       # Create tuning script
       cat << 'EOF' > /usr/local/bin/apply_lustre_tunings.sh
   #!/bin/bash
   exec 1> >(logger -s -t $(basename $0)) 2>&1
   # Source the functions
   source /usr/local/bin/lustre_functions.sh
   # FSx details
   FSX_DNS="<your-fsx-filesystem-dns>" # Needs to be adjusted.
   MOUNT_NAME="<your-mount-name>" # Needs to be adjusted.
   MOUNT_POINT="</your/mount/point>" # Needs to be adjusted.
   # Function to check if Lustre is mounted
   is_lustre_mounted() {
       mount | grep -q "type lustre"
   }
   # Function to mount Lustre
   mount_lustre() {
       echo "Mounting Lustre filesystem..."
       mkdir -p ${MOUNT_POINT}
       mount -t lustre ${FSX_DNS}@tcp:/${MOUNT_NAME} ${MOUNT_POINT}
       return $?
   }
   # Main execution
   # Try to mount if not already mounted
   if ! is_lustre_mounted; then
       echo "Lustre filesystem not mounted, attempting to mount..."
       mount_lustre
   fi
   # Wait for successful mount (up to 5 minutes)
   for i in {1..30}; do
       if is_lustre_mounted; then
           echo "Lustre filesystem mounted, applying tunings..."
           apply_lustre_tunings
           exit 0
       fi
       echo "Waiting for Lustre filesystem to be mounted... (attempt $i/30)"
       sleep 10
   done
   echo "Timeout waiting for Lustre filesystem mount"
   exit 1
   EOF
       # Create systemd service
       cat << 'EOF' > /etc/systemd/system/lustre-tunings.service
   [Unit]
   Description=Apply Lustre Performance Tunings
   After=network.target remote-fs.target
   StartLimitIntervalSec=0
    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/apply_lustre_tunings.sh
    RemainAfterExit=yes
   [Install]
   WantedBy=multi-user.target
   EOF
       chmod +x /usr/local/bin/lustre_functions.sh
       chmod +x /usr/local/bin/apply_lustre_tunings.sh
       systemctl enable lustre-tunings.service
       systemctl start lustre-tunings.service
   }
   echo "Starting FSx for Lustre configuration..."
   # 1. Install Lustre client
   if grep -q 'VERSION="2"' /etc/os-release; then
       amazon-linux-extras install -y lustre
   elif grep -q 'VERSION="2023"' /etc/os-release; then
       dnf install -y lustre-client
   fi
   check_success "Install Lustre client"
   # 2. Apply network and RPC tunings
   export PATH=$PATH:/usr/sbin
   echo "Applying network and RPC tunings..."
   if ! grep -q "options ptlrpc ptlrpcd_per_cpt_max" /etc/modprobe.d/modprobe.conf; then
       echo "options ptlrpc ptlrpcd_per_cpt_max=64" | tee -a /etc/modprobe.d/modprobe.conf
       echo "options ksocklnd credits=2560" | tee -a /etc/modprobe.d/modprobe.conf
   fi
   # 3. Load Lustre modules
   modprobe lustre
   check_success "Load Lustre modules" || exit 1
   # 4. Lustre Network Initialization
   lctl network up
   check_success "Initialize Lustre networking" || exit 1
   # 5. Mount FSx filesystem
   FSX_DNS="<your-fsx-filesystem-dns>" # Needs to be adjusted.
   MOUNT_NAME="<your-mount-name>" # Needs to be adjusted.
   MOUNT_POINT="</your/mount/point>" # Needs to be adjusted.
   if [ ! -z "$FSX_DNS" ] && [ ! -z "$MOUNT_NAME" ]; then
       mkdir -p $MOUNT_POINT
       mount -t lustre ${FSX_DNS}@tcp:/${MOUNT_NAME} ${MOUNT_POINT}
       check_success "Mount FSx filesystem"
   fi
   # 6. Apply tunings
   apply_tunings
   # 7. Verify tunings
   verify_tunings
   # 8. Setup persistence
   setup_persistence
   echo "FSx for Lustre configuration completed."
   --==MYBOUNDARY==--
   ```

1. When creating Amazon EKS node groups, select this launch template. For more information, see [Create a managed node group for your cluster](create-managed-node-group.md).
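
If you create the node group with `eksctl`, the cluster config can reference the launch template directly. This is a sketch; the cluster name, Region, node group name, and launch template ID are placeholders for your own values:

```
# cluster.yaml (sketch; replace the placeholder values)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: region-code
managedNodeGroups:
  - name: fsx-lustre-nodes
    launchTemplate:
      id: lt-1234567890abcdef0
      version: "1"
```

Then create the node group with `eksctl create nodegroup -f cluster.yaml`.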

## Related information
<a name="_related_information"></a>
+  [Deploy the FSx for Lustre driver](fsx-csi-create.md) 
+  [Optimize Amazon FSx for Lustre performance on nodes (EFA)](fsx-csi-tuning-efa.md) 
+  [Amazon FSx for Lustre Performance](https://docs.aws.amazon.com/fsx/latest/LustreGuide/performance.html) 

# Use high-performance app storage with FSx for NetApp ONTAP
<a name="fsx-ontap"></a>

NetApp Trident provides dynamic storage orchestration using a Container Storage Interface (CSI) compliant driver. This allows Amazon EKS clusters to manage the lifecycle of persistent volumes (PVs) backed by Amazon FSx for NetApp ONTAP file systems. Note that the Amazon FSx for NetApp ONTAP CSI driver is not compatible with Amazon EKS Hybrid Nodes. To get started, see [Use Trident with Amazon FSx for NetApp ONTAP](https://docs.netapp.com/us-en/trident/trident-use/trident-fsx.html) in the NetApp Trident documentation.

Amazon FSx for NetApp ONTAP is a storage service that allows you to launch and run fully managed ONTAP file systems in the cloud. ONTAP is NetApp’s file system technology that provides a widely adopted set of data access and data management capabilities. FSx for ONTAP provides the features, performance, and APIs of on-premises NetApp file systems with the agility, scalability, and simplicity of a fully managed AWS service. For more information, see the [FSx for ONTAP User Guide](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/what-is-fsx-ontap.html).

**Important**  
If you are using Amazon FSx for NetApp ONTAP alongside the Amazon EBS CSI driver to provision EBS volumes, you must exclude EBS devices in the `multipath.conf` file. For supported methods, see [Configuration File Blacklist](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/dm_multipath/config_file_blacklist#config_file_blacklist). Here is an example.  

```
defaults {
    user_friendly_names yes
    find_multipaths no
}
blacklist {
    device {
        vendor "NVME"
        product "Amazon Elastic Block Store"
    }
}
```
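
After updating `/etc/multipath.conf` on a node, the running `multipathd` daemon typically needs to reload its configuration before the blacklist takes effect, for example:

```
sudo multipathd reconfigure
```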

# Use data storage with Amazon FSx for OpenZFS
<a name="fsx-openzfs-csi"></a>

Amazon FSx for OpenZFS is a fully managed file storage service that makes it easy to move data to AWS from on-premises ZFS or other Linux-based file servers. You can do this without changing your application code or how you manage data. It offers highly reliable, scalable, efficient, and feature-rich file storage built on the open-source OpenZFS file system. It combines these capabilities with the agility, scalability, and simplicity of a fully managed AWS service. For more information, see the [Amazon FSx for OpenZFS User Guide](https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/what-is-fsx.html).

The FSx for OpenZFS Container Storage Interface (CSI) driver provides a CSI interface that allows Amazon EKS clusters to manage the life cycle of FSx for OpenZFS volumes. Note that the Amazon FSx for OpenZFS CSI driver is not compatible with Amazon EKS Hybrid Nodes. To deploy the FSx for OpenZFS CSI driver to your Amazon EKS cluster, see [aws-fsx-openzfs-csi-driver](https://github.com/kubernetes-sigs/aws-fsx-openzfs-csi-driver) on GitHub.

# Minimize latency with Amazon File Cache
<a name="file-cache-csi"></a>

Amazon File Cache is a fully managed, high-speed cache on AWS that’s used to process file data, regardless of where the data is stored. Amazon File Cache automatically loads data into the cache when it’s accessed for the first time and releases data when it’s not used. For more information, see the [Amazon File Cache User Guide](https://docs.aws.amazon.com/fsx/latest/FileCacheGuide/what-is.html).

The Amazon File Cache Container Storage Interface (CSI) driver provides a CSI interface that allows Amazon EKS clusters to manage the life cycle of Amazon file caches. Note that the Amazon File Cache CSI driver is not compatible with Amazon EKS Hybrid Nodes. To deploy the Amazon File Cache CSI driver to your Amazon EKS cluster, see [aws-file-cache-csi-driver](https://github.com/kubernetes-sigs/aws-file-cache-csi-driver) on GitHub.

# Access Amazon S3 objects with Mountpoint for Amazon S3 CSI driver
<a name="s3-csi"></a>

With the [Mountpoint for Amazon S3 Container Storage Interface (CSI) driver](https://github.com/awslabs/mountpoint-s3-csi-driver), your Kubernetes applications can access Amazon S3 objects through a file system interface, achieving high aggregate throughput without changing any application code. Built on [Mountpoint for Amazon S3](https://github.com/awslabs/mountpoint-s3), the CSI driver presents an Amazon S3 bucket as a volume that can be accessed by containers in Amazon EKS and self-managed Kubernetes clusters.

## Considerations
<a name="s3-csi-considerations"></a>
+ The Mountpoint for Amazon S3 CSI driver isn’t presently compatible with Windows-based container images.
+ The Mountpoint for Amazon S3 CSI driver isn’t presently compatible with Amazon EKS Hybrid Nodes.
+ The Mountpoint for Amazon S3 CSI driver doesn’t support AWS Fargate. However, containers that are running in Amazon EC2 (either with Amazon EKS or a custom Kubernetes installation) are supported.
+ The Mountpoint for Amazon S3 CSI driver supports only static provisioning. Dynamic provisioning, or creation of new buckets, isn’t supported.
**Note**  
Static provisioning refers to using an existing Amazon S3 bucket that is specified as the `bucketName` in the `volumeAttributes` in the `PersistentVolume` object. For more information, see [Static Provisioning](https://github.com/awslabs/mountpoint-s3-csi-driver/blob/main/examples/kubernetes/static_provisioning/README.md) on GitHub.
+ Volumes mounted with the Mountpoint for Amazon S3 CSI driver don’t support all POSIX file-system features. For details about file-system behavior, see [Mountpoint for Amazon S3 file system behavior](https://github.com/awslabs/mountpoint-s3/blob/main/doc/SEMANTICS.md) on GitHub.

For details on deploying the driver, see [Deploy the Mountpoint for Amazon S3 driver](s3-csi-create.md). For details on removing the driver, see [Remove the Mountpoint for Amazon S3 Amazon EKS add-on](removing-s3-csi-eks-add-on.md).

# Deploy the Mountpoint for Amazon S3 driver
<a name="s3-csi-create"></a>

With the [Mountpoint for Amazon S3 Container Storage Interface (CSI) driver](https://github.com/awslabs/mountpoint-s3-csi-driver), your Kubernetes applications can access Amazon S3 objects through a file system interface, achieving high aggregate throughput without changing any application code.

This procedure will show you how to deploy the [Mountpoint for Amazon S3 CSI Amazon EKS driver](s3-csi.md). Before proceeding, please review the [Considerations](s3-csi.md#s3-csi-considerations).

## Prerequisites
<a name="s3-csi-prereqs"></a>
+ An existing AWS Identity and Access Management (IAM) OpenID Connect (OIDC) provider for your cluster. To determine whether you already have one, or to create one, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md).
+ Version 2.12.3 or later of the AWS CLI installed and configured on your device or AWS CloudShell.
+ The `kubectl` command line tool is installed on your device or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is `1.29`, you can use `kubectl` version `1.28`, `1.29`, or `1.30` with it. To install or upgrade `kubectl`, see [Set up `kubectl` and `eksctl`](install-kubectl.md).

## Step 1: Create an IAM policy
<a name="s3-create-iam-policy"></a>

The Mountpoint for Amazon S3 CSI driver requires Amazon S3 permissions to interact with your file system. This section shows how to create an IAM policy that grants the necessary permissions.

The following example policy follows the IAM permission recommendations for Mountpoint. Alternatively, you can use the AWS managed policy [AmazonS3FullAccess](https://console.aws.amazon.com/iam/home?#/policies/arn:aws:iam::aws:policy/AmazonS3FullAccess$jsonEditor), but this managed policy grants more permissions than are needed for Mountpoint.

For more information about the recommended permissions for Mountpoint, see [Mountpoint IAM permissions](https://github.com/awslabs/mountpoint-s3/blob/main/doc/CONFIGURATION.md#iam-permissions) on GitHub.

1. Open the IAM console at https://console.aws.amazon.com/iam/.

1. In the left navigation pane, choose **Policies**.

1. On the **Policies** page, choose **Create policy**.

1. For **Policy editor**, choose **JSON**.

1. Under **Policy editor**, copy and paste the following:
**Important**  
Replace `amzn-s3-demo-bucket1` with your own Amazon S3 bucket name.

   ```
   {
      "Version": "2012-10-17",
      "Statement": [
           {
               "Sid": "MountpointFullBucketAccess",
               "Effect": "Allow",
               "Action": [
                   "s3:ListBucket"
               ],
               "Resource": [
                   "arn:aws:s3:::amzn-s3-demo-bucket1"
               ]
           },
           {
               "Sid": "MountpointFullObjectAccess",
               "Effect": "Allow",
               "Action": [
                   "s3:GetObject",
                   "s3:PutObject",
                   "s3:AbortMultipartUpload",
                   "s3:DeleteObject"
               ],
               "Resource": [
                   "arn:aws:s3:::amzn-s3-demo-bucket1/*"
               ]
           }
      ]
   }
   ```

   Directory buckets, introduced with the Amazon S3 Express One Zone storage class, use a different authentication mechanism from general purpose buckets. Instead of using `s3:*` actions, you should use the `s3express:CreateSession` action. For information about directory buckets, see [Directory buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-overview.html) in the *Amazon S3 User Guide*.

   Below is an example of a least-privilege policy that you can use for a directory bucket.

   ```
   {
        "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "s3express:CreateSession",
               "Resource": "arn:aws:s3express:us-west-2:111122223333:bucket/amzn-s3-demo-bucket1--usw2-az1--x-s3"
           }
       ]
   }
   ```

1. Choose **Next**.

1. On the **Review and create** page, name your policy. This example walkthrough uses the name `AmazonS3CSIDriverPolicy`.

1. Choose **Create policy**.

## Step 2: Create an IAM role
<a name="s3-create-iam-role"></a>

The Mountpoint for Amazon S3 CSI driver requires Amazon S3 permissions to interact with your file system. This section shows how to create an IAM role to delegate these permissions. To create this role, you can use one of these tools:
+  [eksctl](#eksctl_s3_store_app_data) 
+  [AWS Management Console](#console_s3_store_app_data) 
+  [AWS CLI](#awscli_s3_store_app_data) 

**Note**  
The IAM policy `AmazonS3CSIDriverPolicy` was created in the previous section.

### eksctl
<a name="eksctl_s3_store_app_data"></a>

 **To create your Mountpoint for Amazon S3 CSI driver IAM role with `eksctl` ** 

To create the IAM role and the Kubernetes service account, run the following commands. These commands also attach the `AmazonS3CSIDriverPolicy` IAM policy to the role, annotate the Kubernetes service account (`s3-csi-driver-sa`) with the IAM role’s Amazon Resource Name (ARN), and add the Kubernetes service account name to the trust policy for the IAM role. Before running them, set the variables to your own cluster name, AWS Region, and policy ARN.

```
CLUSTER_NAME=my-cluster
REGION=region-code
ROLE_NAME=AmazonEKS_S3_CSI_DriverRole
POLICY_ARN=arn:aws:iam::111122223333:policy/AmazonS3CSIDriverPolicy
eksctl create iamserviceaccount \
    --name s3-csi-driver-sa \
    --namespace kube-system \
    --cluster $CLUSTER_NAME \
    --attach-policy-arn $POLICY_ARN \
    --approve \
    --role-name $ROLE_NAME \
    --region $REGION \
    --role-only
```

### AWS Management Console
<a name="console_s3_store_app_data"></a>

1. Open the IAM console at https://console.aws.amazon.com/iam/.

1. In the left navigation pane, choose **Roles**.

1. On the **Roles** page, choose **Create role**.

1. On the **Select trusted entity** page, do the following:

   1. In the **Trusted entity type** section, choose **Web identity**.

   1. For **Identity provider**, choose the **OpenID Connect provider URL** for your cluster (as shown under **Overview** in Amazon EKS).

      If no URLs are shown, review the [Prerequisites](#s3-csi-prereqs).

   1. For **Audience**, choose `sts.amazonaws.com`.

   1. Choose **Next**.

1. On the **Add permissions** page, do the following:

   1. In the **Filter policies** box, enter AmazonS3CSIDriverPolicy.
**Note**  
This policy was created in the previous section.

   1. Select the check box to the left of the `AmazonS3CSIDriverPolicy` result that was returned in the search.

   1. Choose **Next**.

1. On the **Name, review, and create** page, do the following:

   1. For **Role name**, enter a unique name for your role, such as AmazonEKS_S3_CSI_DriverRole.

   1. Under **Add tags (Optional)**, add metadata to the role by attaching tags as key-value pairs. For more information about using tags in IAM, see [Tagging IAM resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) in the *IAM User Guide*.

   1. Choose **Create role**.

1. After the role is created, choose the role in the console to open it for editing.

1. Choose the **Trust relationships** tab, and then choose **Edit trust policy**.

1. Find the line that looks similar to the following:

   ```
   "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
   ```

   Add a comma to the end of the previous line, and then add the following line after it. Replace *region-code* with the AWS Region that your cluster is in. Replace *EXAMPLED539D4633E53DE1B71EXAMPLE* with your cluster’s OIDC provider ID.

   ```
   "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:kube-system:s3-csi-driver-sa"
   ```

1. Ensure that the `Condition` operator is set to `"StringEquals"`.

1. Choose **Update policy** to finish.

### AWS CLI
<a name="awscli_s3_store_app_data"></a>

1. View the OIDC provider URL for your cluster. Replace *my-cluster* with the name of your cluster. If the output from the command is `None`, review the [Prerequisites](#s3-csi-prereqs).

   ```
   aws eks describe-cluster --name my-cluster --query "cluster.identity.oidc.issuer" --output text
   ```

   An example output is as follows.

   ```
   https://oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE
   ```
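
   Only the ID at the end of the issuer URL is needed in the following steps. Here’s a small sketch of extracting it from a saved issuer value (the URL shown is the hypothetical example output above):

   ```
   ISSUER="https://oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
   # The OIDC provider ID is the fifth '/'-separated field of the issuer URL.
   OIDC_ID=$(echo "$ISSUER" | cut -d '/' -f 5)
   echo "$OIDC_ID"
   # EXAMPLED539D4633E53DE1B71EXAMPLE
   ```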

1. Create the IAM role, granting the Kubernetes service account the `AssumeRoleWithWebIdentity` action.

   1. Copy the following contents to a file named `aws-s3-csi-driver-trust-policy.json`. Replace *111122223333* with your account ID. Replace *EXAMPLED539D4633E53DE1B71EXAMPLE* and *region-code* with the values returned in the previous step.

      ```
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
              "StringEquals": {
                "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:kube-system:s3-csi-driver-sa",
                "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
              }
            }
          }
        ]
      }
      ```

   1. Create the role. You can change *AmazonEKS_S3_CSI_DriverRole* to a different name, but if you do, make sure to change it in later steps too.

      ```
      aws iam create-role \
        --role-name AmazonEKS_S3_CSI_DriverRole \
        --assume-role-policy-document file://"aws-s3-csi-driver-trust-policy.json"
      ```

1. Attach the previously created IAM policy to the role with the following command. Replace *111122223333* with your account ID.

   ```
   aws iam attach-role-policy \
     --policy-arn arn:aws:iam::111122223333:policy/AmazonS3CSIDriverPolicy \
     --role-name AmazonEKS_S3_CSI_DriverRole
   ```
**Note**  
The IAM policy `AmazonS3CSIDriverPolicy` was created in the previous section.

1. Skip this step if you’re installing the driver as an Amazon EKS add-on. For self-managed installations of the driver, create Kubernetes service accounts that are annotated with the ARN of the IAM role that you created.

   1. Save the following contents to a file named `mountpoint-s3-service-account.yaml`. Replace *111122223333* with your account ID.

      ```
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        labels:
          app.kubernetes.io/name: aws-mountpoint-s3-csi-driver
        name: mountpoint-s3-csi-controller-sa
        namespace: kube-system
        annotations:
          eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/AmazonEKS_S3_CSI_DriverRole
      ```

   1. Create the Kubernetes service account on your cluster. The Kubernetes service account (`mountpoint-s3-csi-controller-sa`) is annotated with the IAM role that you created named *AmazonEKS_S3_CSI_DriverRole*.

      ```
      kubectl apply -f mountpoint-s3-service-account.yaml
      ```
**Note**  
When you deploy the plugin in this procedure, it creates and is configured to use a service account named `s3-csi-driver-sa`.

## Step 3: Install the Mountpoint for Amazon S3 CSI driver
<a name="s3-install-driver"></a>

You can install the Mountpoint for Amazon S3 CSI driver as an Amazon EKS add-on. You can use the following tools to add the add-on to your cluster:
+  [eksctl](#eksctl_s3_add_store_app_data) 
+  [AWS Management Console](#console_s3_add_store_app_data) 
+  [AWS CLI](#awscli_s3_add_store_app_data) 

Alternatively, you may install Mountpoint for Amazon S3 CSI driver as a self-managed installation. For instructions on doing a self-managed installation, see [Installation](https://github.com/awslabs/mountpoint-s3-csi-driver/blob/main/docs/install.md#deploy-driver) on GitHub.

Starting with `v1.8.0`, you can configure the taints that the CSI driver’s Pods tolerate. To do this, either specify a custom set of tolerations with `node.tolerations` or tolerate all taints with `node.tolerateAllTaints`. For more information, see [Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) in the Kubernetes documentation.

### eksctl
<a name="eksctl_s3_add_store_app_data"></a>

 **To add the Amazon S3 CSI add-on using `eksctl` ** 

Run the following command. Replace *my-cluster* with the name of your cluster, *111122223333* with your account ID, and *AmazonEKS_S3_CSI_DriverRole* with the name of the [IAM role created earlier](#s3-create-iam-role).

```
eksctl create addon --name aws-mountpoint-s3-csi-driver --cluster my-cluster \
  --service-account-role-arn arn:aws:iam::111122223333:role/AmazonEKS_S3_CSI_DriverRole --force
```

If you remove the *--force* option and any of the Amazon EKS add-on settings conflict with your existing settings, then updating the Amazon EKS add-on fails, and you receive an error message to help you resolve the conflict. Before specifying this option, make sure that the Amazon EKS add-on doesn’t manage settings that you need to manage, because those settings are overwritten with this option. For more information about other options for this setting, see [Addons](https://eksctl.io/usage/addons/) in the `eksctl` documentation. For more information about Amazon EKS Kubernetes field management, see [Determine fields you can customize for Amazon EKS add-ons](kubernetes-field-management.md).

You can customize `eksctl` through configuration files. For more information, see [Working with configuration values](https://eksctl.io/usage/addons/#working-with-configuration-values) in the `eksctl` documentation. The following example shows how to tolerate all taints.

```
# config.yaml
...

addons:
- name: aws-mountpoint-s3-csi-driver
  serviceAccountRoleARN: arn:aws:iam::111122223333:role/AmazonEKS_S3_CSI_DriverRole
  configurationValues: |-
    node:
      tolerateAllTaints: true
```

### AWS Management Console
<a name="console_s3_add_store_app_data"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, choose **Clusters**.

1. Choose the name of the cluster that you want to configure the Mountpoint for Amazon S3 CSI add-on for.

1. Choose the **Add-ons** tab.

1. Choose **Get more add-ons**.

1. On the **Select add-ons** page, do the following:

   1. In the **Amazon EKS add-ons** section, select the **Mountpoint for Amazon S3 CSI Driver** check box.

   1. Choose **Next**.

1. On the **Configure selected add-ons settings** page, do the following:

   1. Select the **Version** you’d like to use.

   1. For **Select IAM role**, select the name of an IAM role that you attached the Mountpoint for Amazon S3 CSI driver IAM policy to.

   1. (Optional) Update the **Conflict resolution method** after expanding the **Optional configuration settings**. If you select **Override**, one or more of the settings for the existing add-on can be overwritten with the Amazon EKS add-on settings. If you don’t enable this option and there’s a conflict with your existing settings, the operation fails. You can use the resulting error message to troubleshoot the conflict. Before selecting this option, make sure that the Amazon EKS add-on doesn’t manage settings that you need to self-manage.

   1. (Optional) Configure tolerations in the **Configuration values** field after expanding the **Optional configuration settings**.

   1. Choose **Next**.

1. On the **Review and add** page, choose **Create**. After the add-on installation is complete, you see your installed add-on.

### AWS CLI
<a name="awscli_s3_add_store_app_data"></a>

 **To add the Mountpoint for Amazon S3 CSI add-on using the AWS CLI** 

Run the following command. Replace *my-cluster* with the name of your cluster, *111122223333* with your account ID, and *AmazonEKS\_S3\_CSI\_DriverRole* with the name of the role that was created earlier.

```
aws eks create-addon --cluster-name my-cluster --addon-name aws-mountpoint-s3-csi-driver \
  --service-account-role-arn arn:aws:iam::111122223333:role/AmazonEKS_S3_CSI_DriverRole
```

You can customize the command with the `--configuration-values` flag. The following alternative example shows how to tolerate all taints.

```
aws eks create-addon --cluster-name my-cluster --addon-name aws-mountpoint-s3-csi-driver \
  --service-account-role-arn arn:aws:iam::111122223333:role/AmazonEKS_S3_CSI_DriverRole \
  --configuration-values '{"node":{"tolerateAllTaints":true}}'
```

## Step 4: Configure Mountpoint for Amazon S3
<a name="s3-configure-mountpoint"></a>

In most cases, you can configure Mountpoint for Amazon S3 with only a bucket name. For instructions on configuring Mountpoint for Amazon S3, see [Configuring Mountpoint for Amazon S3](https://github.com/awslabs/mountpoint-s3/blob/main/doc/CONFIGURATION.md) on GitHub.
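In a Kubernetes context, most Mountpoint settings are passed as `mountOptions` on a persistent volume, and the bucket is named in the volume attributes. The following is a minimal sketch; the bucket name, Region, and volume names are placeholders.

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv
spec:
  capacity:
    storage: 1200Gi # Ignored by the driver, but required by Kubernetes
  accessModes:
    - ReadWriteMany
  mountOptions: # Passed through to Mountpoint for Amazon S3
    - allow-delete
    - region us-west-2
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-driver-volume # Must be unique for each volume
    volumeAttributes:
      bucketName: amzn-s3-demo-bucket # Placeholder bucket name
```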

## Step 5: Deploy a sample application
<a name="s3-sample-app"></a>

You can deploy a sample application that uses static provisioning with an existing Amazon S3 bucket. For more information, see [Static provisioning](https://github.com/awslabs/mountpoint-s3-csi-driver/blob/main/examples/kubernetes/static_provisioning/README.md) on GitHub.
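The GitHub example follows the shape sketched below: a claim bound to a statically provisioned S3-backed volume, consumed by a Pod. The names `s3-pv`, `s3-claim`, and `s3-app` are illustrative, and a matching `PersistentVolume` backed by the S3 CSI driver is assumed to exist already.

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: "" # Required for static provisioning
  resources:
    requests:
      storage: 1200Gi # Ignored by the driver; must match the PV capacity
  volumeName: s3-pv # Assumes an existing S3-backed PersistentVolume
---
apiVersion: v1
kind: Pod
metadata:
  name: s3-app
spec:
  containers:
    - name: app
      image: amazonlinux
      command: ["/bin/sh", "-c", "echo hello >> /data/out.txt && sleep 3600"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data # Objects in the bucket appear as files here
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: s3-claim
```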

# Remove the Mountpoint for Amazon S3 Amazon EKS add-on
<a name="removing-s3-csi-eks-add-on"></a>

You have two options for removing the [Mountpoint for Amazon S3 CSI driver](s3-csi.md).
+  **Preserve add-on software on your cluster** – This option removes Amazon EKS management of any settings. It also removes the ability for Amazon EKS to notify you of updates and automatically update the Amazon EKS add-on after you initiate an update. However, it preserves the add-on software on your cluster. This option makes the add-on a self-managed installation, rather than an Amazon EKS add-on. With this option, there’s no downtime for the add-on. The commands in this procedure use this option.
+  **Remove add-on software entirely from your cluster** – We recommend that you remove the Amazon EKS add-on from your cluster only if there are no resources on your cluster that are dependent on it. For this option, delete `--preserve` from the command you use in this procedure.

If the add-on has an IAM role associated with it, the IAM role isn’t removed.

You can use the following tools to remove the Amazon S3 CSI add-on:
+  [eksctl](#eksctl_s3_remove_store_app_data) 
+  [AWS Management Console](#console_s3_remove_store_app_data) 
+  [AWS CLI](#awscli_s3_remove_store_app_data) 

## eksctl
<a name="eksctl_s3_remove_store_app_data"></a>

 **To remove the Amazon S3 CSI add-on using `eksctl` ** 

Replace *my-cluster* with the name of your cluster, and then run the following command.

```
eksctl delete addon --cluster my-cluster --name aws-mountpoint-s3-csi-driver --preserve
```

## AWS Management Console
<a name="console_s3_remove_store_app_data"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, choose **Clusters**.

1. Choose the name of the cluster that you want to remove the Mountpoint for Amazon S3 CSI add-on for.

1. Choose the **Add-ons** tab.

1. Choose **Mountpoint for Amazon S3 CSI Driver**.

1. Choose **Remove**.

1. In the **Remove: aws-mountpoint-s3-csi-driver** confirmation dialog box, do the following:

   1. If you want Amazon EKS to stop managing settings for the add-on while retaining the add-on software on your cluster, select **Preserve on cluster**. This lets you manage all of the settings of the add-on on your own.

   1. Enter `aws-mountpoint-s3-csi-driver`.

   1. Choose **Remove**.

## AWS CLI
<a name="awscli_s3_remove_store_app_data"></a>

 **To remove the Amazon S3 CSI add-on using the AWS CLI** 

Replace *my-cluster* with the name of your cluster, and then run the following command.

```
aws eks delete-addon --cluster-name my-cluster --addon-name aws-mountpoint-s3-csi-driver --preserve
```

# Enable snapshot functionality for CSI volumes
<a name="csi-snapshot-controller"></a>

Snapshot functionality allows for point-in-time copies of your data. For this capability to work in Kubernetes, you need both a CSI driver with snapshot support (such as the Amazon EBS CSI driver) and a CSI snapshot controller. The snapshot controller is available either as an Amazon EKS managed add-on or as a self-managed installation.

Here are some things to consider when using the CSI snapshot controller.
+ The snapshot controller must be installed alongside a CSI driver with snapshot functionality. For installation instructions of the Amazon EBS CSI driver, see [Use Kubernetes volume storage with Amazon EBS](ebs-csi.md).
+ Kubernetes doesn’t support snapshots of volumes being served via CSI migration, such as Amazon EBS volumes using a `StorageClass` with provisioner `kubernetes.io/aws-ebs`. Volumes must be created with a `StorageClass` that references the CSI driver provisioner, `ebs.csi.aws.com`.
+ Amazon EKS Auto Mode doesn’t include the snapshot controller. However, the storage capability of EKS Auto Mode is compatible with the snapshot controller.
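To confirm that snapshots can work for a volume, check the provisioner of the `StorageClass` it uses. A minimal `StorageClass` that references the Amazon EBS CSI driver might look like the following; the class name is illustrative.

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc # Illustrative name
provisioner: ebs.csi.aws.com # Not the in-tree kubernetes.io/aws-ebs
volumeBindingMode: WaitForFirstConsumer
```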

We recommend that you install the CSI snapshot controller through the Amazon EKS managed add-on. This add-on includes the custom resource definitions (CRDs) that are needed to create and manage snapshots on Amazon EKS. To add an Amazon EKS add-on to your cluster, see [Create an Amazon EKS add-on](creating-an-add-on.md). For more information about add-ons, see [Amazon EKS add-ons](eks-add-ons.md).
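After the snapshot controller and its CRDs are installed, taking a snapshot comes down to a `VolumeSnapshotClass` and a `VolumeSnapshot` that references an existing claim. The following is a minimal sketch; the names are placeholders, and `my-pvc` is assumed to be an existing claim provisioned by the Amazon EBS CSI driver.

```
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-aws-vsc # Illustrative name
driver: ebs.csi.aws.com
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: ebs-volume-snapshot
spec:
  volumeSnapshotClassName: csi-aws-vsc
  source:
    persistentVolumeClaimName: my-pvc # Assumed existing EBS-backed claim
```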

Alternatively, if you want a self-managed installation of the CSI snapshot controller, see [Usage](https://github.com/kubernetes-csi/external-snapshotter/blob/master/README.md#usage) in the upstream Kubernetes `external-snapshotter` on GitHub.