

# Accessing caches
<a name="accessing-caches"></a>

In the following topics, you can learn how to access your cache from a Linux instance. You can also learn how to use the `fstab` file to automatically remount your cache after a system restart.
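After the initial mount, an automatic remount is typically a single `/etc/fstab` entry. The following is a sketch only: `cache_dns_name` and `mountname` are placeholders for your cache's actual values, and the full set of recommended options is covered in the automatic mounting topic.

```
cache_dns_name@tcp:/mountname /mnt lustre defaults,relatime,flock,_netdev 0 0
```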

Before you can mount a cache, you must create, configure, and launch your related AWS resources. For detailed instructions, see [Getting started with Amazon File Cache](getting-started.md).

Next, you can install and configure the Lustre client on your compute instance.

**Topics**
+ [Installing the Lustre client](install-lustre-client.md)
+ [Mounting from an Amazon EC2 instance](mounting-ec2-instance.md)
+ [Mounting from Amazon Elastic Container Service](mounting-ecs.md)
+ [Mounting caches from on-premises or a peered Amazon VPC](mounting-on-premises.md)
+ [Mounting your cache automatically](mount-fs-auto-mount-onreboot.md)
+ [Mounting specific filesets](mounting-from-fileset.md)
+ [Unmounting caches](unmounting-fs.md)
+ [Working with Amazon EC2 Spot Instances](working-with-ec2-spot-instances.md)

# Installing the Lustre client
<a name="install-lustre-client"></a>

To mount your cache from a Linux instance, first install the open-source Lustre client. Amazon File Cache supports access from version 2.12 of the Lustre client. Then, depending on your operating system version, use one of the following procedures.

If your compute instance isn't running the Linux kernel specified in the installation instructions, and you can't change the kernel, you can build your own Lustre client. For more information, see [Compiling Lustre](http://wiki.lustre.org/Compiling_Lustre) on the Lustre Wiki.

## Amazon Linux 2 and Amazon Linux
<a name="lustre-client-amazon-linux"></a>

### To install the Lustre client on Amazon Linux 2
<a name="install-lustre-client-amazon-linux-2"></a>

1. Open a terminal on your client.

1. Determine which kernel is currently running on your compute instance by running the following command.

   ```
   uname -r
   ```

1. Do one of the following:
   + If the command returns a result equal to or greater than the minimum requirement for your kernel series, download and install the Lustre client. The kernel minimum requirements are as follows:
     + 5.10 kernel minimum requirement - 5.10.155-138.670.amzn2
     + 5.4 kernel minimum requirement - 5.4.219-126.411.amzn2
     + 4.14 kernel minimum requirement - 4.14.299-223.520.amzn2

     Download and install the Lustre client with the following command.

     ```
     sudo amazon-linux-extras install -y lustre
     ```
   + If the command returns a result less than the kernel minimum requirement, update the kernel and reboot your Amazon EC2 instance by running the following command.

     ```
     sudo yum -y update kernel && sudo reboot
     ```

     Confirm that the kernel has been updated using the **uname -r** command. Then download and install the Lustre client as described previously.
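The kernel check in the preceding steps can also be scripted. The following sketch compares version strings with `sort -V`; the kernel values shown are assumptions for illustration, and in practice you would set `current` from `uname -r`.

```shell
#!/bin/bash
# Assumed values for illustration; in practice: current=$(uname -r)
minimum="5.10.155-138.670.amzn2"
current="5.10.160-141.700.amzn2"

# sort -V orders version strings; if the minimum sorts first (or ties),
# the running kernel meets the requirement.
if [ "$(printf '%s\n' "$minimum" "$current" | sort -V | head -n1)" = "$minimum" ]; then
  echo "kernel meets minimum - install the Lustre client"
else
  echo "kernel too old - update and reboot first"
fi
```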

### To install the Lustre client on Amazon Linux
<a name="install-lustre-client-amazon-linux"></a>

1. Open a terminal on your client.

1. Determine which kernel is currently running on your compute instance by running the following command.

   ```
   uname -r
   ```

   The Lustre client requires Amazon Linux with a minimum kernel version of 4.14.299-152.520.amzn1.

1. Do one of the following:
   + If the command returns a result equal to or greater than the kernel minimum requirement, download and install the Lustre client using the following command.

     ```
     sudo yum install -y lustre-client
     ```
   +  If the command returns a result less than the kernel minimum requirement, update the kernel and reboot your Amazon EC2 instance by running the following command.

     ```
     sudo yum -y update kernel && sudo reboot
     ```

     Confirm that the kernel has been updated using the **uname -r** command. Then download and install the Lustre client as described previously.

## CentOS, Rocky Linux, and Red Hat
<a name="lustre-client-rhel"></a>

### To install the Lustre client on CentOS, Rocky Linux, and Red Hat 8.4 and newer
<a name="install-lustre-client-RH8.4"></a>

You can install and update Lustre client packages that are compatible with Red Hat Enterprise Linux (RHEL), Rocky Linux, and CentOS from the Lustre client yum package repository. These packages are signed to help verify that they haven't been tampered with before or during download. The repository installation fails if you don't install the corresponding public key on your system.

**To add the Lustre client yum package repository**

1. Open a terminal on your client.

1. Install the AWS Lustre rpm public key by using the following command.

   ```
   curl https://fsx-lustre-client-repo-public-keys.s3.amazonaws.com/fsx-rpm-public-key.asc -o /tmp/fsx-rpm-public-key.asc
   ```

1. Import the key by using the following command.

   ```
   sudo rpm --import /tmp/fsx-rpm-public-key.asc
   ```

1. Add the repository and update the package manager using the following command.

   ```
   sudo curl https://fsx-lustre-client-repo.s3.amazonaws.com/el/8/fsx-lustre-client.repo -o /etc/yum.repos.d/aws-fsx.repo
   ```

**To configure the Lustre client yum repository**

The Lustre client yum package repository is configured by default to install the Lustre client that's compatible with the kernel version that initially shipped with the latest supported CentOS, Rocky Linux, and RHEL 8 release. To install a Lustre client that's compatible with the kernel version you're using, edit the repository configuration file.

This section describes how to determine which kernel you're running, whether you need to edit the repository configuration, and how to edit the configuration file.

1. Determine which kernel is currently running on your compute instance by using the following command.

   ```
   uname -r
   ```

1. Do one of the following:
   + If the command returns `4.18.0-477*`, you don't need to modify the repository configuration. Continue to the **To install the Lustre client** procedure.
   + If the command returns `4.18.0-425*`, you must edit the repository configuration so that it points to the Lustre client for the CentOS, Rocky Linux, and RHEL 8.7 release.
   + If the command returns `4.18.0-372*`, you must edit the repository configuration so that it points to the Lustre client for the CentOS, Rocky Linux, and RHEL 8.6 release.
   + If the command returns `4.18.0-348*`, you must edit the repository configuration so that it points to the Lustre client for the CentOS, Rocky Linux, and RHEL 8.5 release.
   + If the command returns `4.18.0-305*`, you must edit the repository configuration so that it points to the Lustre client for the CentOS, Rocky Linux, and RHEL 8.4 release.

1. Edit the repository configuration file to point to a specific version of RHEL using the following command.

   ```
   sudo sed -i 's#8#specific_RHEL_version#' /etc/yum.repos.d/aws-fsx.repo
   ```

   For example, to point to release 8.7, substitute `specific_RHEL_version` with `8.7` in the command.

   ```
   sudo sed -i 's#8#8.7#' /etc/yum.repos.d/aws-fsx.repo
   ```

1. Use the following command to clear the yum cache.

   ```
   sudo yum clean all
   ```
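As an illustration of what the `sed` edit does, here is its effect on a hypothetical `baseurl` line of the kind found in `/etc/yum.repos.d/aws-fsx.repo` (the exact file contents are an assumption). Note that `sed 's#8#8.7#'` replaces only the first `8` on each line.

```shell
# Hypothetical repo line; the real file's contents may differ.
line='baseurl=https://fsx-lustre-client-repo.s3.amazonaws.com/el/8/$basearch'
echo "$line" | sed 's#8#8.7#'
# -> baseurl=https://fsx-lustre-client-repo.s3.amazonaws.com/el/8.7/$basearch
```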

**To install the Lustre client**
+ Install the packages from the repository using the following command.

  ```
  sudo yum install -y kmod-lustre-client lustre-client
  ```

#### Additional information (CentOS, Rocky Linux, and Red Hat 8.4 and newer)
<a name="lustre-client-RH8.4-additional-info"></a>

The preceding commands install the two packages that are necessary for mounting and interacting with your cache. The repository includes additional Lustre packages, such as a package containing the source code and packages containing tests, which you can optionally install. To list all available packages in the repository, use the following command. 

```
yum --disablerepo="*" --enablerepo="aws-fsx" list available
```

To download the source rpm, containing a tarball of the upstream source code and the set of patches that we've applied, use the following command.

```
sudo yumdownloader --source kmod-lustre-client
```

When you run yum update, a more recent version of the module is installed if available and the existing version is replaced. To prevent the currently installed version from being removed on update, add a line like the following to your `/etc/yum.conf` file.

```
installonlypkgs=kernel, kernel-PAE, installonlypkg(kernel), installonlypkg(kernel-module), 
              installonlypkg(vm), multiversion(kernel), kmod-lustre-client
```

This list includes the default install-only packages specified in the `yum.conf` man page, plus the `kmod-lustre-client` package.

### To install the Lustre client on CentOS and Red Hat 7.9 (x86\_64 instances)
<a name="install-lustre-client-Centos-7"></a>

You can install and update Lustre client packages that are compatible with Red Hat Enterprise Linux (RHEL) and CentOS from the Lustre client yum package repository. These packages are signed to help ensure that they haven't been tampered with before or during download. The repository installation fails if you don't install the corresponding public key on your system.

**To add the AWS Lustre client yum package repository**

1. Open a terminal on your client.

1. Install the AWS Lustre rpm public key using the following command.

   ```
   curl https://fsx-lustre-client-repo-public-keys.s3.amazonaws.com/fsx-rpm-public-key.asc -o /tmp/fsx-rpm-public-key.asc
   ```

1. Import the key using the following command.

   ```
   sudo rpm --import /tmp/fsx-rpm-public-key.asc
   ```

1. Add the repository and update the package manager using the following command.

   ```
   sudo curl https://fsx-lustre-client-repo.s3.amazonaws.com/el/7/fsx-lustre-client.repo -o /etc/yum.repos.d/aws-fsx.repo
   ```

The AWS Lustre client yum package repository is configured by default to install the Lustre client that's compatible with the kernel version that initially shipped with the latest supported CentOS 7 release. If you use the `uname -r` command to determine which kernel you are running, the command should return `3.10.0-1160*`.

**To install the Lustre client**
+ Install the Lustre client packages from the repository using the following command.

  ```
  sudo yum install -y kmod-lustre-client lustre-client
  ```

#### Additional information (CentOS and Red Hat 7.9)
<a name="lustre-client-Centos-7-additional-info"></a>

The preceding commands install the two packages that are necessary for mounting and interacting with your cache. The repository includes additional Lustre packages, such as a package containing the source code and packages containing tests, which you can optionally install. To list all available packages in the repository, use the following command. 

```
yum --disablerepo="*" --enablerepo="aws-fsx" list available
```

To download the source rpm containing a tarball of the upstream source code and the set of patches that we've applied, use the following command.

```
sudo yumdownloader --source kmod-lustre-client
```

When you run yum update, a more recent version of the module is installed if available, and the existing version is replaced. To prevent the currently installed version from being removed on update, add a line like the following to your `/etc/yum.conf` file.

```
installonlypkgs=kernel, kernel-bigmem, kernel-enterprise, kernel-smp,
              kernel-debug, kernel-unsupported, kernel-source, kernel-devel, kernel-PAE,
              kernel-PAE-debug, kmod-lustre-client
```

This list includes the default install-only packages specified in the `yum.conf` man page, plus the `kmod-lustre-client` package.

### To install the Lustre client on CentOS 7.9 (Arm-based AWS Graviton-powered instances)
<a name="install-lustre-client-Centos-7-arm"></a>

You can install Lustre client packages from the AWS Lustre client yum package repository that are compatible with CentOS 7 for Arm-based AWS Graviton-powered EC2 instances. These packages are signed to help verify they haven’t been tampered with before or during download. The repository installation fails if you don't install the corresponding public key on your system.

**To add the AWS Lustre client yum package repository**

1. Open a terminal on your client.

1. Install the AWS Lustre rpm public key using the following command.

   ```
   curl https://fsx-lustre-client-repo-public-keys.s3.amazonaws.com/fsx-rpm-public-key.asc -o /tmp/fsx-rpm-public-key.asc
   ```

1. Import the key using the following command.

   ```
   sudo rpm --import /tmp/fsx-rpm-public-key.asc
   ```

1. Add the repository and update the package manager using the following command.

   ```
   sudo curl https://fsx-lustre-client-repo.s3.amazonaws.com/centos/7/fsx-lustre-client.repo -o /etc/yum.repos.d/aws-fsx.repo
   ```

The AWS Lustre client yum package repository is configured by default to install the Lustre client that is compatible with the kernel version that initially shipped with the latest supported CentOS 7 release. If you use the `uname -r` command to determine which kernel you are running, the command should return `4.18.0-193*`.

**To install the Lustre client**
+ Install the packages from the repository using the following command.

  ```
  sudo yum install -y kmod-lustre-client lustre-client
  ```

#### Additional information (CentOS 7.9 for Arm-based AWS Graviton-powered EC2 instances)
<a name="lustre-client-Centos-7-arm-additional-info"></a>

The preceding commands install the two packages that are necessary for mounting and interacting with your cache. The repository includes additional Lustre packages, such as a package containing the source code and packages containing tests, which you can optionally install. To list all available packages in the repository, use the following command. 

```
yum --disablerepo="*" --enablerepo="aws-fsx" list available
```

To download the source rpm, containing a tarball of the upstream source code and the set of patches that we've applied, use the following command.

```
sudo yumdownloader --source kmod-lustre-client
```

When you run yum update, a more recent version of the module is installed if available, and the existing version is replaced. To prevent the currently installed version from being removed on update, add a line like the following to your `/etc/yum.conf` file.

```
installonlypkgs=kernel, kernel-bigmem, kernel-enterprise, kernel-smp,
              kernel-debug, kernel-unsupported, kernel-source, kernel-devel, kernel-PAE,
              kernel-PAE-debug, kmod-lustre-client
```

This list includes the default install-only packages specified in the `yum.conf` man page, plus the `kmod-lustre-client` package.

## Ubuntu
<a name="lustre-client-ubuntu"></a>

### To install the Lustre client on Ubuntu 22.04
<a name="install-lustre-client-Ubuntu-22"></a>

Starting with kernel `5.15.0.1020-aws`, Ubuntu 22.04.1 LTS is supported for Lustre 2.12 clients, for both x86- and Arm-based instances.

**Note**  
FSx-vended Lustre clients are not supported on kernel 5.19.

You can get Lustre packages from the Ubuntu 22.04 AWS Lustre repository. To validate that the contents of the repository haven’t been tampered with before or during download, a GNU Privacy Guard (GPG) signature is applied to the metadata of the repository. Installing the repository fails unless you have the correct public GPG key installed on your system.

1. Open a terminal on your client.

1. Follow these steps to add the AWS Lustre client repository:

   1. If you haven’t previously registered an AWS Lustre client Ubuntu repository on your client instance, download and install the required public key. Use the following command.

      ```
      wget -O - https://fsx-lustre-client-repo-public-keys.s3.amazonaws.com/fsx-ubuntu-public-key.asc | gpg --dearmor | sudo tee /usr/share/keyrings/fsx-ubuntu-public-key.gpg >/dev/null
      ```

   1. Add the AWS Lustre package repository to your local package manager using the following command.

      ```
      sudo bash -c 'echo "deb [signed-by=/usr/share/keyrings/fsx-ubuntu-public-key.gpg] https://fsx-lustre-client-repo.s3.amazonaws.com/ubuntu jammy main" > /etc/apt/sources.list.d/fsxlustreclientrepo.list && apt-get update'
      ```

1. Determine which kernel is currently running on your client instance, and update as needed. The Lustre client on Ubuntu 22.04 requires kernel `5.15.0.1020-aws` or later for both x86-based EC2 instances and Arm-based EC2 instances powered by AWS Graviton processors.

   1. Run the following command to determine which kernel is running.

      ```
      uname -r
      ```

   1. Run the following command to update to the latest Ubuntu kernel and Lustre version and then reboot.

      ```
      sudo apt install -y linux-aws lustre-client-modules-aws && sudo reboot
      ```

      If your kernel version is `5.15.0.1020-aws` or later (on either x86-based or Graviton-based EC2 instances) and you don't want to update to the latest kernel version, you can install Lustre for the current kernel with the following command.

      ```
      sudo apt install -y lustre-client-modules-$(uname -r)
      ```

      The two AWS Lustre packages that are necessary for mounting and interacting with your cache are installed. You can optionally install additional related packages such as a package containing the source code and packages containing tests that are included in the repository.

   1. List all available packages in the repository by using the following command. 

      ```
      sudo apt-cache search ^lustre
      ```

   1. (Optional) If you want your system upgrade to also always upgrade Lustre client modules, verify that the `lustre-client-modules-aws` package is installed using the following command.

      ```
      sudo apt install -y lustre-client-modules-aws
      ```
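In the repository line added earlier, `jammy` is the Ubuntu 22.04 codename. As a sketch, the same `deb` line can be assembled from the codename; deriving it from `/etc/os-release` is an assumption about the instance image.

```shell
# "jammy" is the Ubuntu 22.04 codename; on the instance you could derive it with:
# codename=$(. /etc/os-release && echo "$VERSION_CODENAME")
codename="jammy"
echo "deb [signed-by=/usr/share/keyrings/fsx-ubuntu-public-key.gpg] https://fsx-lustre-client-repo.s3.amazonaws.com/ubuntu ${codename} main"
```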

### To install the Lustre client on Ubuntu 20.04
<a name="install-lustre-client-Ubuntu-20"></a>

Starting with kernel `5.15.0.1020-aws`, Ubuntu 20.04.5 LTS is supported for Lustre 2.12 clients, for both x86- and Arm-based instances.

You can get Lustre packages from the Ubuntu 20.04 AWS Lustre repository. To validate that the contents of the repository haven’t been tampered with before or during download, a GNU Privacy Guard (GPG) signature is applied to the metadata of the repository. Installing the repository fails unless you have the correct public GPG key installed on your system.

1. Open a terminal on your client.

1. Follow these steps to add the AWS Lustre client Ubuntu repository:

   1. If you haven't previously registered an AWS Lustre Ubuntu repository on your client instance, download and install the required public key. Use the following command.

      ```
      wget -O - https://fsx-lustre-client-repo-public-keys.s3.amazonaws.com/fsx-ubuntu-public-key.asc | gpg --dearmor | sudo tee /usr/share/keyrings/fsx-ubuntu-public-key.gpg >/dev/null
      ```

   1. Add the AWS Lustre package repository to your local package manager using the following command.

      ```
      sudo bash -c 'echo "deb [signed-by=/usr/share/keyrings/fsx-ubuntu-public-key.gpg] https://fsx-lustre-client-repo.s3.amazonaws.com/ubuntu focal main" > /etc/apt/sources.list.d/fsxlustreclientrepo.list && apt-get update'
      ```

1. Determine which kernel is currently running on your client instance, and update as needed. The Lustre client on Ubuntu 20.04 requires kernel `5.15.0.1020-aws` or later for both x86-based EC2 instances and Arm-based EC2 instances powered by AWS Graviton processors. 

   1. Run the following command to determine which kernel is running.

      ```
      uname -r
      ```

   1. Run the following command to update to the latest Ubuntu kernel and Lustre version and then reboot.

      ```
      sudo apt install -y linux-aws lustre-client-modules-aws && sudo reboot
      ```

      If your kernel version is `5.15.0.1020-aws` or later (on either x86-based or Graviton-based EC2 instances) and you don't want to update to the latest kernel version, you can install Lustre for the current kernel with the following command.

      ```
      sudo apt install -y lustre-client-modules-$(uname -r)
      ```

      The two AWS Lustre packages that are necessary for mounting and interacting with your cache are installed. You can optionally install additional related packages such as a package containing the source code and packages containing tests that are included in the repository.

   1. List all available packages in the repository by using the following command. 

      ```
      sudo apt-cache search ^lustre
      ```

   1. (Optional) If you want your system upgrade to also always upgrade Lustre client modules, verify that the `lustre-client-modules-aws` package is installed using the following command.

      ```
      sudo apt install -y lustre-client-modules-aws
      ```

### To install the Lustre client on Ubuntu 18.04
<a name="install-lustre-client-Ubuntu-18"></a>

Starting with kernel `5.4.0.1085-aws`, Ubuntu 18.04.6 LTS is supported for Lustre 2.12 clients, for both x86- and Arm-based instances.

You can get Lustre packages from the Ubuntu 18.04 AWS Lustre repository. To validate that the contents of the repository haven’t been tampered with before or during download, a GNU Privacy Guard (GPG) signature is applied to the metadata of the repository. Installing the repository fails unless you have the correct public GPG key installed on your system.

1. Open a terminal on your client.

1. Follow these steps to add the AWS Lustre Ubuntu repository:

   1. If you haven't previously registered an AWS Lustre Ubuntu repository on your client instance, download and install the required public key. Use the following command.

      ```
      wget -O - https://fsx-lustre-client-repo-public-keys.s3.amazonaws.com/fsx-ubuntu-public-key.asc | gpg --dearmor | sudo tee /usr/share/keyrings/fsx-ubuntu-public-key.gpg >/dev/null
      ```

   1. Add the AWS Lustre package repository to your local package manager using the following command.

      ```
      sudo bash -c 'echo "deb [signed-by=/usr/share/keyrings/fsx-ubuntu-public-key.gpg] https://fsx-lustre-client-repo.s3.amazonaws.com/ubuntu bionic main" > /etc/apt/sources.list.d/fsxlustreclientrepo.list && apt-get update'
      ```

1. Determine which kernel is currently running on your client instance, and update as needed. The Lustre client on Ubuntu 18.04 requires kernel `5.4.0.1085-aws` or later for both x86-based EC2 instances and Arm-based EC2 instances powered by AWS Graviton processors.

   1. Run the following command to determine which kernel is running.

      ```
      uname -r
      ```

   1. Run the following command to update to the latest Ubuntu kernel and Lustre version and then reboot.

      ```
      sudo apt install -y linux-aws lustre-client-modules-aws && sudo reboot
      ```

      If your kernel version is `5.4.0.1085-aws` or later (on either x86-based or Graviton-based EC2 instances) and you don't want to update to the latest kernel version, you can install Lustre for the current kernel with the following command.

      ```
      sudo apt install -y lustre-client-modules-$(uname -r)
      ```

      The two Lustre packages that are necessary for mounting and interacting with your cache are installed. You can optionally install additional related packages, such as a package containing the source code and packages containing tests that are included in the repository.

   1. List all available packages in the repository by using the following command.

      ```
      sudo apt-cache search ^lustre
      ```

   1. (Optional) If you want your system upgrade to also always upgrade Lustre client modules, make sure that the `lustre-client-modules-aws` package is installed using the following command.

      ```
      sudo apt install -y lustre-client-modules-aws
      ```

# Mounting from an Amazon EC2 instance
<a name="mounting-ec2-instance"></a>

You can mount your cache from an Amazon EC2 instance.

**To mount your cache from Amazon EC2**

1. Connect to your Amazon EC2 instance.

1. Make a directory for the mount point by using the following command.

   ```
   $ sudo mkdir -p /mnt
   ```

1. Mount the cache to the directory that you created. Use the following command and replace the following items:
   + Replace `cache_dns_name` with the actual file cache's DNS name.
   + Replace `mountname` with the cache's mount name. This mount name is returned in the `CreateFileCache` API operation response. It's also returned in the response of the **describe-file-caches** AWS CLI command, and the [DescribeFileCaches](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeFileCaches.html) API operation.

   ```
   sudo mount -t lustre -o relatime,flock cache_dns_name@tcp:/mountname /mnt
   ```

   This command mounts your cache with these options:
   + `relatime` – Maintains `atime` (inode access times) data, but not for each time that a file is accessed. With this option enabled, `atime` data is written to disk only if the file has been modified since the `atime` data was last updated (`mtime`), or if the file was last accessed more than a certain amount of time ago (one day by default). `relatime` is required for [automatic cache eviction](cache-eviction.md#auto-cache-eviction) to work properly.
   + `flock` – Enables file locking for your cache. If you don't want file locking enabled, use the `mount` command without `flock`.
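   In scripts, the same mount command is often parameterized with shell variables. The following is a minimal sketch; the DNS name and mount name shown are placeholders, not real values.

   ```shell
   # Placeholder values -- substitute your cache's actual DNS name and mount name.
   cache_dns_name="fc-0123456789abcdef0.fsx.us-east-1.amazonaws.com"
   mountname="examplemount"
   mountpoint="/mnt"

   # The Lustre mount target has the form dns_name@tcp:/mountname.
   echo "sudo mount -t lustre -o relatime,flock ${cache_dns_name}@tcp:/${mountname} ${mountpoint}"
   ```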

1. Verify that the mount command was successful by listing the contents of the mount directory, `/mnt`, with the following command.

   ```
   $ ls /mnt
   import-path  lustre
   $
   ```

   You can also use the `df` command.

   ```
   $ df
   Filesystem                    1K-blocks    Used  Available Use% Mounted on
   devtmpfs                        1001808       0    1001808   0% /dev
   tmpfs                           1019760       0    1019760   0% /dev/shm
   tmpfs                           1019760     392    1019368   1% /run
   tmpfs                           1019760       0    1019760   0% /sys/fs/cgroup
   /dev/xvda1                      8376300 1263180    7113120  16% /
   123.456.789.0@tcp:/mountname 3547698816   13824 3547678848   1% /mnt
   tmpfs                            203956       0     203956   0% /run/user/1000
   ```

   The results show Amazon File Cache mounted on `/mnt`.

# Mounting from Amazon Elastic Container Service
<a name="mounting-ecs"></a>

You can access your cache from an Amazon Elastic Container Service (Amazon ECS) Docker container on an Amazon EC2 instance. You can do so by using either of the following options:

1. By mounting your cache from the Amazon EC2 instance that is hosting your Amazon ECS tasks, and exporting this mount point to your containers.

1. By mounting the cache directly inside your task container.

For more information about Amazon ECS, see [What is Amazon Elastic Container Service?](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) in the *Amazon Elastic Container Service Developer Guide*.

We recommend using option one ([Mounting from an Amazon EC2 instance hosting Amazon ECS tasks](#mounting-from-ecs-ec2)) because it provides better resource use, especially if you start many containers (more than five) on the same EC2 instance, or if your tasks are short-lived (less than five minutes). 

Use option two ([Mounting from a Docker container](#mounting-from-docker)) if you're unable to configure the EC2 instance or if your application requires the container's flexibility.

**Note**  
Mounting your cache on an AWS Fargate launch type isn't supported.

The following sections describe the procedures for each of the options for mounting your cache from an Amazon ECS container.

**Topics**
+ [Mounting from an Amazon EC2 instance hosting Amazon ECS tasks](#mounting-from-ecs-ec2)
+ [Mounting from a Docker container](#mounting-from-docker)

## Mounting from an Amazon EC2 instance hosting Amazon ECS tasks
<a name="mounting-from-ecs-ec2"></a>

This procedure shows how you can configure an Amazon ECS container instance running on Amazon EC2 to locally mount your cache. The procedure uses the `volumes` and `mountPoints` container properties to share the resource and make the cache accessible to locally running tasks. For more information, see [Launching an Amazon ECS container instance](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html) in the *Amazon Elastic Container Service Developer Guide*.

This procedure is for an Amazon ECS-Optimized Amazon Linux 2 AMI. If you're using another Linux distribution, see [Installing the Lustre client](install-lustre-client.md).

**To mount your cache from Amazon ECS on an EC2 instance**

1. When launching Amazon ECS instances, either manually or using an Auto Scaling group, add the lines in the following code example to the end of the **User data** field. Replace the following items in the example:
   + Replace `cache_dns_name` with the actual cache's DNS name.
   + Replace `mountname` with the cache's mount name.
   + Replace `mountpoint` with the cache's mount point, which you need to create.

   ```
   #!/bin/bash
   
   ...<existing user data>...
   
   fsx_dnsname=cache_dns_name
   fsx_mountname=mountname
   fsx_mountpoint=mountpoint
   amazon-linux-extras install -y lustre
   mkdir -p "$fsx_mountpoint"
   mount -t lustre ${fsx_dnsname}@tcp:/${fsx_mountname} ${fsx_mountpoint} -o relatime,flock
   ```

1. When creating your Amazon ECS tasks, add the following `volumes` and `mountPoints` container properties in the JSON definition. Replace `mountpoint` with the cache's mount point (such as `/mnt`).

   ```
   {
       "volumes": [
              {
                    "host": {
                         "sourcePath": "mountpoint"
                    },
                    "name": "Lustre"
              }
       ],
       "mountPoints": [
              {
                    "containerPath": "mountpoint",
                    "sourceVolume": "Lustre"
              }
       ]
   }
   ```

## Mounting from a Docker container
<a name="mounting-from-docker"></a>

The following procedure shows how you can configure an Amazon ECS task container to install the `lustre-client` package and mount your cache in it. The procedure uses an Amazon Linux (`amazonlinux`) Docker image, but a similar approach can work for other distributions.

**To mount your cache from a Docker container**

1. On your Docker container, install the `lustre-client` package and mount your cache with the `command` property. Replace the following items in the example:
   + Replace `cache_dns_name` with the actual file cache's DNS name.
   + Replace `mountname` with the cache's mount name.
   + Replace `mountpoint` with the cache's mount point.

   ```
   "command": [
     "/bin/sh -c \"amazon-linux-extras install -y lustre; mount -t lustre cache_dns_name@tcp:/mountname mountpoint -o relatime,flock;\""
   ],
   ```

1. Add `SYS_ADMIN` capability to your container to authorize it to mount your cache, using the `linuxParameters` property.

   ```
   "linuxParameters": {
     "capabilities": {
         "add": [
           "SYS_ADMIN"
         ]
      }
   }
   ```

# Mounting caches from on-premises or a peered Amazon VPC
<a name="mounting-on-premises"></a>

You can access your cache in two ways. One is from Amazon EC2 instances located in an Amazon VPC that's peered to the cache's VPC. The other is from on-premises clients that are connected to your cache's VPC using AWS Direct Connect or VPN.

Connect the client's VPC and your Amazon File Cache's VPC using either a VPC peering connection or a VPC transit gateway. When you use either option, Amazon EC2 instances that are in one VPC can access caches in another VPC, even if the VPCs belong to different accounts.

Before using the following procedure, you must set up either a VPC peering connection or a transit gateway.

A *transit gateway* is a network transit hub that you can use to interconnect your VPCs and on-premises networks. For more information about using VPC transit gateways, see [Getting Started with Transit Gateways](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-getting-started.html) in the *Amazon VPC Transit Gateways Guide*.

A *VPC peering connection* is a networking connection between two VPCs. This type of connection enables you to route traffic between them using private Internet Protocol version 4 (IPv4) or Internet Protocol version 6 (IPv6) addresses. You can use VPC peering to connect VPCs within the same AWS Region or between AWS Regions. For more information about VPC peering, see [What is VPC Peering?](https://docs.aws.amazon.com/vpc/latest/peering/Welcome.html) in the *Amazon VPC Peering Guide*.

You can mount your cache from outside its VPC using the IP address of its primary network interface. The primary network interface is the first network interface returned when you run the `aws fsx describe-file-caches` AWS Command Line Interface (AWS CLI) command. You can also get this IP address from the AWS Management Console.

**To retrieve the IP address of the primary network interface for a cache**

1. Open the Amazon File Cache console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. In the navigation pane, choose **Caches**.

1. Choose your cache from the dashboard.

1. From the **Summary** details page, choose **Network & security**.

1. For **Network interface**, choose the ID for your primary elastic network interface. This takes you to the Amazon EC2 console.

1. On the **Details** tab, find the **Primary private IPv4 IP**. This is the IP address for your primary network interface.

**Note**  
You can't use Domain Name System (DNS) name resolution when mounting an Amazon File Cache resource from outside the VPC it's associated with.
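
You can also retrieve the same IP address programmatically. The following sketch assumes that the AWS CLI is installed and configured; the cache ID `fc-0123456789abcdef0` is a hypothetical placeholder.

```
# Sketch: look up the primary (first) network interface of a cache,
# then resolve that interface's private IPv4 address.
# The cache ID below is a hypothetical placeholder; replace it with yours.
CACHE_ID="fc-0123456789abcdef0"

if aws sts get-caller-identity >/dev/null 2>&1; then
    # The first interface returned is the primary network interface.
    ENI_ID=$(aws fsx describe-file-caches \
        --file-cache-ids "$CACHE_ID" \
        --query 'FileCaches[0].NetworkInterfaceIds[0]' --output text)

    aws ec2 describe-network-interfaces \
        --network-interface-ids "$ENI_ID" \
        --query 'NetworkInterfaces[0].PrivateIpAddress' --output text
else
    echo "AWS CLI not configured; skipping lookup"
fi
```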

# Mounting your cache automatically
<a name="mount-fs-auto-mount-onreboot"></a>

You can update the `/etc/fstab` file on your Amazon EC2 instance after you connect to the instance for the first time, so that the instance mounts your cache each time it reboots.

## Using /etc/fstab to mount Amazon File Cache automatically
<a name="lustre-mount-fs-auto-mount-update-fstab"></a>

To automatically mount your cache directory when the Amazon EC2 instance reboots, you can use the `fstab` file. The `fstab` file contains information about the cache. The command `mount -a`, which runs during instance startup, mounts the caches listed in the `fstab` file.

**Note**  
Before you can update the `/etc/fstab` file of your EC2 instance, verify that you've already created your cache. For more information, see [Step 1: Create your cache](getting-started-step1.md) in the Getting Started exercise.

**To update the /etc/fstab file in your EC2 instance**

1. Connect to your EC2 instance, and open the `/etc/fstab` file in an editor.

1. Add the following line to the `/etc/fstab` file.

   Mount Amazon File Cache to the directory that you created. Use these commands and replace the following:
   + Replace *`/mnt`* with the directory that you want to mount your cache to.
   + Replace `cache_dns_name` with the actual cache's DNS name.
   + Replace `mountname` with the cache's mount name. This mount name is returned in the `CreateFileCache` API operation response. It's also returned in the response of the **describe-file-caches** AWS CLI command, and the [DescribeFileCaches](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeFileCaches.html) API operation.

   ```
   cache_dns_name@tcp:/mountname /mnt lustre defaults,relatime,flock,_netdev,x-systemd.automount,x-systemd.requires=network.service 0 0
   ```
**Warning**  
Use the `_netdev` option, which identifies network file systems, when mounting your cache automatically. If `_netdev` is missing, your EC2 instance might stop responding, because network file systems must be initialized after the compute instance starts its networking.

1. Save the changes to the file.

Your EC2 instance is now configured to mount the cache whenever it restarts.

**Note**  
In some cases, your Amazon EC2 instance might need to start regardless of the status of your mounted cache. In these cases, add the `nofail` option to your cache's entry in your `/etc/fstab` file.

The fields in the line of code that you added to the `/etc/fstab` file do the following.


| Field | Description | 
| --- | --- | 
|  `cache_dns_name@tcp:/`  |  The DNS name for your cache, which identifies it. You can get this name from the console or programmatically from the AWS CLI or an AWS SDK.  | 
|  `mountname`  | The mount name for the cache. You can get this name from the console, or programmatically from the AWS CLI using the **describe-file-caches** command or from the AWS API or an SDK using the [DescribeFileCaches](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeFileCaches.html) operation. | 
|  `/mnt`  |  The mount point for the cache on your EC2 instance.  | 
|  `lustre`  |  The type of cache.  | 
|  `mount options`  |  Mount options for the cache, presented as a comma-separated list: `defaults` (use the default mount options), `relatime` (update file access times relative to modify and change times), `flock` (enable file locking), and `_netdev` (mark the file system as a network device so that it's mounted only after networking is available).  | 
|  `x-systemd.automount,x-systemd.requires=network.service`  |  These options ensure that the auto mounter does not run until the network connectivity is online. For Ubuntu 22.04, use the `x-systemd.requires=systemd-networkd-wait-online.service` option instead of the `x-systemd.requires=network.service` option.  | 
|  `0`  |  A value that indicates whether the cache should be backed up by `dump`. This value should be `0`.  | 
|  `0`  |  A value that indicates the order in which `fsck` checks caches at boot. For caches, this value should be `0` to indicate that `fsck` should not run at startup.  | 

# Mounting specific filesets
<a name="mounting-from-fileset"></a>

By using the Lustre fileset feature, you can mount only a subset of the cache namespace, which is called a *fileset*. To mount a fileset of the cache on the client, specify the subdirectory path after the cache name. A fileset mount (also called a subdirectory mount) limits the cache namespace visibility on a specific client.

**Example – Mount a Lustre fileset**

1. Assume that you have a cache with the following directories:

   ```
   team1/dataset1/
   team2/dataset2/
   ```

1. You mount only the `team1/dataset1` fileset, making only this part of the cache locally visible on the client. Use these commands and replace the following items:
   + Replace `cache_dns_name` with the actual cache's DNS name.
   + Replace `mountname` with the cache's mount name. This mount name is returned in the `CreateFileCache` API operation response. It's also returned in the response of the **describe-file-caches** AWS CLI command, and the [DescribeFileCaches](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeFileCaches.html) API operation.

   ```
   mount -t lustre -o relatime,flock cache_dns_name@tcp:/mountname/team1/dataset1 /mnt
   ```

When using the Lustre fileset feature, keep the following in mind:
+ Before a cache directory can be mounted on a client, you must list the directory by running the `ls` command on a parent directory in the cache.
+ There are no constraints preventing a client from remounting the cache using a different fileset, or no fileset at all.
+ When using a fileset, some Lustre administrative commands requiring access to the `.lustre/` directory might not work, such as the `lfs fid2path` command.
+ If you plan to mount several subdirectories from the same cache on the same host, keep in mind that this consumes more resources than a single mount point, and it could be more efficient to mount the cache root directory only once instead.

For more information about the Lustre fileset feature, see the *Lustre Operations Manual* on the [Lustre documentation website](https://doc.lustre.org/lustre_manual.xhtml#SystemConfigurationUtilities.fileset).

# Unmounting caches
<a name="unmounting-fs"></a>

Before you delete a cache, we recommend that you unmount it from every Amazon EC2 instance that it's connected to. You can unmount a cache on your Amazon EC2 instance by running the `umount` command on the instance itself. You can't unmount a cache through the AWS CLI, the AWS Management Console, or through any of the AWS SDKs. To unmount a cache connected to an Amazon EC2 instance running Linux, use the `umount` command as follows:

```
umount /mnt 
```

We recommend that you don't specify any `umount` options that differ from the defaults.

You can verify that your cache has been unmounted by running the `df` command. This command displays the disk usage statistics for the Amazon File Caches currently mounted on your Linux-based Amazon EC2 instance. If the cache that you want to unmount isn't listed in the `df` command output, the cache is unmounted.

**Example – Identify the mount status of an Amazon File Cache resource and unmount it**  

```
$ df -T
Filesystem                                                                  Type   1K-blocks    Used  Available Use% Mounted on
cache_id.fsx.aws-region.amazonaws.com@tcp:/mountname lustre 3547708416   61440 3547622400   1% /mnt
/dev/sda1                                                                   ext4     8123812 1138920    6884644  15% /
```

```
$ umount /mnt
```

```
$ df -T 
```

```
Filesystem Type 1K-blocks    Used Available Use% Mounted on
/dev/sda1  ext4   8123812 1138920   6884644  15% /
```
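
Because `umount` fails when the path isn't mounted, a small guard makes the unmount step safe to re-run. The following is a sketch; `/mnt` matches the example mount point used above.

```
# Sketch: unmount only if the path is currently a mount point, so the
# step can be re-run safely. /mnt is the example mount point used above.
MOUNT_POINT="/mnt"

if mountpoint -q "$MOUNT_POINT" 2>/dev/null; then
    sudo umount "$MOUNT_POINT" && MSG="unmounted $MOUNT_POINT"
else
    MSG="$MOUNT_POINT is not mounted"
fi
echo "$MSG"
```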

# Working with Amazon EC2 Spot Instances
<a name="working-with-ec2-spot-instances"></a>

Amazon File Cache can be used with EC2 Spot Instances to significantly lower your Amazon EC2 costs. A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price. Amazon EC2 can interrupt your Spot Instance when the Spot price exceeds your maximum price, when the demand for Spot Instances rises, or when the supply of Spot Instances decreases.

When Amazon EC2 interrupts a Spot Instance, it provides a Spot Instance interruption notice, which gives the instance a two-minute warning before Amazon EC2 interrupts it. For more information, see [Spot Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html) in the *Amazon EC2 User Guide*. 

To verify that Amazon File Cache resources are unaffected by EC2 Spot Instance Interruptions, we recommend unmounting caches prior to terminating or hibernating EC2 Spot Instances. For more information, see [Unmounting caches](unmounting-fs.md). 

## Handling Amazon EC2 Spot Instance interruptions
<a name="handling-ec2-spot-interruptions-in-fsx"></a>

Amazon File Cache is built on the Lustre distributed file system where server and client instances cooperate to provide a performant and reliable file system. They maintain a distributed and coherent state across both client and server instances. Lustre servers delegate temporary access permissions to clients while they are actively doing I/O and caching file system data. Clients are expected to reply in a short period of time when servers request them to revoke their temporary access permissions. To protect the cache against misbehaving clients, servers can evict Lustre clients that do not respond after a few minutes. To avoid having to wait multiple minutes for a non-responding client to reply to the server request, it's important to cleanly unmount Lustre clients, especially before terminating EC2 Spot Instances. 

EC2 Spot sends a termination notice two minutes before shutting down an instance. We recommend that you automate the process of cleanly unmounting Lustre clients before terminating EC2 Spot Instances. 

**Example – Script to cleanly unmount terminating EC2 Spot Instances**  
This example script cleanly unmounts terminating EC2 Spot Instances by doing the following:  
+ Watches for Spot termination notices.
+ When it receives a termination notice:
  + Stops applications that are accessing the cache.
  + Unmounts the cache before the instance is terminated.

You can adapt the script as needed, especially for gracefully shutting down your application. For more information about best practices for handling Spot Instance interruptions, see [Best practices for handling EC2 Spot Instance interruptions](https://aws.amazon.com/blogs//compute/best-practices-for-handling-ec2-spot-instance-interruptions/).

```
#!/bin/bash

# TODO: Specify below the FSx mount point you are using
FSXPATH=/mnt

cd /

TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
if [ "$?" -ne 0 ]; then
    echo "Error running 'curl' command" >&2
    exit 1
fi

# Periodically check for termination
while sleep 5
do

    HTTP_CODE=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" -s -w %{http_code} -o /dev/null http://169.254.169.254/latest/meta-data/spot/instance-action)

    if [[ "$HTTP_CODE" -eq 401 ]] ; then
        # Refreshing Authentication Token
        TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 30")
        continue
    elif [[ "$HTTP_CODE" -ne 200 ]] ; then
        # If the return code is not 200, the instance is not going to be interrupted
        continue
    fi

    echo "Instance is getting terminated. Clean and unmount '$FSXPATH' ..."
    curl -H "X-aws-ec2-metadata-token: $TOKEN" -s http://169.254.169.254/latest/meta-data/spot/instance-action
    echo

    # Gracefully stop applications accessing the filesystem
    #
    # TODO: Replace with the proper command to stop your application if possible

    # Kill every process still accessing Lustre filesystem
    echo "Kill every process still accessing Lustre filesystem..."
    fuser -kMm -TERM "${FSXPATH}"; sleep 2
    fuser -kMm -KILL "${FSXPATH}"; sleep 2

    # Unmount the cache
    if ! umount -c "${FSXPATH}"; then
        echo "Error unmounting '$FSXPATH'. Processes accessing it:" >&2
        lsof "${FSXPATH}"

        echo "Retrying..."
        continue
    fi

    # Start a graceful shutdown of the host
    shutdown now

done
```
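
To run this watcher automatically at boot, one option is a systemd service unit such as the following sketch. The path `/usr/local/bin/spot-watcher.sh` is hypothetical; use the location where you saved the script.

```
[Unit]
Description=Watch for EC2 Spot interruption and cleanly unmount Amazon File Cache
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/local/bin/spot-watcher.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After placing the unit file in `/etc/systemd/system/`, enable it with `systemctl enable --now`.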