

# Performance for Amazon FSx for OpenZFS
<a name="performance"></a>

Amazon FSx for OpenZFS provides simple, high-performance file storage. In this section, we provide an overview of FSx for OpenZFS performance for all deployment types, and describe how your file system configuration impacts key performance dimensions. We also include some important tips and recommendations for maximizing the performance of your file system.

**Topics**
+ [File system performance](#zfs-fs-performance)
+ [Choosing a deployment type based on performance](#choosing-between)
+ [Choosing a storage class based on performance](#choosing-between-storage-classes)
+ [Migrating between deployment types and storage classes](#migrating-between-deployments)
+ [Performance on S3 access points](#access-points-performance)
+ [General tips for maximizing performance](#performance-tips-zfs)
+ [Monitoring performance](#measure-performance-cw)
+ [How FSx for OpenZFS file systems work with SSD storage](performance-ssd.md)
+ [How FSx for OpenZFS file systems work with Intelligent-Tiering](performance-intelligent-tiering.md)

## File system performance
<a name="zfs-fs-performance"></a>

File system performance is typically measured in latency, throughput, and I/O operations per second (IOPS). Amazon FSx for OpenZFS offers three deployment types: Multi-AZ (HA), Single-AZ (HA), and Single-AZ (non-HA). Both Single-AZ deployment types (HA and non-HA) are available in two generations, Single-AZ 1 and Single-AZ 2, with Single-AZ 2 offering higher maximum performance than Single-AZ 1. Each deployment option offers a different performance profile. In this section, we document the performance you can expect for frequently accessed data served from the in-memory or NVMe caches and for data accessed from disk for each deployment type. We also document the baseline performance your file system can always deliver, as well as the burst performance it can drive for short periods of time.

The specific level of performance a file system can provide is defined by its *provisioned throughput capacity*, which determines the size of the file server hosting the file system. Provisioned throughput capacity is equivalent to the baseline disk throughput supported by your file server. For data access from disk, your file system's performance also depends on the number of *provisioned SSD disk IOPS* configured for the file system's underlying disks. Note that the actual level of performance you can drive for your workload depends on a variety of factors. For more information, see [General tips for maximizing performance](#performance-tips-zfs).

## Choosing a deployment type based on performance
<a name="choosing-between"></a>

Both Single-AZ (non-HA) and Single-AZ (HA) offer two tiers of performance: Single-AZ 1 and Single-AZ 2. Single-AZ 2 (HA) is recommended for most use cases, given the higher levels of both performance and availability that it provides. Single-AZ 2 file systems offer double the performance scalability of Single-AZ 1, delivering up to 400,000 IOPS and 10 GBps of throughput for both reads and writes to persistent SSD storage.

In addition, Single-AZ 2 file systems include an up to 2.5 TB high-speed NVMe read cache that automatically caches your most recently-accessed data, making that data accessible at millions of IOPS and with latencies of a few hundred microseconds. Single-AZ 2 file systems are suitable for high-performance workloads such as media processing and rendering, financial analytics, and machine learning. Single-AZ 2 file systems are also appropriate for read-heavy workloads with frequently accessed datasets.

In addition to Single-AZ file systems, Amazon FSx for OpenZFS also offers Multi-AZ file systems, which provide higher levels of availability and durability along with the same levels of performance as Single-AZ 2. For more information on Multi-AZ (HA) file systems and choosing between deployment types, see [Availability and durability for Amazon FSx for OpenZFS](availability-durability.md).

For information on which deployment types are supported in each AWS Region, see [Availability by AWS Region](available-aws-regions.md).

## Choosing a storage class based on performance
<a name="choosing-between-storage-classes"></a>

Your file system's performance also depends on its storage class. FSx for OpenZFS offers two storage classes, Intelligent-Tiering (elastic) and SSD (provisioned).

With Intelligent-Tiering, your file system has fully elastic, low-cost storage, access to a built-in SSD-backed write log for low-latency writes, and an optional provisioned SSD read cache for low-latency reads. Intelligent-Tiering is recommended for most Network Attached Storage (NAS) datasets because it simplifies storage management and reduces costs while offering performance comparable to file systems using the SSD (provisioned) storage class for most workloads. Intelligent-Tiering is suitable for home directories, analytics, and other project-based workloads. For more information on performance for file systems that use Intelligent-Tiering, see [How FSx for OpenZFS file systems work with Intelligent-Tiering](performance-intelligent-tiering.md).

With SSD (provisioned) storage, your file system provides low-latency access to your full dataset. SSD (provisioned) is recommended for datasets that require the performance of all-flash storage across all data. SSD (provisioned) is suitable for databases and non-cache-friendly electronic design automation workloads. For more information on performance for file systems that use SSD (provisioned) storage, see [How FSx for OpenZFS file systems work with SSD storage](performance-ssd.md).

For information on which storage classes are supported in each AWS Region, see [Availability by AWS Region](available-aws-regions.md).

## Migrating between deployment types and storage classes
<a name="migrating-between-deployments"></a>

Once you create your file system, you cannot change its deployment type or storage class. However, there are several options that you can use to migrate data from your pre-existing file system to a new file system with your desired deployment type or storage class.
+ **Restoring from backup** ‐ You can create a new Single-AZ 2 file system by restoring from a backup of your Single-AZ 1 file system and choosing the desired deployment type. You can also create a Single-AZ (HA) file system from a Single-AZ (non-HA) file system. You cannot create a Multi-AZ file system from a Single-AZ backup or migrate between storage classes by restoring from a backup. For more information, see [Restoring backups](restoring-backups.md).
+ **Using on-demand replication** ‐ You can use on-demand replication to synchronize data between file systems with different deployment types or storage classes. For more information, see [Protecting your data with on-demand replication](on-demand-replication.md).

For more information on how to migrate your data, see [Migrating your existing file storage to Amazon FSx for OpenZFS](migrating-fsx-openzfs.md).

## Performance on S3 access points
<a name="access-points-performance"></a>

Amazon S3 access points for FSx for OpenZFS file systems deliver latency in the tens of milliseconds range, consistent with S3 bucket access. Requests-per-second and throughput performance scale with your Amazon FSx file system's provisioned throughput capacity and SSD IOPS. For example, your applications can achieve up to 3,500 PUT or 5,500 GET requests per second (consistent with what you can achieve per Amazon S3 partitioned prefix), 3.5 GBps of write (PUT) throughput, and 10 GBps of read (GET) throughput on a file system provisioned with the maximum throughput capacity level.

## General tips for maximizing performance
<a name="performance-tips-zfs"></a>

FSx for OpenZFS file systems are designed to deliver the maximum performance of your file system across your clients in aggregate, whether you are supporting data access from a single client, or thousands of clients. The following sections provide some practical tips on how to maximize client performance.

### Client considerations
<a name="client-considerations"></a>

#### Amazon EC2 instances
<a name="ec2-instances"></a>

When launching the Amazon EC2 instances that will work with your FSx for OpenZFS file system, ensure that they can support the level of performance your file system needs to deliver: they must have sufficient compute, memory, and network capacity to drive the throughput, IOPS, and latencies provided by your FSx for OpenZFS file system.

To determine your EC2 instance's compute and memory capacity, see [Instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) in the *Amazon EC2 User Guide for Linux Instances*. To determine its network capacity, see [Amazon EC2 instance network bandwidth](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-network-bandwidth.html) in the same guide. The performance characteristics of FSx for OpenZFS file systems don't depend on the use of Amazon EC2–optimized instances.

#### NFS nconnect
<a name="nfs-nconnect"></a>

With FSx for OpenZFS, NFS clients can use the `nconnect` mount option to have multiple TCP connections (up to 16) associated with a single NFS mount. Such an NFS client multiplexes file operations onto multiple TCP connections (multi-flow) in a round-robin fashion to obtain improved performance beyond single TCP connection (single-flow) limits. For more information on single-flow limits, see [Amazon EC2 instance network bandwidth](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-network-bandwidth.html) in the *Amazon EC2 User Guide for Linux Instances*.

The following command demonstrates how to use the `nconnect` mount option to mount an FSx for OpenZFS volume with a maximum of 16 simultaneous connections:

```
sudo mount -t nfs -o nconnect=16 filesystem_dns_name:/vol_path /localpath
```

The `nconnect` mount option is supported for all NFS versions (v3, v4.0, v4.1, v4.2). NFS `nconnect` is supported by default in Linux kernel versions 5.3 and above, including the latest Ubuntu 18.04 LTS. In addition, RHEL 8.3 supports `nconnect` by way of a backport into the `4.18.0-240.el8` kernel and later.
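
If you want to confirm that a mount is actually using multiple connections, you can check the client's kernel version and inspect the active NFS mount options. A minimal check, assuming the mount point `/localpath` from the previous example:

```
# nconnect requires Linux kernel 5.3 or later on most distributions
uname -r

# After mounting, the active mount options (including nconnect) appear in the mount statistics
nfsstat -m | grep -B 1 nconnect
```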

#### NFS v3
<a name="nfs-v3"></a>

FSx for OpenZFS file systems flexibly support multiple versions of the NFS protocol (v3, v4.0, v4.1, v4.2). While more recent versions of NFS can better support simultaneous access from many clients (due to a more robust file-locking mechanism) and client-side caching, NFS v3 may still provide improved latency, throughput, and IOPS performance for performance-sensitive workloads. You can mount using NFS v3 from Linux, Windows, or macOS EC2 instances. For more information, see [Step 2: Mount your file system from an Amazon EC2 instance](getting-started.md#getting-started-step2).

The following example illustrates how to specify NFS v3 when mounting an FSx for OpenZFS volume:

```
sudo mount -t nfs -o nfsvers=3 fs-dns-name:/vol_path /local_path
```

#### NFS delegations
<a name="nfs-delegations"></a>

To improve the ability of NFS clients to cache data locally, NFS v4 introduced NFS delegations, or the ability of the server to delegate certain responsibilities to the client. If the client is granted a read delegation, it is assured that no other client has the ability to write to the file for the duration of the delegation, meaning that the client can read from its local copy instead of having to go back to the file server.

FSx for OpenZFS file systems support NFS v4 file read delegations. To take advantage of this capability, ensure your clients are mounting with NFS v4.0 or higher.
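
For example, the following command mounts a volume with NFS v4.1 so that the client can benefit from read delegations (the file system DNS name and paths are placeholders):

```
sudo mount -t nfs -o nfsvers=4.1 filesystem_dns_name:/vol_path /localpath
```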

#### Request model
<a name="request-model"></a>

When you mount your file system, asynchronous writes are enabled by default (that is, `-o async`). With asynchronous writes, pending write operations are buffered on the client before they are written to your Amazon FSx file system, enabling lower latencies for these operations. A client that has enabled synchronous writes (that is, `-o sync`), or one that opens files using an option that bypasses the cache (for example, `O_DIRECT`), issues synchronous requests, which means that every operation incurs a round-trip between your client and the file server. We recommend using the default asynchronous write option to maximize client performance.
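
As an illustration, the following commands contrast the two request models. Because `async` is the client default, listing it explicitly is optional; the DNS name and paths are placeholders:

```
# Asynchronous writes (default): writes are buffered on the client, enabling lower latencies
sudo mount -t nfs -o async filesystem_dns_name:/vol_path /localpath

# Synchronous writes: every write incurs a round trip to the file server
sudo mount -t nfs -o sync filesystem_dns_name:/vol_path /localpath
```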

#### Other recommended mount options
<a name="other-mount-options"></a>

To improve the performance of your file system, you can also configure the following options when mounting your file system:
+ `rsize=1048576` – Sets the maximum number of bytes of data that the NFS client can receive for each network READ request to 1048576 bytes (1 MB). Due to lower memory capacity on file systems with 64 MBps and 128 MBps of provisioned throughput, these file systems will only accept a maximum `rsize` of 262144 and 524288 bytes, respectively.
+ `wsize=1048576` – Sets the maximum number of bytes of data that the NFS client can send for each network WRITE request to 1048576 bytes (1 MB). Due to lower memory capacity on file systems with 64 MBps and 128 MBps of provisioned throughput, these file systems will only accept a maximum `wsize` of 262144 and 524288 bytes, respectively.
+ `timeo=600` – Sets the timeout value that the NFS client uses to wait for a response before it retries an NFS request to 600 deciseconds (60 seconds).
+ `_netdev` – When present in **/etc/fstab**, prevents the client from attempting to mount the FSx for OpenZFS volume until the network has been enabled.

The following example uses sample values.

```
sudo mount -t nfs -o rsize=1048576,wsize=1048576,timeo=600 fs-01234567890abcdef1.fsx.us-east-1.amazonaws.com:/fsx/vol1 /fsx
```
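
If you mount through **/etc/fstab**, the same options apply, with `_netdev` added so that the client waits for the network before mounting. A sample entry, using the same illustrative DNS name and paths:

```
fs-01234567890abcdef1.fsx.us-east-1.amazonaws.com:/fsx/vol1 /fsx nfs nfsvers=4.1,rsize=1048576,wsize=1048576,timeo=600,_netdev 0 0
```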

### File system and volume configurations
<a name="fs-vol-configurations"></a>

#### Storage capacity utilization
<a name="storage-capacity-utilization"></a>

As the amount of used storage space gets closer to the total available storage capacity, Amazon FSx (like other file systems) spends more time finding suitable places to store new files and their metadata. This leads to higher latency for operations that modify files, which can negatively impact overall performance. To avoid this performance impact, we recommend keeping storage utilization of SSD (provisioned) file systems below 80% of total capacity. If needed, you can increase your maximum storage capacity at any time, without disruption to your end users or applications. For more information, see [Modifying provisioned SSD storage capacity and IOPS](managing-storage-capacity.md).
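
For example, the following AWS CLI sketch increases a file system's SSD storage capacity; the file system ID and capacity value are placeholders:

```
aws fsx update-file-system \
    --file-system-id fs-01234567890abcdef1 \
    --storage-capacity 2048
```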

You do not need to modify storage capacity if your file system uses the Intelligent-Tiering storage class. For more information, see [How FSx for OpenZFS file systems work with Intelligent-Tiering](performance-intelligent-tiering.md).

#### Provisioned throughput capacity and in-memory cache
<a name="provisioned-throughput-in-mem-cache"></a>

In addition to defining the throughput and IOPS that a file system can deliver, a file system's provisioned throughput capacity also determines the amount of in-memory cache on your file server. Increasing your file system's throughput capacity improves workload performance in two ways.

First, it increases the throughput and IOPS you can drive from disk (disk I/O) and from in-memory cache. Second, by increasing the amount of in-memory cache, you can store more data in your file server's in-memory cache, which drives higher cached performance for larger workloads.

Some request- or metadata-intensive workloads will also benefit from a larger file server in-memory cache. These types of workloads can generate and store a large volume of metadata in the in-memory cache. To ensure the size of your file server's in-memory cache is not a bottleneck for your file system performance, we recommend provisioning at least 128 MBps of throughput capacity for these types of workloads. 
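
If you determine that a larger in-memory cache would help your workload, you can modify throughput capacity with the AWS CLI. A sketch, with a placeholder file system ID and throughput value:

```
aws fsx update-file-system \
    --file-system-id fs-01234567890abcdef1 \
    --open-zfs-configuration ThroughputCapacity=256
```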

#### NFS export options (sync and async)
<a name="perf-nfs-export-options"></a>

On the file server side, the `sync` or `async` NFS export option can impact performance. (This is distinct from the similarly-named option you use when mounting your FSx for OpenZFS volume on your client.) This option determines whether your file server will acknowledge client I/O requests as complete when they are written to the file server’s in-memory cache (`async`), or only after they are committed to the file server’s persistent disks (`sync`). `sync` is the default option and is generally recommended for most workloads.

If you have performance-intensive workloads that can use an FSx for OpenZFS volume as temporary storage for shorter-term data processing or workloads that are resilient to data loss, you can use the `async` option to achieve substantially higher performance. Because an FSx for OpenZFS volume exported with the `async` option will acknowledge client writes before they are committed to durable disk storage, clients can write data to the file server at a significantly faster rate. However, this performance comes at the cost of losing data from acknowledged writes that have not yet been committed to the server's disks, in the event of a file server crash.
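
For example, you can set a volume's NFS export options with the AWS CLI. The following sketch exports a volume to all clients with the `async` option; the volume ID and client configuration are placeholders:

```
aws fsx update-volume \
    --volume-id fsvol-0123456789abcdef0 \
    --open-zfs-configuration '{
        "NfsExports": [{
            "ClientConfigurations": [{
                "Clients": "*",
                "Options": ["rw", "crossmnt", "async"]
            }]
        }]
    }'
```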

#### Data compression
<a name="perf-data-compression"></a>

For read-heavy workloads, compression can significantly improve the overall throughput performance of your file system because it reduces the amount of data that needs to be sent between the underlying storage and the file server. FSx for OpenZFS volumes support the following data compression algorithms.
+ *Zstandard compression* delivers very high levels of on-disk data compression, with higher read throughput but lower write throughput than LZ4 compression.
+ *LZ4 compression* delivers higher write throughput performance, but achieves lower levels of data compression than Zstandard compression.

With data compression, you can improve your read throughput on data accessed from disk up to the same levels you deliver for frequently accessed cached data. The specific improvement depends on how much compression reduces the size of your dataset. Your effective throughput is roughly the product of your provisioned disk throughput and your compression ratio (defined as the ratio of the size of the uncompressed data to the size of the compressed data). For the highest provisioned throughput level (4,096 MBps), common Zstandard compression ratios of 2-3x can increase your effective read throughput to 8-12 GBps.

You can change a volume's data compression to improve performance. Changing this property affects only newly-written data on the volume.
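
For example, the following AWS CLI sketch switches a volume to Zstandard compression (valid `DataCompressionType` values are `NONE`, `ZSTD`, and `LZ4`; the volume ID is a placeholder):

```
aws fsx update-volume \
    --volume-id fsvol-0123456789abcdef0 \
    --open-zfs-configuration DataCompressionType=ZSTD
```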

#### ZFS record size
<a name="record-size-performance"></a>

The ZFS record size specifies a suggested block size for files in the volume. This property is designed solely for use with databases and other workloads that access files in fixed-size records. ZFS automatically tunes block sizes according to internal algorithms optimized for typical access patterns. When you create a volume, the default record size for file systems using the Intelligent-Tiering storage class is 1024 KiB. The default for all other file systems is 128 KiB. General purpose workflows perform well using the default record size, and we don't recommend changing it, as it may adversely affect performance.

For database workflows that create very large files but access them in small random chunks, specifying a record size greater than or equal to the record size of the database can result in significant performance gains. For databases that use a fixed disk block or record size for I/O, set the ZFS record size to match it. See [Dataset record size](https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Workload%20Tuning.html#dataset-recordsize) in the OpenZFS documentation for more information.

Streaming workflows such as multimedia and video can benefit from setting a larger record size than the default value. For more information about setting the record size on a volume, see [Managing Amazon FSx for OpenZFS volumes](managing-volumes.md).

You can change a volume's record size to make performance improvements. Changing the volume record size affects only files created afterward; existing files are unaffected.
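
For example, the following AWS CLI sketch sets a volume's record size to 1024 KiB for a streaming workload; the volume ID is a placeholder:

```
aws fsx update-volume \
    --volume-id fsvol-0123456789abcdef0 \
    --open-zfs-configuration RecordSizeKiB=1024
```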

## Monitoring performance
<a name="measure-performance-cw"></a>

Every minute, FSx for OpenZFS emits usage metrics to Amazon CloudWatch. You can use these metrics to help identify opportunities to improve the performance your clients can drive from your file system.

You can investigate aggregate file system performance with the `Sum` statistic of each metric. For example, the `Sum` of the `DataReadBytes` statistic reports the total read throughput by file system or volume, and the `Sum` of the `DataWriteBytes` statistic reports the total write throughput by file system or volume.
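
For example, the following AWS CLI sketch retrieves one hour of per-minute total read throughput for a file system; the file system ID and time range are placeholders:

```
aws cloudwatch get-metric-statistics \
    --namespace AWS/FSx \
    --metric-name DataReadBytes \
    --dimensions Name=FileSystemId,Value=fs-01234567890abcdef1 \
    --statistics Sum \
    --period 60 \
    --start-time 2024-01-01T00:00:00Z \
    --end-time 2024-01-01T01:00:00Z
```

Dividing each `Sum` value by the period (60 seconds) gives the average throughput in bytes per second for that minute.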

For more information on monitoring your file system’s performance, see [Monitoring with Amazon CloudWatch](monitoring-cloudwatch.md).

# How FSx for OpenZFS file systems work with SSD storage
<a name="performance-ssd"></a>

FSx for OpenZFS file systems that use the SSD (provisioned) storage class consist of a file server that clients communicate with and a set of disks attached to that file server. Each file server employs a fast, in-memory cache to enhance performance for the most frequently accessed data. In addition to the in-memory cache, Single-AZ 2 file systems also provide a Non-Volatile Memory Express (NVMe) cache that stores up to terabytes of frequently accessed data. FSx for OpenZFS uses the Adaptive Replacement Cache (ARC) and L2ARC that are built into the OpenZFS file system, which increases the portion of data access served from the in-memory and NVMe caches.

When a client accesses data that's stored in either the in-memory or NVMe caches, the data is served directly to the requesting client as *network I/O*, without the file server needing to read it from disk. When a client accesses data that is not in either of these caches, it is read from disk as *disk I/O* and then served to the client as network I/O; data read from disk is also subject to the IOPS and bandwidth limits of the underlying disks.

FSx for OpenZFS file systems can serve network I/O about three times faster than disk I/O, which means that clients can drive greater throughput and IOPS with lower latencies for frequently accessed data in cache. The following diagram illustrates how data is accessed from an FSx for OpenZFS file system, with the NVMe cache applying to all Single-AZ 2 file systems.

![\[Diagram showing how data is accessed in an FSx for OpenZFS file system.\]](http://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/images/zfs-data-access-nvme.png)


File-based workloads are typically spiky, characterized by short, intense periods of high I/O with plenty of idle time between bursts. To support spiky workloads, in addition to the *baseline* speeds that a file system can sustain 24/7, Amazon FSx provides the capability to *burst* to higher speeds for periods of time for both network I/O and disk I/O operations. Amazon FSx uses a network I/O credit mechanism to allocate throughput and IOPS based on average utilization — file systems accrue credits when their throughput and IOPS usage is below their baseline limits, and can use these credits when they perform I/O operations.

**Topics**
+ [Data access from cache](#data-access-memory-cache)
+ [Data access from disk](#data-access-disk)
+ [SSD IOPS and performance](#ssd-iops-disk-performance)

## Data access from cache
<a name="data-access-memory-cache"></a>

For read access directly from the in-memory ARC or NVMe L2ARC cache, performance is primarily defined by two components: the performance supported by the client-server network I/O connection, and the size of the cache. The following tables show the cached read performance, and amount of memory available for in-memory caching and other activities, for all Single-AZ 1, all Single-AZ 2, and Multi-AZ (HA) file systems, based on provisioned throughput capacity and AWS Region.

**Note**  
Single-AZ 1 (HA) and Single-AZ 2 (HA) file systems are available only in a subset of AWS Regions. For more information on which AWS Regions support Single-AZ 1 (HA) and Single-AZ 2 (HA) file systems, see [Availability by AWS Region](available-aws-regions.md).

### Single-AZ 1 (US West (N. California), Africa (Cape Town), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Middle East (UAE), AWS GovCloud (US-West), AWS GovCloud (US-East))
<a name="cached-read-table-saz1-1"></a>


| Provisioned throughput capacity (MBps) | Memory (GB) | Baseline network throughput (MBps) | Burst network throughput (MBps) | Maximum number of client connections | Maximum network IOPS |
| --- | --- | --- | --- | --- | --- |
| 64 | 8 | 97 | 1,562 | 4,096 | Tens of thousands of IOPS |
| 128 | 16 | 195 | 1,562 | 8,192 |  |
| 256 | 32 | 390 | 1,562 | 16,384 |  |
| 512 | 64 | 781 | 1,562 | 32,768 | Hundreds of thousands of IOPS |
| 1,024 | 128 | 1,562 | – | 32,768 |  |
| 2,048 | 256 | 3,125 | – | 32,768 |  |
| 3,072 | 384 | 4,687 | – | 32,768 |  |
| 4,096 | 512 | 6,250 | – | 32,768 | Up to 1 million IOPS |

### Single-AZ 1 (All other AWS Regions)
<a name="cached-read-table-saz1-2"></a>


| Provisioned throughput capacity (MBps) | Memory (GB) | Baseline network throughput (MBps) | Burst network throughput (MBps) | Maximum number of client connections | Maximum network IOPS |
| --- | --- | --- | --- | --- | --- |
| 64 | 2 | 200 | 3,200 | 4,096 | Tens of thousands of IOPS |
| 128 | 4 | 400 | 3,200 | 8,192 |  |
| 256 | 8 | 800 | 3,200 | 16,384 |  |
| 512 | 16 | 1,600 | 3,200 | 32,768 | Hundreds of thousands of IOPS |
| 1,024 | 32 | 3,200 | – | 32,768 |  |
| 2,048 | 64 | 6,400 | – | 32,768 |  |
| 3,072 | 96 | 9,600 | – | 32,768 |  |
| 4,096 | 128 | 12,800 | – | 32,768 | 1 million IOPS |

### Single-AZ 2 (All AWS Regions)
<a name="cached-read-performace-saz2"></a>


| Provisioned throughput capacity (MBps) | Memory (GB) | NVMe L2ARC cache (GB) | Baseline network throughput (MBps) | Burst network throughput (MBps) | Maximum number of client connections | Maximum network IOPS |
| --- | --- | --- | --- | --- | --- | --- |
| 160 | 8 | 40 | 375 | 3,125 | 8,192 | Tens of thousands of IOPS |
| 320 | 16 | 80 | 775 | 3,750 | 16,384 |  |
| 640 | 32 | 160 | 1,550 | 5,000 | 32,768 | Hundreds of thousands of IOPS |
| 1,280 | 64 | 320 | 3,125 | 6,250 | 32,768 |  |
| 2,560 | 128 | 640 | 6,250 | – | 32,768 |  |
| 3,840 | 192 | 960 | 9,375 | – | 32,768 |  |
| 5,120 | 256 | 1,280 | 12,500 | – | 32,768 | 1 million IOPS |
| 7,680 | 384 | 1,920 | 18,750 | – | 32,768 |  |
| 10,240 | 512 | 2,560 | 21,000 | – | 32,768 |  |

### Multi-AZ (US West (N. California), Africa (Cape Town), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Middle East (UAE), AWS GovCloud (US-West), AWS GovCloud (US-East))
<a name="cached-read-performace-maz-1"></a>


| Provisioned throughput capacity (MBps) | Memory (GB) | Baseline network throughput (MBps) | Burst network throughput (MBps) | Maximum number of client connections | Maximum network IOPS |
| --- | --- | --- | --- | --- | --- |
| 160 | 16 | 195 | 1,562 | 8,192 | Tens of thousands of IOPS |
| 320 | 32 | 390 | 1,562 | 16,384 |  |
| 640 | 64 | 781 | 1,562 | 32,768 | Hundreds of thousands of IOPS |
| 1,280 | 128 | 1,562 | – | 32,768 |  |
| 2,560 | 256 | 3,125 | – | 32,768 |  |
| 3,840 | 384 | 4,687 | – | 32,768 |  |
| 5,120 | 512 | 6,250 | – | 32,768 | Up to 1 million IOPS |

### Multi-AZ (US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore))
<a name="cached-read-performace-maz-2"></a>


| Provisioned throughput capacity (MBps) | Memory (GB) | Baseline network throughput (MBps) | Burst network throughput (MBps) | Maximum number of client connections | Maximum network IOPS |
| --- | --- | --- | --- | --- | --- |
| 160 | 8 | 375 | 3,125 | 8,192 | Tens of thousands of IOPS |
| 320 | 16 | 775 | 3,750 | 16,384 |  |
| 640 | 32 | 1,550 | 5,000 | 32,768 | Hundreds of thousands of IOPS |
| 1,280 | 64 | 3,125 | 6,250 | 32,768 |  |
| 2,560 | 128 | 6,250 | – | 32,768 |  |
| 3,840 | 192 | 9,375 | – | 32,768 |  |
| 5,120 | 256 | 12,500 | – | 32,768 | 1 million IOPS |
| 7,680 | 384 | 18,750 | – | 32,768 |  |
| 10,240 | 512 | 21,000 | – | 32,768 |  |

**Note**  
For Multi-AZ file systems created in Canada (Central) and Asia Pacific (Mumbai) prior to July 9th, 2024, refer to the Multi-AZ (All other AWS Regions) table that follows for performance details.

### Multi-AZ (All other AWS Regions)
<a name="cached-read-performace-maz-3"></a>


| Provisioned throughput capacity (MBps) | Memory (GB) | Baseline network throughput (MBps) | Burst network throughput (MBps) | Maximum number of client connections | Maximum network IOPS |
| --- | --- | --- | --- | --- | --- |
| 160 | 4 | 400 | 3,200 | 8,192 | Tens of thousands of IOPS |
| 320 | 8 | 800 | 3,200 | 16,384 |  |
| 640 | 16 | 1,600 | 3,400 | 32,768 | Hundreds of thousands of IOPS |
| 1,280 | 32 | 3,200 | – | 32,768 |  |
| 2,560 | 64 | 6,400 | – | 32,768 |  |
| 3,840 | 96 | 9,600 | – | 32,768 |  |
| 5,120 | 128 | 12,800 | – | 32,768 | 1 million IOPS |

## Data access from disk
<a name="data-access-disk"></a>

For read and write access from the disks attached to the file server, performance depends on the performance supported by the server’s disk I/O connection. Similar to data accessed from cache, the performance of this connection is determined by the provisioned throughput capacity of the file system, which is equivalent to the baseline throughput capacity of your file server.

### Single-AZ 1 (All AWS Regions)
<a name="disk-performance-table-saz1"></a>


| Provisioned throughput capacity (MBps) | Baseline disk throughput (MBps) | Burst disk throughput (MBps) | Baseline disk IOPS | Burst disk IOPS |
| --- | --- | --- | --- | --- |
| 64 | 64 | 1,024 | 2,500 | 40,000 |
| 128 | 128 | 1,024 | 5,000 | 40,000 |
| 256 | 256 | 1,024 | 10,000 | 40,000 |
| 512 | 512 | 1,024 | 20,000 | 40,000 |
| 1,024 | 1,024 | – | 40,000 | – |
| 2,048 | 2,048 | – | 80,000 | – |
| 3,072 | 3,072 | – | 120,000 | – |
| 4,096 | 4,096 | – | 160,000 | – |

### Single-AZ 2 (All AWS Regions)
<a name="disk-performance-table-saz2"></a>


| Provisioned throughput capacity (MBps) | Baseline disk throughput (MBps) | Burst disk throughput (MBps) | Baseline disk IOPS | Burst disk IOPS |
| --- | --- | --- | --- | --- |
| 160 | 160 | 3,125 | 6,250 | 100,000 |
| 320 | 320 | 3,125 | 12,500 | 100,000 |
| 640 | 640 | 3,125 | 25,000 | 100,000 |
| 1,280 | 1,280 | 3,125 | 50,000 | 100,000 |
| 2,560 | 2,560 | – | 100,000 | – |
| 3,840 | 3,840 | – | 150,000 | – |
| 5,120 | 5,120 | – | 200,000 | – |
| 7,680 | 7,680 | – | 300,000 | – |
| 10,240 | 10,240¹ | – | 400,000 | – |

### Multi-AZ (US West (N. California), Africa (Cape Town), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Middle East (UAE), AWS GovCloud (US-West), AWS GovCloud (US-East))
<a name="disk-performance-table-maz-1"></a>


| Provisioned throughput capacity (MBps) | Baseline disk throughput (MBps)¹ | Burst disk throughput (MBps)¹ | Baseline disk IOPS | Burst disk IOPS |
| --- | --- | --- | --- | --- |
| 160 | 160 | 1,250 | 6,000 | 40,000 |
| 320 | 320 | 1,250 | 12,000 | 40,000 |
| 640 | 640 | 1,250 | 20,000 | 40,000 |
| 1,280 | 1,280 | – | 40,000 | – |
| 2,560 | 2,560 | – | 80,000 | – |
| 3,840 | 3,840 | – | 120,000 | – |
| 5,120 | 5,120 | – | 160,000 | – |

**Note**  
¹Deployment hardware differences in these AWS Regions may cause disk throughput capacity to vary by up to 5% from the values shown in this table.

### Multi-AZ (US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore))
<a name="disk-performance-table-maz-2"></a>


| Provisioned throughput capacity (MBps) | Baseline disk throughput (MBps) | Burst disk throughput (MBps) | Baseline disk IOPS | Burst disk IOPS |
| --- | --- | --- | --- | --- |
| 160 | 160 | 3,125 | 6,250 | 100,000 |
| 320 | 320 | 3,125 | 12,500 | 100,000 |
| 640 | 640 | 3,125 | 25,000 | 100,000 |
| 1,280 | 1,280 | 3,125 | 50,000 | 100,000 |
| 2,560 | 2,560 | – | 100,000 | – |
| 3,840 | 3,840 | – | 150,000 | – |
| 5,120 | 5,120 | – | 200,000 | – |
| 7,680 | 7,680 | – | 300,000 | – |
| 10,240 | 10,240¹ | – | 400,000 | – |

**Note**  
¹If you have a Multi-AZ (HA) file system with a throughput capacity of 10,240 MBps, performance will be limited to 7,500 MBps for write traffic only. Otherwise, for read traffic on all Multi-AZ (HA) file systems, read and write traffic on all Single-AZ file systems, and all other throughput capacity levels, your file system will support the performance limits shown in the table.

**Note**  
For Multi-AZ file systems created in Canada (Central) and Asia Pacific (Mumbai) prior to July 9th, 2024, refer to the Multi-AZ (All other AWS Regions) table that follows for performance details.

### Multi-AZ (All other AWS Regions)
<a name="disk-performance-table-maz-3"></a>


| Provisioned throughput capacity (MBps) | Baseline disk throughput (MBps)¹ | Burst disk throughput (MBps)¹ | Baseline disk IOPS | Burst disk IOPS |
| --- | --- | --- | --- | --- |
| 160 | 160 | 1,187 | 5,000 | 40,000 |
| 320 | 320 | 1,187 | 10,000 | 40,000 |
| 640 | 640 | 1,187 | 20,000 | 40,000 |
| 1,280 | 1,280 | – | 40,000 | – |
| 2,560 | 2,560 | – | 80,000 | – |
| 3,840 | 3,840 | – | 120,000 | – |
| 5,120 | 5,120 | – | 160,000 | – |

**Note**  
¹Deployment hardware differences in these AWS Regions may cause disk throughput capacity to vary by up to 5% from the values shown in this table.

The previous tables show your file system’s throughput capacity for uncompressed data. However, because data compression reduces the amount of data that needs to be transferred as disk I/O, you can often deliver higher levels of throughput for compressed data. For example, if your data is compressed to be 50% smaller (that is, a compression ratio of 2), then you can drive up to 2x the throughput that you could if the data were uncompressed. For more information, see [Data compression](performance.md#perf-data-compression).

## SSD IOPS and performance
<a name="ssd-iops-disk-performance"></a>

Data accessed from disk is also subject to the performance of the underlying disks, which is determined by the number of provisioned SSD IOPS configured on the file system. The maximum IOPS level you can achieve is defined by the lower of the maximum IOPS supported by your file server’s disk I/O connection and the maximum SSD disk IOPS supported by your disks. To drive the maximum performance supported by the server-disk connection, configure your file system’s provisioned SSD IOPS to match the maximum disk IOPS shown in the preceding tables.

If you select `Automatic` provisioned SSD IOPS, Amazon FSx provisions 3 IOPS per GB of storage capacity, up to the maximum for your file system, which is the highest IOPS level supported by the disk I/O connection documented above. For example, a file system with 2,048 GB of SSD storage in `Automatic` mode receives 3 × 2,048 = 6,144 provisioned SSD IOPS. If you select `User-provisioned`, you can configure any level of SSD IOPS from the minimum of 3 IOPS per GB of storage up to the maximum for your file system, as long as you don't exceed 1,000 IOPS per GiB¹.

**Note**  
¹File systems in the following AWS Regions have a maximum IOPS-to-storage ratio of 50 IOPS per GiB: Africa (Cape Town), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Middle East (UAE), Middle East (Bahrain), Asia Pacific (Osaka), Europe (Milan), Europe (Paris), South America (São Paulo), Israel (Tel Aviv), Asia Pacific (Hong Kong), Asia Pacific (Seoul), Asia Pacific (Mumbai), Canada (Central), Europe (Stockholm), and Europe (London).
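
For example, the following AWS CLI sketch switches a file system to user-provisioned SSD IOPS; the file system ID and IOPS value are placeholders:

```
aws fsx update-file-system \
    --file-system-id fs-01234567890abcdef1 \
    --open-zfs-configuration 'DiskIopsConfiguration={Mode=USER_PROVISIONED,Iops=10000}'
```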

The following graph illustrates the maximum IOPS for Single-AZ 1 (non-HA and HA), Single-AZ 2 (non-HA and HA), and Multi-AZ (HA) depending on storage capacity.

![\[Chart showing provisioned IOPS.\]](http://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/images/updated-ssdiops-performance-graph.png)


# How FSx for OpenZFS file systems work with Intelligent-Tiering
<a name="performance-intelligent-tiering"></a>

The FSx for OpenZFS Intelligent-Tiering storage class offers elastic, low-cost storage for workloads that traditionally run on Network Attached Storage (NAS) file systems. File systems using the Intelligent-Tiering storage class consist of a high-availability pair of file servers that clients communicate with, each with an in-memory cache that enhances performance for frequently accessed data. Instead of a set of volumes attached to each file server, file systems using the Intelligent-Tiering storage class use fully elastic, intelligently tiered, regional block storage that automatically grows and shrinks as your workload changes. These file systems also provide an optional provisioned SSD read cache for low-latency access to frequently accessed data and a built-in SSD-backed write log for low-latency, durable writes. Because most workloads tend to be read-heavy and actively work with only a small subset of the overall dataset at any given time, this hybrid model of Intelligent-Tiering storage and an SSD read cache provides storage that performs at levels comparable to SSD (provisioned) file systems for most workloads, while delivering up to 85% cost savings relative to the SSD (provisioned) storage class.

When reading and writing data to an Intelligent-Tiering file system, especially data that hasn't been accessed recently or frequently enough to be in the file server's in-memory cache, performance depends on the size of the SSD read cache. Data access from Intelligent-Tiering storage has time-to-first-byte latencies of roughly tens of milliseconds as well as per-request costs, while accesses from the SSD read cache return with sub-millisecond latency and no per-request costs. 

You can configure your SSD read cache with one of three sizing mode options: Automatic, Custom, or None. With Automatic, Amazon FSx automatically selects an SSD read cache size based on provisioned throughput. With Custom, you can customize the size of your SSD read cache and scale it up or down at any time based on your workload's needs. When configuring the size of the SSD read cache for your file system, you should consider both the size of your frequently accessed dataset within the workload and the workload's sensitivity to higher latency for reads of less-frequently accessed data. Choose None if you do not want to use an SSD read cache with your file system. You can switch between SSD read cache sizing modes after your file system has been created. For more information on how to modify your SSD read cache, see [Modifying provisioned SSD read cache](managing-ssd-read-cache.md).
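
For example, the following AWS CLI sketch sets a custom (user-provisioned) SSD read cache size, assuming a CLI version that supports the Intelligent-Tiering read cache options; the file system ID and cache size are placeholders:

```
aws fsx update-file-system \
    --file-system-id fs-01234567890abcdef1 \
    --open-zfs-configuration 'ReadCacheConfiguration={SizingMode=USER_PROVISIONED,SizeGiB=512}'
```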

To help determine whether your SSD read cache is sized appropriately, Amazon FSx publishes a cache hit ratio metric, which reports the percentage of reads that are served from the cache. You can view the cache hit ratio metric in the Amazon FSx console on your file system's **Monitoring** tab or with CloudWatch. For most workloads, a cache hit ratio of 80% or more provides the right balance of performance and cost optimization. For more information on how to use CloudWatch to monitor your SSD read cache, see [Using Amazon FSx for OpenZFS CloudWatch metrics](how_to_use_metrics.md). 

With the Intelligent-Tiering storage class, you do not need to provision IOPS. Instead, you pay for data access based on usage, in the form of read and write requests. A request is a single read or write operation between the file system and the Intelligent-Tiering storage. FSx for OpenZFS automatically optimizes how file system reads and writes are converted into requests in order to improve performance and reduce costs.

A write request occurs when FSx for OpenZFS writes a block of data to Intelligent-Tiering storage. When you write data to the file system, FSx for OpenZFS immediately sends the data to the built-in SSD-based write log. Write requests are then aggregated and written to Intelligent-Tiering storage from the SSD write log, increasing throughput and lowering request costs. Reads can be served from the file server’s in-memory cache, SSD read cache, or directly from Intelligent-Tiering storage. When a read is served from Intelligent-Tiering storage, a read request occurs for each block of retrieved data. When you read data sequentially, FSx for OpenZFS will prefetch data to improve performance.

Data from the in-memory cache on file systems using the Intelligent-Tiering storage class is served directly to the requesting client as *network I/O*. When a client accesses data that is not in the in-memory cache, it is read from either the SSD read cache or Intelligent-Tiering storage as *disk I/O* and then served to the client as network I/O. The following diagram illustrates how data is accessed from an Intelligent-Tiering file system.

![\[Diagram showing how data is accessed in an FSx for OpenZFS file system with Intelligent-Tiering.\]](http://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/images/intelligent-tiering-arch.png)


**Topics**
+ [Data access from in-memory cache](#data-access-intelligent-tiering)
+ [Data access from SSD cache and Intelligent-Tiering](#data-access-intelligent-tiering-disk)
+ [Performance considerations](#intelligent-tiering-performance-considerations)

## Data access from in-memory cache
<a name="data-access-intelligent-tiering"></a>

For read access directly from the in-memory ARC cache, performance is primarily defined by two components: the performance supported by the client-server network I/O connection, and the size of the cache. The following table shows the cached read performance, and amount of memory available for in-memory caching and other activities, for file systems using the Intelligent-Tiering storage class, based on provisioned throughput capacity.

### Multi-AZ (US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Mumbai), Canada (Central), Europe (Frankfurt), Europe (Ireland))
<a name="cache-performance-table-intelligent-tiering"></a>


| Provisioned throughput capacity (MBps) | Memory (GB) | Baseline network throughput (MBps) | Burst network throughput (MBps) | Maximum number of client connections | Maximum network IOPS |
| --- | --- | --- | --- | --- | --- |
| 160 | 8 | 375 | 3,125 | 8,192 | Tens of thousands of IOPS |
| 320 | 16 | 775 | 3,750 | 16,384 |  |
| 640 | 32 | 1,550 | 5,000 | 32,768 | Hundreds of thousands of IOPS |
| 1,280 | 64 | 3,125 | 6,250 | 32,768 |  |
| 2,560 | 128 | 6,250 | – | 32,768 |  |
| 3,840 | 192 | 9,375 | – | 32,768 |  |
| 5,120 | 256 | 12,500 | – | 32,768 | 1 million IOPS |
| 7,680 | 384 | 18,750 | – | 32,768 |  |
| 10,240 | 512 | 21,000 | – | 32,768 |  |

## Data access from SSD cache and Intelligent-Tiering
<a name="data-access-intelligent-tiering-disk"></a>

For read and write access from the SSD read cache and Intelligent-Tiering, performance depends on the performance supported by the server’s disk I/O connection. Similar to data accessed from cache, the performance of this connection is determined by the provisioned throughput capacity of the file system, which is equivalent to the baseline throughput capacity of your file server.

### Multi-AZ (US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Mumbai), Canada (Central), Europe (Frankfurt), Europe (Ireland))
<a name="disk-performance-table-intelligent-tiering"></a>


| Provisioned throughput capacity (MBps) | Maximum disk throughput capacity (MBps)¹ | Baseline disk IOPS | Burst disk IOPS |
| --- | --- | --- | --- |
| 160 | 160 | 6,250 | 100,000 |
| 320 | 320 | 12,500 | 100,000 |
| 640 | 640 | 25,000 | 100,000 |
| 1,280 | 1,280 | 50,000 | 100,000 |
| 2,560 | 2,560 | 100,000 | – |
| 3,840 | 3,840 | 150,000 | – |
| 5,120 | 5,120 | 200,000 | – |
| 7,680 | 7,680 | 300,000 | – |
| 10,240 | 10,240 | 400,000 | – |

**Note**  
¹The maximum disk throughput values above refer to read throughput. Maximum write throughput is around 75% of the maximum read throughput for each throughput capacity level up to 3,840 MBps. For throughput capacity levels above 3,840 MBps, the maximum write throughput is 3,200 MBps.

The previous tables show your file system’s throughput capacity for uncompressed data. However, because data compression reduces the amount of data that needs to be transferred as disk I/O, you can often deliver higher levels of throughput for compressed data. For example, if your data is compressed to be 50% smaller (that is, a compression ratio of 2), then you can drive up to 2x the throughput that you could if the data were uncompressed. For more information, see [Data compression](performance.md#perf-data-compression).

## Performance considerations
<a name="intelligent-tiering-performance-considerations"></a>

Here are a few important performance considerations when working with file systems using the Intelligent-Tiering storage class:
+ The OpenZFS access time (`atime`) file property indicates when a file was last read. For file systems using the FSx for OpenZFS Intelligent-Tiering storage class, the `atime` property is not updated during file read operations.
+ Due to the higher latency and per-request costs of data access from Intelligent-Tiering storage, you should configure your workloads and file systems with I/O size in mind. Workloads reading data with smaller I/O sizes will require higher concurrency and incur more request costs to achieve the same throughput on data not in the cache as workloads using larger I/O sizes.
+ For workloads with working sets larger than the size of the SSD read cache, consider maximizing application I/O size and FSx volume record size. Because writes to the file system are stored on either the in-memory cache (for asynchronous writes) or the SSD write log (for synchronous writes) before being acknowledged to the client, this is less of a consideration for write I/O.
+ You may be more likely to see higher latencies after a maintenance event. This is due to the in-memory cache being erased during maintenance windows for file systems using the Intelligent-Tiering storage class. For more information, see [Modifying file system maintenance windows](maintenance-windows.md).
+ The maximum disk IOPS your clients can drive with an Intelligent-Tiering file system depends on the specific access patterns of your workload and whether you have provisioned an SSD read cache. For workloads with random access, clients can typically drive much higher IOPS if the data is cached in the SSD read cache than if the data is not in the cache.
+ Random read requests from Intelligent-Tiering storage have higher latencies than sequential read requests because the data is served just-in-time instead of ahead of time through prefetching. We recommend structuring your data access patterns sequentially when possible to allow for prefetching and higher performance.
+ While you can select a throughput capacity as low as 160 megabytes per second (MBps), we recommend using higher throughput capacity levels for workloads that are write-heavy.