

# Monitoring with Amazon CloudWatch
<a name="monitoring-cloudwatch"></a>

You can monitor file systems using Amazon CloudWatch, which collects and processes raw data from Amazon FSx for NetApp ONTAP into readable, near real-time metrics. These statistics are retained for a period of 15 months, so that you can access historical information to determine how your file system is performing. FSx for ONTAP metric data is automatically sent to CloudWatch at 1-minute periods by default. For more information about CloudWatch, see [What is Amazon CloudWatch?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) in the *Amazon CloudWatch User Guide*.

**Note**  
By default, FSx for ONTAP sends metric data to CloudWatch at 1-minute intervals, except for the following metrics, which are sent at 5-minute intervals:
+ `FileServerDiskThroughputBalance`
+ `FileServerDiskIopsBalance`

CloudWatch metrics for FSx for ONTAP are organized into the following categories, which are defined by the dimensions that are used to query each metric. For more information about dimensions, see [Dimensions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#Dimension) in the *Amazon CloudWatch User Guide*.
+ **File system metrics**: File-system-level performance and storage capacity metrics.
+ **File server metrics**: File-server-level metrics.
+ **Detailed file system aggregate metrics**: Detailed file system metrics per aggregate.
+ **Detailed file system metrics**: File-system-level storage metrics per storage tier (SSD and capacity pool).
+ **Volume metrics**: Per-volume performance and storage capacity metrics.
+ **Detailed volume metrics**: Per-volume storage capacity metrics by storage tier or by the type of data (user, snapshot, or other).

All CloudWatch metrics for FSx for ONTAP are published to the `AWS/FSx` namespace in CloudWatch. 

**Topics**
+ [Accessing CloudWatch metrics](accessingmetrics.md)
+ [Monitoring in the Amazon FSx console](monitor-throughput-cloudwatch.md)
+ [File system metrics](file-system-metrics.md)
+ [Second-generation file system metrics](so-file-system-metrics.md)
+ [Volume metrics](volume-metrics.md)

# Accessing CloudWatch metrics
<a name="accessingmetrics"></a>

You can see Amazon CloudWatch metrics for Amazon FSx in the following ways:
+ The Amazon FSx console
+ The Amazon CloudWatch console
+ The AWS Command Line Interface (AWS CLI) for CloudWatch
+ The CloudWatch API

The following procedure explains how to view your file system's CloudWatch metrics with the Amazon FSx console. 

**To view CloudWatch metrics for your file system using the Amazon FSx console**

1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. In the left navigation pane, choose **File systems**, then choose the file system whose metrics you want to view.

1. On the **Summary** page, choose **Monitoring & performance** from the second panel to view graphs for your file system's metrics. 

There are four tabs on the **Monitoring & performance** panel. 
+ Choose **Summary** (the default tab) to display any active warnings, CloudWatch alarms, and graphs for **File system activity**. 
+ Choose **Storage** to view storage capacity and utilization metrics. 
+ Choose **Performance** to view file server and storage performance metrics. 
+ Choose **CloudWatch alarms** to view graphs of any alarms configured for your file system. 

The following procedure explains how to view your volume's CloudWatch metrics with the Amazon FSx console.

**To view CloudWatch metrics for your volume using the Amazon FSx console**

1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. In the left navigation pane, choose **Volumes**, then choose the volume whose metrics you want to view.

1. On the **Summary** page, choose **Monitoring** (the default tab) from the second panel to view graphs for your volume's metrics. 

The following procedure explains how to view your file system's CloudWatch metrics with the Amazon CloudWatch console. 

**To view metrics using the Amazon CloudWatch console**

1. On the **Summary** page of your file system in the Amazon FSx console, choose **Monitoring & performance** from the second panel to view graphs for your file system's metrics. 

1. Choose **View in metrics** from the actions menu in the upper right of the graph that you want to view in the Amazon CloudWatch console. This opens the **Metrics** page in the Amazon CloudWatch console. 

The following procedure explains how to add FSx for ONTAP file system metrics to a dashboard in the Amazon CloudWatch console. 

**To add metrics to an Amazon CloudWatch dashboard**

1. Choose the set of metrics (**Summary**, **Storage**, or **Performance**) in the **Monitoring & performance** panel of the Amazon FSx console. 

1. Choose **Add to dashboard** in the upper-right corner of the panel. This opens the Amazon CloudWatch console. 

1. Select an existing CloudWatch dashboard from the list, or create a new dashboard. For more information, see [Using Amazon CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html) in the *Amazon CloudWatch User Guide*. 

The following procedure explains how to access your file system's metrics with the AWS CLI. 

**To access metrics from the AWS CLI**
+ Use the CloudWatch [list-metrics](https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/list-metrics.html) CLI command with the `--namespace "AWS/FSx"` parameter. For more information, see the [AWS CLI Command Reference](https://docs.aws.amazon.com/cli/latest/reference/).

The following procedure explains how to access your file system's metrics with the CloudWatch API. 

**To access metrics from the CloudWatch API**
+ Call the [GetMetricStatistics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricStatistics.html) API operation. For more information, see the [Amazon CloudWatch API Reference](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/). 
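As a minimal sketch, the request parameters for a `GetMetricStatistics` call against the `AWS/FSx` namespace might look like the following. The file system ID is a hypothetical placeholder, and the `boto3` call is shown only as a comment because it requires AWS credentials:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical file system ID, for illustration only.
file_system_id = "fs-0123456789abcdef0"

end = datetime.now(timezone.utc)
params = {
    "Namespace": "AWS/FSx",
    "MetricName": "DataReadBytes",
    "Dimensions": [{"Name": "FileSystemId", "Value": file_system_id}],
    "StartTime": end - timedelta(hours=1),
    "EndTime": end,
    "Period": 60,             # seconds; matches the default 1-minute interval
    "Statistics": ["Sum"],
}

# With boto3 installed and credentials configured, the call would be:
#   import boto3
#   response = boto3.client("cloudwatch").get_metric_statistics(**params)
print(params["Namespace"], params["MetricName"])
```

The same parameter names map directly onto the `aws cloudwatch get-metric-statistics` CLI options.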

# Monitoring in the Amazon FSx console
<a name="monitor-throughput-cloudwatch"></a>

The CloudWatch metrics reported by Amazon FSx provide valuable information about your FSx for ONTAP file systems and volumes. 

**Topics**
+ [Monitoring file system metrics in the Amazon FSx console](#fsxn-howtomonitor-fs)
+ [Monitoring volume metrics in the Amazon FSx console](#fsxn-howtomonitor-vol)
+ [Performance warnings and recommendations](performance-insights-FSxN.md)
+ [Creating Amazon CloudWatch alarms to monitor Amazon FSx](creating_alarms.md)

## Monitoring file system metrics in the Amazon FSx console
<a name="fsxn-howtomonitor-fs"></a>

You can use the **Monitoring & performance** panel on your file system's dashboard in the Amazon FSx console to view the metrics that are described in the following table. For more information, see [Accessing CloudWatch metrics](accessingmetrics.md). 

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/fsx/latest/ONTAPGuide/monitor-throughput-cloudwatch.html)

**Note**  
We recommend that you keep the average utilization of performance-related dimensions, such as network utilization, CPU utilization, and SSD IOPS utilization, under 50%. This ensures that you have enough spare throughput capacity for unexpected spikes in your workload, as well as for any background storage operations (such as storage synchronization, data tiering, or backups).

## Monitoring volume metrics in the Amazon FSx console
<a name="fsxn-howtomonitor-vol"></a>

You can view the **Monitoring** panel on your volume's dashboard in the Amazon FSx console to see additional performance metrics. For more information, see [Accessing CloudWatch metrics](accessingmetrics.md). 

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/fsx/latest/ONTAPGuide/monitor-throughput-cloudwatch.html)

# Performance warnings and recommendations
<a name="performance-insights-FSxN"></a>

FSx for ONTAP displays a warning for a CloudWatch metric whenever that metric has approached or crossed a predetermined threshold for multiple consecutive data points. These warnings provide you with actionable recommendations that you can use to optimize your file system's performance.

Warnings are accessible in several areas of the **Monitoring & performance** dashboard. All active or recent Amazon FSx performance warnings and any CloudWatch alarms configured for the file system that are in an ALARM state appear in the **Monitoring & performance** panel in the **Summary** section. The warning also appears in the section of the dashboard where the metric graph is displayed.

You can create CloudWatch alarms for any of the Amazon FSx metrics. For more information, see [Creating Amazon CloudWatch alarms to monitor Amazon FSx](creating_alarms.md).

## Use performance warnings to improve file system performance
<a name="resolve-warnings"></a>

Amazon FSx provides actionable recommendations that you can use to optimize your file system's performance. These recommendations describe how you can address a potential performance bottleneck. You can take the recommended action if you expect the activity to continue, or if it's impacting your file system's performance. Depending on which metric has triggered a warning, you can resolve it by increasing either the file system's throughput capacity or storage capacity, as described in the following table.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/fsx/latest/ONTAPGuide/performance-insights-FSxN.html)

**Note**  
During an SSD decrease operation, write-heavy workloads could experience a temporary performance degradation as the operation consumes disk and network resources. To minimize performance impact, maintain adequate headroom by ensuring ongoing workloads don't consistently consume more than 50% CPU, 50% disk throughput, or 50% SSD IOPS before initiating an SSD decrease operation.  
Brief I/O pauses of up to 60 seconds might occur for each volume as client access is redirected to the new set of disks. These pauses are expected and normal during the cutover phase of the operation.

For more information about file system performance, see [Amazon FSx for NetApp ONTAP performance](performance.md).

# Creating Amazon CloudWatch alarms to monitor Amazon FSx
<a name="creating_alarms"></a>

You can create a CloudWatch alarm that sends an Amazon Simple Notification Service (Amazon SNS) message when the alarm changes state. An alarm watches a single metric over a time period that you specify. If needed, the alarm then performs one or more actions based on the value of the metric relative to a given threshold over a number of time periods. The action can be a notification sent to an Amazon SNS topic or an Auto Scaling policy.

Alarms invoke actions for sustained state changes only. CloudWatch alarms don't invoke actions simply because they are in a particular state; the state must have changed and been maintained for a specified number of periods. You can create an alarm from the Amazon FSx console or the Amazon CloudWatch console.

The following procedures describe how to create alarms using the Amazon FSx console, AWS Command Line Interface (AWS CLI), and API.

**To set alarms using the Amazon FSx console**

1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. In the left navigation pane, choose **File systems**, and then choose the file system that you want to create the alarm for.

1. On the **Summary** page, choose **Monitoring & performance** from the second panel. 

1. Choose the **CloudWatch alarms** tab. 

1. Choose **Create CloudWatch alarm**. You are redirected to the CloudWatch console.

1. Choose **Select metric**.

1. In the **Metrics** section, choose **FSx**.

1. Choose a metric category:
   + **File System Metrics**
   + **Detailed File System Metrics**
   + **Volume Metrics**
   + **Detailed Volume Metrics**

1. Choose the metric you want to set the alarm for, and then choose **Select metric**.

1. In the **Conditions** section, choose the conditions you want for the alarm, and then choose **Next**.
**Note**  
Metrics might not be published during file system maintenance. To prevent unnecessary and misleading alarm condition changes and to configure your alarms so that they are resilient to missing data points, see [Configuring how CloudWatch alarms treat missing data](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html#alarms-and-missing-data) in the *Amazon CloudWatch User Guide*.

1. If you want CloudWatch to send you an email or Amazon SNS notification when the alarm enters a given state, choose that state for **Alarm state trigger**. 

   For **Send a notification to the following SNS topic**, choose an option. If you choose **Create topic**, you can set the name and email addresses for a new email subscription list. This list is saved and appears in the field for future alarms. Choose **Next**.
**Note**  
If you use **Create topic** to create a new Amazon SNS topic, the email addresses must be verified before they receive notifications. Emails are sent only when the alarm enters an alarm state. If this alarm state change happens before the email addresses are verified, they don't receive a notification.

1. Fill in the **Alarm name** and **Alarm description** fields, and then choose **Next**. 

1. On the **Preview and create** page, review the alarm that you're about to create, and then choose **Create alarm**. 

**To set alarms using the CloudWatch console**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. Choose **Create Alarm** to start the **Create Alarm Wizard**. 

1. Follow the procedure in **To set alarms using the Amazon FSx console**, beginning with step 6. 

**To set an alarm using the AWS CLI**
+ Call the [put-metric-alarm](https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/put-metric-alarm.html) CLI command. For more information, see the [AWS CLI Command Reference](https://docs.aws.amazon.com/cli/latest/reference/). 

**To set an alarm using the CloudWatch API**
+ Call the [PutMetricAlarm](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_PutMetricAlarm.html) API operation. For more information, see the [Amazon CloudWatch API Reference](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/). 
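As a minimal sketch, the parameters for a `PutMetricAlarm` call on a detailed file system metric might look like the following. The alarm name, file system ID, threshold, and evaluation settings are illustrative assumptions; the dimension names match the `StorageCapacityUtilization` metric described later in this guide:

```python
# Hypothetical alarm definition, for illustration only.
alarm = {
    "AlarmName": "fsx-ssd-utilization-high",
    "Namespace": "AWS/FSx",
    "MetricName": "StorageCapacityUtilization",
    "Dimensions": [
        {"Name": "FileSystemId", "Value": "fs-0123456789abcdef0"},
        {"Name": "StorageTier", "Value": "SSD"},
        {"Name": "DataType", "Value": "All"},
    ],
    "Statistic": "Average",
    "Period": 300,                  # evaluate in 5-minute windows
    "EvaluationPeriods": 3,         # breach must be sustained for 3 periods
    "Threshold": 80.0,              # percent
    "ComparisonOperator": "GreaterThanThreshold",
    "TreatMissingData": "notBreaching",  # resilient to maintenance gaps
}

# With boto3 installed and credentials configured:
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(**alarm)
print(alarm["AlarmName"])
```

Setting `TreatMissingData` to `notBreaching` follows the guidance above about metrics not being published during file system maintenance.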

# File system metrics
<a name="file-system-metrics"></a>

Your Amazon FSx for NetApp ONTAP file system metrics are classified as either **File system metrics** or **Detailed file system metrics**.
+ **File system metrics** are aggregate performance and storage metrics for a single file system that take a single dimension, `FileSystemId`. These metrics measure network performance and storage capacity usage for your file system.
+ **Detailed file system metrics** measure your file system's storage capacity and used storage in each storage tier (for example, SSD storage and capacity pool storage). Each metric includes a `FileSystemId`, `StorageTier`, and `DataType` dimension.

Note the following about when Amazon FSx publishes data points for these metrics to CloudWatch:
+ For the utilization metrics (any metric whose name ends in *Utilization*, such as `NetworkThroughputUtilization`), there is a data point emitted each period for every active file server or aggregate. For example, Amazon FSx emits one data point per minute for each active file server for `FileServerDiskIopsUtilization`, and one data point per minute for each aggregate for `DiskIopsUtilization`.
+ For all other metrics, there is a single data point emitted each period, corresponding to the total value of the metric across all of your active file servers (such as `DataReadBytes` for file server metrics) or all of your aggregates (such as `DiskReadBytes` for storage metrics).

**Topics**
+ [Network I/O metrics](#fsxn-network-IO-metrics)
+ [File server metrics](#fsxn-file-server-metrics)
+ [Disk I/O metrics](#fsxn-disk-IO-metrics)
+ [Storage capacity metrics](#fsxn-storage-volume-metrics)
+ [Detailed file system metrics](#detailed-fs-metrics)

## Network I/O metrics
<a name="fsxn-network-IO-metrics"></a>

All of these metrics take one dimension, `FileSystemId`.


| Metric | Description | 
| --- | --- | 
| NetworkThroughputUtilization |  The percent utilization of network throughput for the file system. Note that this metric reflects whichever direction (inbound or outbound) has the higher traffic flow. To see individual metrics for each direction, see the NetworkReceivedBytes and NetworkSentBytes metrics.  The `Average` statistic is the average network throughput utilization of the file system over a specified period.  The `Minimum` statistic is the lowest network throughput utilization of the file system over a specified period.  The `Maximum` statistic is the highest network throughput utilization of the file system over a specified period.  Units: Percent  Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
| NetworkSentBytes |  The number of bytes (network I/O) sent by the file system.  The `Sum` statistic is the total number of bytes sent by the file system over a specified period.  To calculate sent throughput (bytes per second) for any statistic, divide the statistic by the seconds in the specified period.  Units: Bytes  Valid statistics: `Sum`  | 
| NetworkReceivedBytes |  The number of bytes (network I/O) received by the file system.  The `Sum` statistic is the total number of bytes received by the file system over a specified period.  To calculate received throughput (bytes per second) for any statistic, divide the statistic by the seconds in the specified period.  Units: Bytes  Valid statistics: `Sum`  | 
| DataReadBytes |  The number of bytes (network I/O) from reads by clients to the file system. The `Sum` statistic is the total number of bytes associated with read operations during the specified period. To calculate the average throughput (bytes per second) for a period, divide the `Sum` statistic by the number of seconds in the specified period. Units: Bytes Valid statistics: `Sum`  | 
| DataWriteBytes |  The number of bytes (network I/O) from writes by clients to the file system. The `Sum` statistic is the total number of bytes associated with write operations during the specified period. To calculate the average throughput (bytes per second) for a period, divide the `Sum` statistic by the number of seconds in the specified period. Units: Bytes Valid statistics: `Sum`  | 
| DataReadOperations |  The count of read operations (network I/O) from reads by clients to the file system. The `Sum` statistic is the total number of I/O operations that occurred over a specified period. To calculate the average read operations per second for a period, divide the `Sum` statistic by the number of seconds in the specified period. Units: Count Valid statistics: `Sum`  | 
| DataWriteOperations |  The count of write operations (network I/O) from writes by clients to the file system. The `Sum` statistic is the total number of I/O operations that occurred over a specified period. To calculate the average write operations per second for a period, divide the `Sum` statistic by the number of seconds in the specified period. Units: Count Valid statistics: `Sum`  | 
| MetadataOperations |  The count of metadata operations (network I/O) by clients to the file system. The `Sum` statistic is the total number of I/O operations that occurred over a specified period. To calculate the average metadata operations per second for a period, divide the `Sum` statistic by the number of seconds in the specified period. Units: Count Valid statistics: `Sum`  | 
| DataReadOperationTime |  The sum of total time spent within the file system for read operations (network I/O) from clients accessing data in the file system. The `Sum` statistic is the total number of seconds spent by read operations during the specified period. To calculate the average read latency for a period, divide the `Sum` statistic by the `Sum` of the `DataReadOperations` metric over the same period. Units: Seconds Valid statistics: `Sum`  | 
| DataWriteOperationTime |  The sum of total time spent within the file system for fulfilling write operations (network I/O) from clients accessing data in the file system. The `Sum` statistic is the total number of seconds spent by write operations during the specified period. To calculate the average write latency for a period, divide the `Sum` statistic by the `Sum` of the `DataWriteOperations` metric over the same period. Units: Seconds Valid statistics: `Sum`  | 
| CapacityPoolReadBytes | The number of bytes read (network I/O) from the file system's capacity pool tier. To ensure data integrity, ONTAP performs a read operation on the capacity pool immediately after performing a write operation.  The `Sum` statistic is the total number of bytes read from the file system's capacity pool tier over a specified period. To calculate capacity pool bytes per second, divide the `Sum` statistic by the seconds in a specified period. Units: Bytes Valid statistics: `Sum` | 
| CapacityPoolReadOperations |  The number of read operations (network I/O) from the file system's capacity pool tier. This translates to a capacity pool read request.  To ensure data integrity, ONTAP performs a read operation on the capacity pool immediately after performing a write operation.  The `Sum` statistic is the total number of read operations from the file system's capacity pool tier over a specified period. To calculate capacity pool requests per second, divide the `Sum` statistic by the seconds in a specified period.  Units: Count Valid statistics: `Sum`  | 
| CapacityPoolWriteBytes | The number of bytes written (network I/O) to the file system's capacity pool tier. To ensure data integrity, ONTAP performs a read operation on the capacity pool immediately after performing a write operation.  The `Sum` statistic is the total number of bytes written to the file system's capacity pool tier over a specified period. To calculate capacity pool bytes per second, divide the `Sum` statistic by the seconds in a specified period. Units: Bytes Valid statistics: `Sum` | 
| CapacityPoolWriteOperations |  The number of write operations (network I/O) to the file system from the capacity pool tier. This translates to a write request.  To ensure data integrity, ONTAP performs a read operation on the capacity pool immediately after performing a write operation.  The `Sum` statistic is the total number of write operations to the file system's capacity pool tier over a specified period. To calculate capacity pool requests per second, divide the `Sum` statistic by the seconds in a specified period.  Units: Count Valid statistics: `Sum`  | 
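The derivations in the table above can be sketched with hypothetical 1-minute `Sum` statistics. The numbers below are made up for illustration; only the formulas (Sum divided by seconds in the period for throughput, operation time divided by operation count for latency) come from the metric descriptions:

```python
# Hypothetical Sum statistics over one 1-minute period.
period_seconds = 60
data_read_bytes_sum = 180_000_000     # DataReadBytes, Sum (bytes)
data_read_ops_sum = 12_000            # DataReadOperations, Sum (count)
data_read_op_time_sum = 6.0           # DataReadOperationTime, Sum (seconds)

# Average read throughput: divide the Sum by the seconds in the period.
read_throughput_bps = data_read_bytes_sum / period_seconds

# Average read latency: DataReadOperationTime / DataReadOperations
# over the same period.
avg_read_latency_s = data_read_op_time_sum / data_read_ops_sum

print(read_throughput_bps, avg_read_latency_s)
```

The same pattern applies to the write-side metrics (`DataWriteBytes`, `DataWriteOperations`, and `DataWriteOperationTime`).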

## File server metrics
<a name="fsxn-file-server-metrics"></a>

All of these metrics take one dimension, `FileSystemId`. 


| Metric | Description | 
| --- | --- | 
| CPUUtilization |  The percent utilization of the file system's CPU resources.  The `Average` statistic is the average CPU utilization of the file system over a specified period.  The `Minimum` statistic is the lowest CPU utilization of the file system over a specified period.  The `Maximum` statistic is the highest CPU utilization of the file system over a specified period.  Units: Percent  Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
| FileServerDiskThroughputUtilization |  The disk throughput between your file server and the primary tier, as a percentage of the provisioned limit determined by throughput capacity.  The `Average` statistic is the average percent utilization of the file servers' disk throughput over a specified period. The `Minimum` statistic is the lowest percent utilization of the file servers' disk throughput over a specified period.  The `Maximum` statistic is the highest utilization of the file servers' disk throughput over a specified period.  Units: Percent Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
| FileServerDiskThroughputBalance |  The percentage of available burst credits for disk throughput between your file server and the primary tier. This is valid for file systems that are provisioned with a throughput capacity of less than 512 MBps. The `Average` statistic is the average burst balance available over a specified period.  The `Minimum` statistic is the minimum burst balance available over a specified period.  The `Maximum` statistic is the maximum burst balance available over a specified period.  Units: Percent  Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
| FileServerDiskIopsBalance |  The percentage of available burst credits for disk IOPS between your file server and the primary tier. This is valid for file systems that are provisioned with a throughput capacity of less than 512 MBps. The `Average` statistic is the average burst balance available over a specified period.  The `Minimum` statistic is the minimum burst balance available over a specified period.  The `Maximum` statistic is the maximum burst balance available over a specified period.  Units: Percent  Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
| FileServerDiskIopsUtilization |  The percentage of IOPS utilization of available disk IOPS capacity for your file server.  The `Average` statistic is the average disk IOPS utilization of the file system over a specified period.  The `Minimum` statistic is the minimum disk IOPS utilization of the file system over a specified period.  The `Maximum` statistic is the maximum disk IOPS utilization of the file system over a specified period.  Units: Percent  Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
| FileServerCacheHitRatio |   The percentage of all read requests that are served by data in the file system's RAM and NVMe caches. A higher percentage means that more reads are served by the file system's read caches.   Units: Percent  The `Average` statistic is the average cache hit percent for the file system over a specified period.   The `Minimum` statistic is the lowest cache hit percent for the file system over a specified period.  The `Maximum` statistic is the highest cache hit percent for the file system over a specified period.   Valid statistics: `Average`, `Minimum`, and `Maximum`   | 

## Disk I/O metrics
<a name="fsxn-disk-IO-metrics"></a>

All of these metrics take one dimension, `FileSystemId`. 


| Metric | Description | 
| --- | --- | 
| DiskReadBytes |  The number of bytes (disk I/O) from any disk reads to the file system's primary tier.  The `Sum` statistic is the total number of bytes read from the file system over a specified period.  To calculate read disk throughput (bytes per second) for any statistic, divide the `Sum` statistic by the seconds in the specified period.  Units: Bytes  Valid statistics: `Sum`  | 
| DiskWriteBytes |  The number of bytes (disk I/O) from any disk writes to the file system's primary tier.  The `Sum` statistic is the total number of bytes written from the file system over a specified period.  To calculate write disk throughput (bytes per second) for any statistic, divide the `Sum` statistic by the seconds in the specified period.  Units: Bytes  Valid statistics: `Sum`  | 
| DiskIopsUtilization |  The disk IOPS between your file server and storage volumes, as a percentage of the primary tier's provisioned disk IOPS limit.  The `Average` statistic is the average disk IOPS utilization of the file system over a specified period.  The `Minimum` statistic is the minimum disk IOPS utilization of the file system over a specified period.  The `Maximum` statistic is the maximum disk IOPS utilization of the file system over a specified period.  Units: Percent  Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
| DiskReadOperations |  The number of read operations (disk I/O) from the file system's primary tier.  The `Sum` statistic is the total number of read operations from the primary tier over a specified period.  Units: Count  Valid statistics: `Sum`  | 
| DiskWriteOperations |  The number of write operations (disk I/O) to the file system's primary tier.  The `Sum` statistic is the total number of write operations to the primary tier over a specified period.  Units: Count  Valid statistics: `Sum`  | 

## Storage capacity metrics
<a name="fsxn-storage-volume-metrics"></a>

All of these metrics take one dimension, `FileSystemId`. 


| Metric | Description | 
| --- | --- | 
| StorageEfficiencySavings |  The bytes saved from storage efficiency features (compression, deduplication, and compaction). The `Average` statistic is the average storage efficiency savings over a specified period. To calculate storage efficiency savings as a percentage of all data stored, over a one minute period, divide `StorageEfficiencySavings` by the sum of `StorageEfficiencySavings` and the `StorageUsed` file system metric, using the `Sum` statistic for `StorageUsed`.  The `Minimum` statistic is the minimum storage efficiency savings over a specified period.  The `Maximum` statistic is the maximum storage efficiency savings over a specified period.  Units: Bytes Valid statistics: `Average`, `Minimum`, and `Maximum`   | 
| StorageUsed |  The total amount of physical data stored on the file system, on both the primary (SSD) tier and the capacity pool tier. This metric includes savings from storage-efficiency features, such as data compression and deduplication. Units: Bytes Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
| LogicalDataStored |  The total amount of logical data stored on the file system, considering both the SSD tier and the capacity pool tier. This metric includes the total logical size of snapshots and FlexClones, but does not include storage efficiency savings achieved through compression, compaction, and deduplication. To compute storage-efficiency savings in bytes, take the `Average` of `StorageUsed` over a given period and subtract it from the `Average` of `LogicalDataStored` over the same period.  To compute storage-efficiency savings as a percentage of total logical data size, take the `Average` of `StorageUsed` over a given period and subtract it from the `Average` of `LogicalDataStored` over the same period. Then divide the difference by the `Average` of `LogicalDataStored` over the same period. Units: Bytes Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
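The storage-efficiency formula in the `LogicalDataStored` description can be sketched as follows. The byte counts are hypothetical; the calculation (subtract `StorageUsed` from `LogicalDataStored`, then divide by `LogicalDataStored`) comes from the table above:

```python
# Hypothetical Average statistics over the same period, in bytes.
logical_data_stored = 1_000_000_000_000   # LogicalDataStored, Average
storage_used = 400_000_000_000            # StorageUsed, Average

# Storage-efficiency savings in bytes.
savings_bytes = logical_data_stored - storage_used

# Savings as a percentage of total logical data size.
savings_pct = 100 * savings_bytes / logical_data_stored

print(savings_bytes, savings_pct)
```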

## Detailed file system metrics
<a name="detailed-fs-metrics"></a>

Detailed file system metrics are detailed storage-utilization metrics for each of your storage tiers. Detailed file system metrics all have the dimensions `FileSystemId`, `StorageTier`, and `DataType`.
+ The `StorageTier` dimension indicates the storage tier that the metric measures, with possible values of `SSD` and `StandardCapacityPool`.
+ The `DataType` dimension indicates the type of data that the metric measures, with the possible value `All`.

There is a row for each unique combination of a given metric and dimensional key-value pairs, with a description of what that combination measures.


| Metric | Description | 
| --- | --- | 
| StorageCapacityUtilization |  The storage capacity utilization for each of your file system's aggregates. There is one metric emitted each minute for each of your file system's aggregates. The `Average` statistic is the average amount of storage capacity utilization for your file system's performance tier over the specified period. The `Minimum` statistic is the lowest amount of storage capacity utilization for your file system's performance tier over the specified period. The `Maximum` statistic is the highest amount of storage capacity utilization for your file system's performance tier over the specified period. Units: Percent Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
| StorageCapacity |  The total storage capacity of the primary (SSD) tier. Units: Bytes Valid statistics: `Maximum`  | 
| StorageUsed |  The used physical storage capacity in bytes, specific to the storage tier. This value includes savings from storage-efficiency features, such as data compression and deduplication. Valid dimension values for `StorageTier` are `SSD` and `StandardCapacityPool`, corresponding to the storage tier that this metric measures. This metric also requires the `DataType` dimension with the value `All`. The `Average`, `Minimum`, and `Maximum` statistics are per-tier storage consumption in bytes for the given period.  To calculate storage capacity utilization of your primary (SSD) storage tier, divide any of these statistics by the `Maximum` `StorageCapacity` over the same period, with the `StorageTier` dimension equal to `SSD`.  To calculate the free storage capacity of your primary (SSD) storage tier in bytes, subtract any of these statistics from the `Maximum` `StorageCapacity` over the same period, with the dimension `StorageTier` equal to `SSD`. Units: Bytes Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
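
The SSD-tier utilization and free-capacity calculations described above can be sketched as follows; the capacities are hypothetical:

```python
def ssd_tier_usage(storage_used: float, storage_capacity: float) -> tuple[float, float]:
    """Utilization (percent) and free capacity (bytes) of the primary (SSD) tier,
    from StorageUsed (with StorageTier=SSD) and the Maximum of StorageCapacity
    over the same period."""
    utilization_percent = 100.0 * storage_used / storage_capacity
    free_bytes = storage_capacity - storage_used
    return utilization_percent, free_bytes

# Hypothetical values: 1.5 TiB used out of a 2 TiB SSD tier.
TIB = 1024 ** 4
ssd_util, ssd_free = ssd_tier_usage(1.5 * TIB, 2 * TIB)
```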

# Second-generation file system metrics
<a name="so-file-system-metrics"></a>

The following metrics are provided for FSx for ONTAP second-generation file systems. A data point is emitted for each HA pair and, in the case of storage utilization metrics, for each aggregate.

**Note**  
If you have a file system with multiple HA pairs, you can also use the [single-HA pair file system metrics](file-system-metrics.md) and the [volume metrics](volume-metrics.md).

**Topics**
+ [Network I/O metrics](#so-network-IO-metrics)
+ [File server metrics](#so-file-server-metrics)
+ [Disk I/O metrics](#so-disk-IO-metrics)
+ [Detailed file system metrics](#so-detailed-fs-metrics)

## Network I/O metrics
<a name="so-network-IO-metrics"></a>

All of these metrics take two dimensions, `FileSystemId` and `FileServer`.
+ `FileSystemId` – Your file system's AWS resource ID.
+ `FileServer` – The name of a file server (or *node*) in ONTAP (for example, `FsxId01234567890abcdef-01`). Odd-numbered file servers are preferred file servers (that is, they serve traffic unless the file system has failed over to the secondary file server), while even-numbered file servers are secondary file servers (that is, they serve traffic only when their partner is unavailable). Because of this, secondary file servers typically show less utilization than preferred file servers.
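
The odd/even naming convention can be used to group datapoints by file server role. A minimal sketch (the helper name and role labels are illustrative, not part of any AWS API):

```python
def file_server_role(file_server: str) -> str:
    """Classify an ONTAP file server name such as 'FsxId01234567890abcdef-01'.

    Illustrative helper: odd-numbered file servers are preferred (serve
    traffic normally); even-numbered ones are secondary (standby).
    """
    index = int(file_server.rsplit("-", 1)[1])
    return "preferred" if index % 2 == 1 else "secondary"
```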


| Metric | Description | 
| --- | --- | 
| NetworkThroughputUtilization |  Network throughput utilization as a percentage of available network throughput for your file system. This metric is equivalent to the maximum of `NetworkSentBytes` and `NetworkReceivedBytes` as a percentage of the network throughput capacity of one HA pair for your file system. All traffic is considered in this metric, including background tasks (such as SnapMirror, tiering, and backups). There is one metric emitted each minute for each of your file system's file servers. The `Average` statistic is the average network throughput utilization for the given file server over the specified period. The `Minimum` statistic is the lowest network throughput utilization for the given file server over one minute, for the specified period. The `Maximum` statistic is the highest network throughput utilization for the given file server over one minute, for the specified period. Units: Percent  Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
| NetworkSentBytes |  The number of bytes (network IO) sent by your file system. All traffic is considered in this metric, including background tasks (such as SnapMirror, tiering, and backups). There is one metric emitted each minute for each of your file system's file servers. The `Sum` statistic is the total number of bytes sent over the network by the given file server over the specified period. The `Average` statistic is the average number of bytes sent over the network by the given file server over the specified period. The `Minimum` statistic is the lowest number of bytes sent over the network by the given file server over the specified period. The `Maximum` statistic is the highest number of bytes sent over the network by the given file server over the specified period. To calculate sent throughput (bytes per second) for any statistic, divide the statistic by the seconds in the specified period.  Units: Bytes  Valid statistics: `Sum`, `Average`, `Minimum`, and `Maximum`  | 
| NetworkReceivedBytes |  The number of bytes (network IO) received by your file system. All traffic is considered in this metric, including background tasks (such as SnapMirror, tiering, and backups). There is one metric emitted each minute for each of your file system's file servers. The `Sum` statistic is the total number of bytes received over the network by the given file server over the specified period. The `Average` statistic is the average number of bytes received over the network by the given file server each minute over the specified period. The `Minimum` statistic is the lowest number of bytes received over the network by the given file server each minute over the specified period. The `Maximum` statistic is the highest number of bytes received over the network by the given file server each minute over the specified period. To calculate received throughput (bytes per second) for any statistic, divide the statistic by the seconds in the period. Units: Bytes  Valid statistics: `Sum`, `Average`, `Minimum`, and `Maximum`  | 
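
The bytes-to-throughput conversion described for `NetworkSentBytes` and `NetworkReceivedBytes` can be sketched as follows; the datapoint values are hypothetical:

```python
def throughput_bytes_per_second(sum_bytes: float, period_seconds: int) -> float:
    """Convert the Sum of NetworkSentBytes or NetworkReceivedBytes over a
    period into average throughput in bytes per second."""
    return sum_bytes / period_seconds

# Hypothetical datapoint: 30 GiB sent over a 300-second (5-minute) period.
GIB = 1024 ** 3
sent_bps = throughput_bytes_per_second(30 * GIB, 300)
```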

## File server metrics
<a name="so-file-server-metrics"></a>

All of these metrics take two dimensions, `FileSystemId` and `FileServer`.


| Metric | Description | 
| --- | --- | 
| CPUUtilization |  The percent utilization of the file system's CPU resources. There is one metric emitted each minute for each of your file system's file servers. The `Average` statistic is the average CPU utilization of the file system over a specified period.  The `Minimum` statistic is the lowest CPU utilization for the given file server over the specified period. The `Maximum` statistic is the highest CPU utilization for the given file server over the specified period. Units: Percent  Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
| FileServerDiskThroughputUtilization |  The disk throughput between your file server and its aggregate, as a percentage of the provisioned limit determined by throughput capacity. All traffic is considered in this metric, including background tasks (such as SnapMirror, tiering, and backups). This metric is equivalent to the sum of `DiskReadBytes` and `DiskWriteBytes` as a percentage of the disk throughput capacity of one HA pair for your file system. There is one metric emitted each minute for each of your file system's file servers. The `Average` statistic is the average file server disk throughput utilization for the given file server over the specified period. The `Minimum` statistic is the lowest file server disk throughput utilization for the given file server over the specified period. The `Maximum` statistic is the highest file server disk throughput utilization for the given file server over the specified period. Units: Percent Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
| FileServerDiskIopsUtilization |  The IOPS utilization of available disk IOPS capacity for your file server, as a percentage of its disk IOPS limit. This differs from `DiskIopsUtilization` in that it measures the utilization of disk IOPS out of the maximum that your file server can handle, as opposed to out of your provisioned disk IOPS. All traffic is considered in this metric, including background tasks (such as SnapMirror, tiering, and backups). There is one metric emitted each minute for each of your file system's file servers. The `Average` statistic is the average disk IOPS utilization for the given file server over the specified period. The `Minimum` statistic is the lowest disk IOPS utilization for the given file server over the specified period. The `Maximum` statistic is the highest disk IOPS utilization for the given file server over the specified period. Units: Percent  Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
| FileServerCacheHitRatio |  The percentage of all read requests that are served from data residing in your file system's RAM or NVMe caches, for each of your HA pairs (that is, for the active file server in an HA pair). A higher percentage indicates a higher ratio of cached reads to total reads. All I/O is considered, including background tasks (such as SnapMirror, tiering, and backups). There is one metric emitted each minute for each of your file system's file servers. The `Average` statistic is the average cache hit ratio for one of your file system's HA pairs over the specified period. The `Minimum` statistic is the lowest cache hit ratio for one of your file system's HA pairs over the specified period. The `Maximum` statistic is the highest cache hit ratio for one of your file system's HA pairs over the specified period. Units: Percent  Valid statistics: `Average`, `Minimum`, and `Maximum`  | 

## Disk I/O metrics
<a name="so-disk-IO-metrics"></a>

All of these metrics take two dimensions, `FileSystemId` and `Aggregate`.
+ `FileSystemId` – Your file system's AWS resource ID.
+ `Aggregate` – Your file system's performance tier consists of multiple storage pools called *aggregates*. There is one aggregate for each HA pair. For example, aggregate `aggr1` maps to file server `FsxId01234567890abcdef-01` (the active file server) and file server `FsxId01234567890abcdef-02` (the secondary file server) in an HA pair.


| Metric | Description | 
| --- | --- | 
| DiskReadBytes |  The number of bytes (disk IO) from any disk reads from this aggregate. All traffic is considered in this metric, including background tasks (such as SnapMirror, tiering, and backups). There is one metric emitted each minute for each of your file system's aggregates. During SSD capacity decrease operations, this metric is reported for both the original aggregate (`aggr1_old`) and the new smaller aggregate (`aggr1`). The `Sum` statistic is the total number of bytes read from the given aggregate over the specified period. The `Average` statistic is the average number of bytes read each minute from the given aggregate over the specified period. The `Minimum` statistic is the lowest number of bytes read each minute from the given aggregate over the specified period. The `Maximum` statistic is the highest number of bytes read each minute from the given aggregate over the specified period. To calculate read disk throughput (bytes per second) for any statistic, divide the statistic by the seconds in the period. Units: Bytes  Valid statistics: `Sum`, `Average`, `Minimum`, and `Maximum`  | 
| DiskWriteBytes |  The number of bytes (disk IO) from any disk writes to this aggregate. All traffic is considered in this metric, including background tasks (such as SnapMirror, tiering, and backups). There is one metric emitted each minute for each of your file system's aggregates. During SSD capacity decrease operations, this metric is reported for both the original aggregate (`aggr1_old`) and the new smaller aggregate (`aggr1`). The `Sum` statistic is the total number of bytes written to the given aggregate over the specified period. The `Average` statistic is the average number of bytes written to the given aggregate each minute over the specified period. The `Minimum` statistic is the lowest number of bytes written to the given aggregate each minute over the specified period. The `Maximum` statistic is the highest number of bytes written to the given aggregate each minute over the specified period. To calculate write disk throughput (bytes per second) for any statistic, divide the statistic by the seconds in the specified period.  Units: Bytes  Valid statistics: `Sum`, `Average`, `Minimum`, and `Maximum`  | 
| DiskIopsUtilization |  The disk IOPS utilization of one aggregate, as a percentage of the aggregate's disk IOPS limit (that is, the file system's total IOPS divided by the number of HA pairs for your file system). This differs from `FileServerDiskIopsUtilization` in that it measures disk IOPS utilization against your provisioned IOPS limit, as opposed to against the maximum disk IOPS supported by the file server (that is, dictated by your configured throughput capacity per HA pair). All traffic is considered in this metric, including background tasks (such as SnapMirror, tiering, and backups). There is one metric emitted each minute for each of your file system's aggregates. During SSD capacity decrease operations, this metric is reported for both the original aggregate (`aggr1_old`) and the new smaller aggregate (`aggr1`). The `Average` statistic is the average disk IOPS utilization for the given aggregate over the specified period. The `Minimum` statistic is the lowest disk IOPS utilization for the given aggregate over the specified period. The `Maximum` statistic is the highest disk IOPS utilization for the given aggregate over the specified period. Units: Percent  Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
| DiskReadOperations |  The number of read operations (disk IO) to this aggregate. All traffic is considered in this metric, including background tasks (such as SnapMirror, tiering, and backups). There is one metric emitted each minute for each of your file system's aggregates. During SSD capacity decrease operations, this metric is reported for both the original aggregate (`aggr1_old`) and the new smaller aggregate (`aggr1`). The `Sum` statistic is the total number of read operations performed by the given aggregate over the specified period. The `Average` statistic is the average number of read operations performed each minute by the given aggregate over the specified period. The `Minimum` statistic is the lowest number of read operations performed each minute by the given aggregate over the specified period. The `Maximum` statistic is the highest number of read operations performed each minute by the given aggregate over the specified period. To calculate average disk IOPS over the period, use the `Average` statistic and divide the result by 60 (seconds). Units: Count  Valid statistics: `Sum`, `Average`, `Minimum`, and `Maximum`  | 
| DiskWriteOperations |  The number of write operations (disk IO) to this aggregate. All traffic is considered in this metric, including background tasks (such as SnapMirror, tiering, and backups). There is one metric emitted each minute for each of your file system's aggregates. During SSD capacity decrease operations, this metric is reported for both the original aggregate (`aggr1_old`) and the new smaller aggregate (`aggr1`). The `Sum` statistic is the total number of write operations performed by the given aggregate over the specified period. The `Average` statistic is the average number of write operations performed each minute by the given aggregate over the specified period. To calculate average disk IOPS over the period, use the `Average` statistic and divide the result by 60 (seconds). Units: Count  Valid statistics: `Sum` and `Average`  | 
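
The IOPS derivation in the rows above (the per-minute `Average` divided by 60) is a one-liner; the operation count is hypothetical:

```python
def average_disk_iops(average_ops_per_minute: float) -> float:
    """Average disk IOPS from the Average statistic of DiskReadOperations or
    DiskWriteOperations, which report operations per one-minute datapoint."""
    return average_ops_per_minute / 60

read_iops = average_disk_iops(90_000)  # hypothetical: 90,000 reads per minute
```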

## Detailed file system metrics
<a name="so-detailed-fs-metrics"></a>

Detailed file system metrics are detailed storage-utilization metrics for each of your storage tiers. Detailed file system metrics have either the `FileSystemId`, `StorageTier`, and `DataType` dimensions, or the `FileSystemId`, `StorageTier`, `DataType`, and `Aggregate` dimensions.
+ When the `Aggregate` dimension is not supplied, the metrics are for your entire file system. The `StorageUsed` and `StorageCapacity` metrics have a single data point each minute corresponding to the file system's total consumed storage (per storage tier) and total storage capacity (for the SSD tier). The `StorageCapacityUtilization` metric, however, is emitted each minute for each aggregate.
+ When the `Aggregate` dimension is supplied, the metrics are for each aggregate.

The meanings of the dimensions are as follows:
+ `FileSystemId` – Your file system's AWS resource ID.
+ `Aggregate` – Your file system's performance tier consists of multiple storage pools called *aggregates*. There is one aggregate for each HA pair. For example, aggregate `aggr1` maps to file server `FsxId01234567890abcdef-01` (the active file server) and file server `FsxId01234567890abcdef-02` (the secondary file server) in an HA pair.
+ `StorageTier` – Indicates the storage tier that the metric measures, with possible values of `SSD` and `StandardCapacityPool`.
+ `DataType` – Indicates the type of data that the metric measures, with the possible value `All`.

There is a row for each unique combination of a given metric and dimensional key-value pairs, with a description of what that combination measures.


| Metric | Description | 
| --- | --- | 
| StorageCapacityUtilization |  The storage capacity utilization for a given file system aggregate. There is one metric emitted each minute for each of your file system's aggregates. The `Average` statistic is the average storage capacity utilization for a given aggregate over the specified period. The `Minimum` statistic is the lowest storage capacity utilization for a given aggregate over the specified period. The `Maximum` statistic is the highest storage capacity utilization for a given aggregate over the specified period. During SSD capacity decrease operations, this metric is reported for both the original aggregate (`aggr1_old`) and the new smaller aggregate (`aggr1`). Units: Percent Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
| StorageCapacity |  The storage capacity for a given file system aggregate. There is one metric emitted each minute for each of your file system's aggregates. The `Average` statistic is the average amount of storage capacity for a given aggregate over the specified period. The `Minimum` statistic is the minimum amount of storage capacity for a given aggregate over the specified period. The `Maximum` statistic is the maximum amount of storage capacity for a given aggregate over the specified period. During SSD capacity decrease operations, this metric is reported for both the original aggregate (`aggr1_old`) and the new smaller aggregate (`aggr1`). Units: Bytes Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
| StorageUsed |  The used physical storage capacity in bytes, specific to the storage tier. This value includes savings from storage-efficiency features, such as data compression and deduplication. Valid dimension values for `StorageTier` are `SSD` and `StandardCapacityPool`, corresponding to the storage tier that this metric measures. There is one metric emitted each minute for each of your file system's aggregates. The `Average` statistic is the average amount of physical storage capacity consumed on the given storage tier by the given aggregate over the specified period. The `Minimum` statistic is the minimum amount of physical storage capacity consumed on the given storage tier by the given aggregate over the specified period. The `Maximum` statistic is the maximum amount of physical storage capacity consumed on the given storage tier by the given aggregate over the specified period. During SSD capacity decrease operations, this metric is reported for both the original aggregate (`aggr1_old`) and the new smaller aggregate (`aggr1`). Units: Bytes Valid statistics: `Average`, `Minimum`, and `Maximum`  | 

# Volume metrics
<a name="volume-metrics"></a>

Your Amazon FSx for NetApp ONTAP file system can have one or more volumes that store your data. Each of these volumes has a set of CloudWatch metrics, classified as either **Volume metrics** or **Detailed volume metrics**.
+ **Volume metrics** are per-volume performance and storage metrics that take two dimensions, `FileSystemId` and `VolumeId`. `FileSystemId` maps to the file system that the volume belongs to.
+ **Detailed volume metrics** are per-storage-tier metrics that measure storage consumption per tier with the `StorageTier` dimension (with possible values of `SSD` and `StandardCapacityPool`) and per data type with the `DataType` dimension (with possible values of `User`, `Snapshot`, and `Other`). These metrics have the `FileSystemId`, `VolumeId`, `StorageTier`, and `DataType` dimensions.

**Topics**
+ [Network I/O metrics](#fsxn-vol-network-IO-metrics)
+ [Storage capacity metrics](#fsxn-vol-storage-volume-metrics)
+ [Detailed volume metrics](#detailed-vol-metrics)

## Network I/O metrics
<a name="fsxn-vol-network-IO-metrics"></a>

All of these metrics take two dimensions, `FileSystemId` and `VolumeId`. 


| Metric | Description | 
| --- | --- | 
| DataReadBytes |  The number of bytes (network I/O) read from the volume by clients. The `Sum` statistic is the total number of bytes associated with read operations during the specified period. To calculate the average throughput (bytes per second) for a period, divide the `Sum` statistic by the number of seconds in the specified period. Units: Bytes Valid statistics: `Sum`  | 
| DataWriteBytes |  The number of bytes (network I/O) written to the volume by clients. The `Sum` statistic is the total number of bytes associated with write operations during the specified period. To calculate the average throughput (bytes per second) for a period, divide the `Sum` statistic by the number of seconds in the specified period. Units: Bytes Valid statistics: `Sum`  | 
| DataReadOperations |  The number of read operations (network I/O) on the volume by clients. The `Sum` statistic is the total number of read operations during the specified period. To calculate the average read operations per second for a period, divide the `Sum` statistic by the number of seconds in the specified period. Units: Count Valid statistics: `Sum`  | 
| DataWriteOperations |  The number of write operations (network I/O) on the volume by clients. The `Sum` statistic is the total number of write operations during the specified period. To calculate the average write operations per second for a period, divide the `Sum` statistic by the number of seconds in the specified period. Units: Count Valid statistics: `Sum`  | 
| MetadataOperations |  The number of I/O operations (network I/O) from metadata activities by clients to the volume. The `Sum` statistic is the total number of metadata operations during the specified period. To calculate the average metadata operations per second for a period, divide the `Sum` statistic by the number of seconds in the specified period. Units: Count Valid statistics: `Sum`  | 
| DataReadOperationTime |  The sum of total time spent within the volume for read operations (network I/O) from clients accessing data in the volume. The `Sum` statistic is the total number of seconds spent by read operations during the specified period. To calculate the average read latency for a period, divide the `Sum` statistic by the `Sum` of the `DataReadOperations` metric over the same period. Units: Seconds Valid statistics: `Sum`  | 
| DataWriteOperationTime |  The sum of total time spent within the volume for fulfilling write operations (network I/O) from clients accessing data in the volume. The `Sum` statistic is the total number of seconds spent by write operations during the specified period. To calculate the average write latency for a period, divide the `Sum` statistic by the `Sum` of the `DataWriteOperations` metric over the same period. Units: Seconds Valid statistics: `Sum`  | 
| MetadataOperationTime |  The sum of total time spent within the volume for fulfilling metadata operations (network I/O) from clients that are accessing data in the volume. The `Sum` statistic is the total number of seconds spent by metadata operations during the specified period. To calculate the average metadata latency for a period, divide the `Sum` statistic by the `Sum` of the `MetadataOperations` metric over the same period. Units: Seconds Valid statistics: `Sum`  | 
| CapacityPoolReadBytes | The number of bytes read (network I/O) from the volume's capacity pool tier.  To ensure data integrity, ONTAP performs a read operation on the capacity pool immediately after performing a write operation.  The `Sum` statistic is the total number of bytes read from the volume's capacity pool tier over a specified period. To calculate capacity pool bytes per second, divide the `Sum` statistic by the seconds in a specified period. Units: Bytes Valid statistics: `Sum` | 
| CapacityPoolReadOperations |  The number of read operations (network I/O) from the volume's capacity pool tier. This translates to a capacity pool read request.  To ensure data integrity, ONTAP performs a read operation on the capacity pool immediately after performing a write operation.  The `Sum` statistic is the total number of read operations from the volume's capacity pool tier over a specified period. To calculate capacity pool requests per second, divide the `Sum` statistic by the seconds in a specified period.  Units: Count Valid statistics: `Sum`  | 
| CapacityPoolWriteBytes | The number of bytes written (network I/O) to the volume's capacity pool tier. To ensure data integrity, ONTAP performs a read operation on the capacity pool immediately after performing a write operation.  The `Sum` statistic is the total number of bytes written to the volume's capacity pool tier over a specified period. To calculate capacity pool bytes per second, divide the `Sum` statistic by the seconds in a specified period.  Units: Bytes Valid statistics: `Sum` | 
| CapacityPoolWriteOperations |  The number of write operations (network I/O) to the volume's capacity pool tier. This translates to a capacity pool write request.  To ensure data integrity, ONTAP performs a read operation on the capacity pool immediately after performing a write operation.  The `Sum` statistic is the total number of write operations to the volume's capacity pool tier over a specified period. To calculate capacity pool requests per second, divide the `Sum` statistic by the seconds in a specified period.  Units: Count Valid statistics: `Sum`  | 
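
The latency calculations described for `DataReadOperationTime`, `DataWriteOperationTime`, and `MetadataOperationTime` can be sketched as follows; the sample values are hypothetical:

```python
def average_latency_seconds(sum_operation_time: float, sum_operations: float) -> float:
    """Average per-operation latency: the Sum of an OperationTime metric
    divided by the Sum of the matching Operations metric over the same period."""
    return sum_operation_time / sum_operations

# Hypothetical period: 12.5 seconds spent servicing 25,000 read operations.
read_latency = average_latency_seconds(12.5, 25_000)  # 0.5 ms per operation
```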

## Storage capacity metrics
<a name="fsxn-vol-storage-volume-metrics"></a>

All of these metrics take two dimensions, `FileSystemId` and `VolumeId`. 


| Metric | Description | 
| --- | --- | 
| StorageCapacity |  The size of the volume in bytes. Units: Bytes Valid statistics: `Maximum`  | 
| StorageUsed |  The used logical storage capacity of the volume. Units: Bytes Valid statistics: `Average`  | 
| StorageCapacityUtilization |  The storage capacity utilization of the volume. Units: Percent Valid statistics: `Average`  | 
| FilesUsed |  The used files (number of files or inodes) on the volume. Units: Count Valid statistics: `Average`  | 
| FilesCapacity |  The total number of inodes that can be created on the volume. Units: Count Valid statistics: `Maximum`  | 
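
Together, `FilesUsed` and `FilesCapacity` give the volume's inode utilization; a minimal sketch with hypothetical counts:

```python
def inode_utilization_percent(files_used: float, files_capacity: float) -> float:
    """Percent of the volume's inodes in use: FilesUsed / FilesCapacity."""
    return 100.0 * files_used / files_capacity

pct_inodes_used = inode_utilization_percent(340_000, 4_000_000)  # hypothetical
```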

## Detailed volume metrics
<a name="detailed-vol-metrics"></a>

Detailed volume metrics take more dimensions than volume metrics, enabling more granular measurements of your data. All detailed volume metrics have the dimensions `FileSystemId`, `VolumeId`, `StorageTier`, and `DataType`.
+ The `StorageTier` dimension indicates the storage tier that the metric measures, with possible values of `All`, `SSD`, and `StandardCapacityPool`.
+ The `DataType` dimension indicates the type of data that the metric measures, with possible values of `All`, `User`, `Snapshot`, and `Other`.

The following table defines what the `StorageUsed` metric measures for the listed dimensions. 


| Metric | Description | 
| --- | --- | 
| StorageUsed |  The amount of logical space used, in bytes. This metric measures different types of space consumption depending on the dimensions used with this metric. When setting `StorageTier` to `SSD` or `StandardCapacityPool`, and setting `DataType` to `All`, this metric measures the logical space usage for this volume for your SSD and capacity pool tiers, respectively. When setting the `DataType` dimension to `User`, `Snapshot`, or `Other`, and setting `StorageTier` to `All`, this metric measures the logical space usage for each respective type of data. The `Snapshot` data consumption includes the snapshot reserve, which is 5% of the volume's size by default.  Units: Bytes Valid statistics: `Average`, `Minimum`, and `Maximum`  | 
| StorageCapacityUtilization |  The percentage of the volume's used physical disk space.  Units: Percent Valid statistics: `Maximum`  | 
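
Querying any metric in these tables follows the same pattern: supply every dimension in the category's dimension set. A minimal sketch of building `GetMetricStatistics` parameters for a detailed volume metric, assuming placeholder resource IDs (pass the result to a boto3 CloudWatch client as `get_metric_statistics(**params)`):

```python
import datetime

def detailed_volume_metric_params(file_system_id: str, volume_id: str,
                                  storage_tier: str = "All",
                                  data_type: str = "All",
                                  metric_name: str = "StorageUsed",
                                  period_seconds: int = 300) -> dict:
    """Build GetMetricStatistics parameters for a detailed volume metric,
    which requires all four dimensions to be specified."""
    end = datetime.datetime.now(datetime.timezone.utc)
    return {
        "Namespace": "AWS/FSx",
        "MetricName": metric_name,
        "Dimensions": [
            {"Name": "FileSystemId", "Value": file_system_id},
            {"Name": "VolumeId", "Value": volume_id},
            {"Name": "StorageTier", "Value": storage_tier},
            {"Name": "DataType", "Value": data_type},
        ],
        "StartTime": end - datetime.timedelta(hours=1),
        "EndTime": end,
        "Period": period_seconds,
        "Statistics": ["Average"],
    }

# Placeholder resource IDs for illustration.
params = detailed_volume_metric_params("fs-0123456789abcdef0", "fsvol-0123456789abcdef0")
```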