

# Amazon EFS quotas
<a name="limits"></a>

Following, you can find out about quotas when working with Amazon EFS.

**Topics**
+ [Amazon EFS quotas that you can increase](#soft-limits)
+ [Amazon EFS resource quotas that you cannot change](#limits-efs-resources-per-account-per-region)
+ [Quotas for NFS clients](#limits-client-specific)
+ [Quotas for Amazon EFS file systems](#limits-fs-specific)
+ [Unsupported NFSv4.0 and 4.1 features](#nfs4-unsupported-features)
+ [Additional considerations](#limits-additional-considerations)
+ [Troubleshooting file operation errors related to quotas](troubleshooting-efs-fileop-errors.md)

## Amazon EFS quotas that you can increase
<a name="soft-limits"></a>

Service Quotas is an AWS service that helps you manage your quotas, or limits, from one location. In the [Service Quotas console](https://console.aws.amazon.com/servicequotas/home?region=us-east-1#!/dashboard), you can view Amazon EFS limit values and request a quota increase for the number of EFS file systems in an AWS Region and the read IOPS for frequently accessed data.

You can also request an increase for the following Amazon EFS quotas by contacting AWS Support. To learn more, see [Requesting a quota increase](#request-limit-increase). The Amazon EFS service team reviews each request individually.
+ Number of file systems for each customer account. 
+ Number of access points for each file system.
+ Maximum read IOPS per file system using Elastic throughput. When the read IOPS for frequently accessed file systems is increased, both the read IOPS for infrequently accessed file systems and the write IOPS are also increased. 
+ Elastic throughput per Regional file system for all connected clients in an AWS Region.
+ Provisioned throughput per Regional file system for all connected clients in an AWS Region.

The following table lists general default resource quotas that you can request to change.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/efs/latest/ug/limits.html)

<a name="ElasticTPLimits"></a>

The following table lists Elastic throughput quotas per file system for all connected clients in each AWS Region.


**Regional file systems – Total default Elastic throughput per file system for all connected clients in each AWS Region**  

| AWS Region | Maximum read throughput  | Maximum write throughput (metered throughput) | 
| --- | --- | --- | 
|  US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (London) Regions | 60 gibibytes per second (GiBps) | 5 GiBps | 
| All other AWS Regions | 20 GiBps | 1 GiBps | 

<a name="ProvisionedTPLimits"></a>

The following table lists the Provisioned throughput quotas per file system for all connected clients in each AWS Region.


**Regional file systems – Total default Provisioned throughput per file system for all connected clients in each AWS Region**  

| AWS Region | Maximum read throughput | Maximum write throughput (metered throughput) | 
| --- | --- | --- | 
|  US East (Ohio), US East (N. Virginia), US West (Oregon), and Europe (Ireland) Regions | 10 GiBps | 3.33 GiBps | 
| All other AWS Regions | 3 GiBps | 1 GiBps | 

 

### Requesting a quota increase
<a name="request-limit-increase"></a>

To request an increase for these quotas through AWS Support, take the following steps. The Amazon EFS team reviews each quota increase request. 

**To request a quota increase through AWS Support**

1. Open the [AWS Support Center](https://console.aws.amazon.com/support/home#/) page, and sign in if necessary. Then choose **Create Case**.

1. Under **Create case**, choose **Service Limit Increase**.

1. For **Limit Type**, choose the type of limit to increase. Fill in the necessary fields in the form, and then choose your preferred method of contact.

## Amazon EFS resource quotas that you cannot change
<a name="limits-efs-resources-per-account-per-region"></a>

Quotas for several Amazon EFS resources cannot be changed, including:
+ Quotas for general resources, such as the number of connections for each file system. 
+ Elastic and Provisioned throughput quotas per One Zone file system for all connected clients in an AWS Region.
+ Bursting throughput quotas per Regional or One Zone file system for all connected clients in an AWS Region.

The following table lists the general resource quotas that cannot be changed.

<a name="ResourceHardLimits"></a>


**General resource quotas that cannot be changed**  

| Resource | Quota | 
| --- | --- | 
| Number of connections for each file system | 25,000 | 
| Number of mount targets for each file system in an Availability Zone | 1 | 
| Number of mount targets for each virtual private cloud (VPC) | 1,400 | 
| Number of security groups for each mount target | 5 | 
| Number of tags for each file system | 50 | 
| Number of VPCs for each file system | 1 | 

**Note**  
Clients can also connect to mount targets that are in an account or VPC that is different from that of the file system. For more information, see [Mounting EFS file systems from another AWS account or VPC](manage-fs-access-vpc-peering.md).

The following table lists the total default Elastic and Provisioned throughput limits per One Zone file system for all connected clients in each AWS Region.


**One Zone file systems – Total default Elastic and Provisioned throughput per file system for all connected clients in each AWS Region**  

| AWS Region | Maximum read throughput | Maximum write throughput (metered throughput) | 
| --- | --- | --- | 
| All AWS Regions | 3 GiBps | 1 GiBps | 

The following table lists the total Bursting throughput limits per file system for all connected clients in each AWS Region.


**Regional and One Zone file systems – Total Bursting throughput per file system for all connected clients in each AWS Region**  

| AWS Region | Maximum read throughput | Maximum write throughput | 
| --- | --- | --- | 
|  US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland) Regions | 5 GiBps | 3 GiBps | 
| All other AWS Regions | 3 GiBps | 1 GiBps | 

## Quotas for NFS clients
<a name="limits-client-specific"></a>

The following quotas for NFS clients apply, assuming a Linux NFSv4.1 client:
+ Maximum combined read and write throughput is 1,500 mebibytes per second (MiBps) for file systems using Elastic throughput and mounted using version 2.0 or later of the Amazon EFS client (amazon-efs-utils version) or the Amazon EFS CSI Driver (aws-efs-csi-driver). Maximum throughput for all other file systems is 500 MiBps. For more information about performance, see [Performance summary](performance.md#performance-overview). NFS client throughput is calculated as the total number of bytes that are sent and received, with a minimum NFS request size of 4 KB (after applying a 1/3 metering rate for read requests).
+ Up to 65,536 active users for each client can have files open at the same time.
+ Up to 65,536 files open at the same time on the instance. Listing directory contents doesn't count as opening a file.
+ Each unique mount on the client can acquire up to a total of 65,536 locks per connection.
+ NFS clients located on-premises or in another AWS Region can observe lower throughput when connecting to Amazon EFS than clients connecting from the same AWS Region, because of the increased network latency. Network latency of 1 ms or less is required to achieve maximum per-client throughput. Use the AWS DataSync data migration service when migrating large datasets from on-premises NFS servers to EFS. 
+ The NFS protocol supports a maximum of 16 group IDs (GIDs) per user and any additional GIDs are truncated from NFS client requests. For more information, see [Access denied to allowed files on NFS file system](troubleshooting-efs-general.md#nfs-16-group-limit).
+ Using Amazon EFS with Microsoft Windows isn't supported.
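Because anything beyond 16 GIDs is truncated from NFS client requests, it can help to check how many groups a user belongs to before troubleshooting access problems. A minimal POSIX shell sketch that checks the current user:

```shell
# Count the group IDs for the current user. The NFS protocol carries
# at most 16 GIDs per request; any additional groups are silently
# dropped from client requests.
user="$(id -un)"
gid_count="$(id -G "$user" | wc -w)"
if [ "$gid_count" -gt 16 ]; then
    echo "warning: $user is in $gid_count groups; NFS sends only the first 16"
else
    echo "$user is in $gid_count groups (within the 16-GID NFS limit)"
fi
```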

## Quotas for Amazon EFS file systems
<a name="limits-fs-specific"></a>

The following quotas are specific to Amazon EFS file systems.


| Resource | Quota | 
| --- | --- | 
| File name length, in bytes | 255 | 
| Symbolic link (symlink) length, in bytes | 4,080 | 
| Number of hard links to a file | 177 | 
| Size of a single file | 52,673,613,135,872 bytes (47.9 TiB) | 
| Number of levels for directory depth | 1,000 | 
| Number of locks on a single file across all instances and users | 512 | 
| Character limit for each file system policy | 20,000 | 
| Number of file operations per second for General Purpose mode¹ | 250,000 | 

¹ For more information about the number of file operations per second for General Purpose mode, see [Performance summary](performance.md#performance-overview).

## Unsupported NFSv4.0 and 4.1 features
<a name="nfs4-unsupported-features"></a>

Although Amazon EFS doesn't support NFSv2 or NFSv3, it does support both NFSv4.1 and NFSv4.0, except for the following features:
+ pNFS
+ Extended attributes
+ Client delegation or callbacks of any type
  + The OPEN operation always returns `OPEN_DELEGATE_NONE` as the delegation type. 
  + The OPEN operation returns `NFSERR_NOTSUPP` for the `CLAIM_DELEGATE_CUR` and `CLAIM_DELEGATE_PREV` claim types.
+ Mandatory locking

  All locks in Amazon EFS are advisory, which means that read and write operations don't check for conflicting locks before the operation is executed. 
+ Deny share

  NFS supports the concept of a share deny. A *share deny* is primarily used by Windows clients to deny others access to a particular file that they have opened. Amazon EFS doesn't support this and returns the NFS error `NFS4ERR_NOTSUPP` for any OPEN commands specifying a share deny value other than `OPEN4_SHARE_DENY_NONE`. Linux NFS clients don't use anything other than `OPEN4_SHARE_DENY_NONE`.
+ Access control lists (ACLs)
+ Amazon EFS doesn't update the `time_access` attribute on file reads. Amazon EFS updates `time_access` in the following events:
  + When a file is created (an inode is created)
  + When an NFS client makes an explicit `setattr` call 
  + On a write to the inode caused by, for example, file size changes or file metadata changes
  + When any inode attribute is updated
+ Namespaces
+ Persistent reply cache
+ Kerberos based security
+ NFSv4.1 data retention
+ SetUID on directories
+ Unsupported file types when using the CREATE operation: Block devices (NF4BLK), character devices (NF4CHR), attribute directory (NF4ATTRDIR), and named attribute (NF4NAMEDATTR).
+ Unsupported attributes: FATTR4\_ARCHIVE, FATTR4\_FILES\_AVAIL, FATTR4\_FILES\_FREE, FATTR4\_FILES\_TOTAL, FATTR4\_FS\_LOCATIONS, FATTR4\_MIMETYPE, FATTR4\_QUOTA\_AVAIL\_HARD, FATTR4\_QUOTA\_AVAIL\_SOFT, FATTR4\_QUOTA\_USED, FATTR4\_TIME\_BACKUP, and FATTR4\_ACL.

   An attempt to set these attributes results in an `NFS4ERR_ATTRNOTSUPP` error that is sent back to the client. 

## Additional considerations
<a name="limits-additional-considerations"></a>

In addition, note the following:
+ For a list of AWS Regions where you can create Amazon EFS file systems, see the [AWS General Reference User Guide](https://docs.aws.amazon.com/general/latest/gr/Welcome.html).
+ Amazon EFS does not support the `nconnect` mount option.
+ You can mount an Amazon EFS file system from on-premises data center servers using Direct Connect and VPN. For more information, see [Tutorial: Mounting with on-premises clients](mounting-fs-mount-helper-direct.md).
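Because `nconnect` is unsupported, it's worth confirming that none of your existing NFS mounts were created with it. A sketch for Linux clients, which reads `/proc/mounts` (the kernel's list of active mounts and their options):

```shell
# Scan active NFSv4 mounts for the unsupported nconnect option.
# Prints nothing and exits quietly when no mount uses it.
if grep -E 'nfs4.*nconnect' /proc/mounts; then
    echo "warning: nconnect found in mount options; Amazon EFS does not support it"
fi
```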

# Troubleshooting file operation errors related to quotas
<a name="troubleshooting-efs-fileop-errors"></a>

When you access EFS file systems, certain limits on the files in the file system apply. Exceeding these limits causes file operation errors. For more information about file-based limits in Amazon EFS, see [Amazon EFS quotas](limits.md).

Following, you can find some common file operation errors and the limits associated with each error.

**Topics**
+ [Checking for open files and file locks](#check-open-files-locks)
+ [Command fails with “Disk quota exceeded” error](#diskquotaerror)
+ [Command fails with "I/O error"](#ioerror)
+ [Command fails with "File name is too long" error](#filenametoolong)
+ [Command fails with "File not found" error](#filenotfound)
+ [Command fails with "Too many links" error](#hardlinkerror)
+ [Command fails with "File too large" error](#filesizeerror)

## Checking for open files and file locks
<a name="check-open-files-locks"></a>

To troubleshoot file operation errors, you can check whether your client system has reached limits by examining open files and file locks on the EFS mount point:
+ **Check open files** – Use your operating system's tools to list open files on the EFS mount path. This helps identify if you're approaching the open file limit. For example, on Linux you can use:

  ```
  lsof <efs-mount-path>
  ```
+ **Check file locks** – Use your system's lock monitoring tools to list files with locks on the EFS mount path. This helps identify if lock limits are being reached. For example, on Linux you can use:

  ```
  lslocks | grep <efs-mount-path>
  ```

The output of these commands shows your current usage, which you can compare against the EFS limits to determine whether file operation errors are related to reaching system or service limits.
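To compare usage against the 65,536 limits directly, you can count the entries rather than list them. A sketch, assuming a Linux client with `lsof` and `lslocks` installed; `/mnt/efs` is a placeholder for your mount path:

```shell
# Count open files and locks under an EFS mount path and print the
# totals alongside the per-instance and per-mount limits.
MOUNT="/mnt/efs"
open_files="$(lsof +D "$MOUNT" 2>/dev/null | tail -n +2 | wc -l)"
locks="$(lslocks --noheadings 2>/dev/null | grep -c "$MOUNT" || true)"
echo "open files under $MOUNT: $open_files (limit: 65,536 per instance)"
echo "locks under $MOUNT: $locks (limit: 65,536 per mount)"
```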

## Command fails with “Disk quota exceeded” error
<a name="diskquotaerror"></a>

 Amazon EFS doesn't currently support user disk quotas. This error can occur if any of the following limits have been exceeded:
+ Up to 65,536 active users can have files open at the same time. A user account that is logged in multiple times counts as one active user.
+ Up to 65,536 files can be open at once for an instance. Listing directory contents doesn't count as opening a file.
+ Each unique mount on the client can acquire up to a total of 65,536 locks per connection.

**Action to take**  
If you encounter this issue, you can resolve it by identifying which of the preceding limits you are exceeding, and then making changes to meet that limit. For more information, see [Quotas for NFS clients](limits.md#limits-client-specific). To check your current usage, see [Checking for open files and file locks](#check-open-files-locks).

## Command fails with "I/O error"
<a name="ioerror"></a>

This error occurs when you encounter one of the following issues:
+ More than 65,536 active user accounts for each instance have files open at once.

**Action to take**  
If you encounter this issue, you can resolve it by meeting the supported limit of open files on your instances. To do so, reduce the number of active users that have files from your Amazon EFS file system open simultaneously on your instances. To check your current usage, see [Checking for open files and file locks](#check-open-files-locks).
+ The AWS KMS key encrypting your file system was deleted.

**Action to take**  
If you encounter this issue, you can no longer decrypt the data that was encrypted under that key, which means that data becomes unrecoverable.

## Command fails with "File name is too long" error
<a name="filenametoolong"></a>

This error occurs when a file name or a symbolic link (symlink) exceeds the size limits:
+ A file name can be up to 255 bytes long.
+ A symlink can be up to 4,080 bytes in size.

**Action to take**  
If you encounter this issue, you can resolve it by reducing the size of your file name or symlink length to meet the supported limits.
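To locate offending entries before they cause errors, you can scan for over-long names and symlink targets. A sketch using GNU `find`; `/mnt/efs` is a placeholder for your mount path:

```shell
# Path components longer than 255 bytes (the file name limit).
find /mnt/efs -regextype posix-extended -regex '.*/[^/]{256,}' 2>/dev/null || true
# Symbolic links whose target is longer than 4,080 bytes
# (a symlink's reported file size equals the length of its target).
find /mnt/efs -type l -size +4080c 2>/dev/null || true
```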

## Command fails with "File not found" error
<a name="filenotfound"></a>

This error occurs because some older 32-bit versions of Oracle E-Business Suite use 32-bit file I/O interfaces, while Amazon EFS uses 64-bit inode numbers. System calls that can fail include `stat()` and `readdir()`.

**Action to take**  
If you encounter this error, you can resolve it by using the `nfs.enable_ino64=0` kernel boot option. This option compresses the 64-bit EFS inode numbers to 32 bits. Kernel boot options are handled differently for different Linux distributions. On Amazon Linux, turn on this option by adding `nfs.enable_ino64=0` to the `GRUB_CMDLINE_LINUX_DEFAULT` variable in `/etc/default/grub`. Consult your distribution's documentation for how to turn on kernel boot options.
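The edit to `/etc/default/grub` can be scripted. A sketch shown against a demo copy of the file, assuming a GRUB 2 setup where `GRUB_CMDLINE_LINUX_DEFAULT` is defined; the file location and regeneration command vary by distribution:

```shell
# Sketch: append nfs.enable_ino64=0 to the kernel command line.
# Shown here against a demo copy; on a real system edit
# /etc/default/grub as root, then regenerate the GRUB config (for
# example, grub2-mkconfig -o /boot/grub2/grub.cfg) and reboot.
grub_file="/tmp/grub-demo"
echo 'GRUB_CMDLINE_LINUX_DEFAULT="console=tty0"' > "$grub_file"
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 nfs.enable_ino64=0"/' "$grub_file"
cat "$grub_file"
```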

## Command fails with "Too many links" error
<a name="hardlinkerror"></a>

This error occurs when there are too many hard links to a file. A file can have up to 177 hard links.

**Action to take**  
If you encounter this issue, you can resolve it by reducing the number of hard links to a file to meet the supported limit.
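A quick way to find affected files is `find`'s `-links` test. A sketch; `/mnt/efs` is a placeholder for your mount path:

```shell
# List files that already have more than 177 hard links and so
# cannot accept new links on EFS.
find /mnt/efs -type f -links +177 2>/dev/null || true
```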

## Command fails with "File too large" error
<a name="filesizeerror"></a>

This error occurs when a file is too large. A single file can be up to 52,673,613,135,872 bytes (47.9 TiB) in size.

**Action to take**  
If you encounter this issue, you can resolve it by reducing the size of a file to meet the supported limit.
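When migrating data into EFS, you can scan the source file system ahead of time for files that already exceed the limit. A sketch; `/data` is a placeholder for your source path:

```shell
# Report source files larger than the 52,673,613,135,872-byte
# (47.9 TiB) EFS single-file limit before copying them to EFS.
find /data -type f -size +52673613135872c 2>/dev/null || true
```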