Service, account, and table quotas in Amazon DynamoDB
This section describes current quotas, formerly referred to as limits, within Amazon DynamoDB. Each quota applies on a per-Region basis unless otherwise specified.
Topics
- Read/write capacity mode and throughput
- Reserved Capacity
- Import quotas
- Contributor Insights
- Tables
- Global tables
- Secondary indexes
- Partition keys and sort keys
- Naming rules
- Data types
- Items
- Attributes
- Expression parameters
- DynamoDB transactions
- DynamoDB Streams
- DynamoDB Accelerator (DAX)
- API-specific limits
- DynamoDB encryption at rest
- Table export to Amazon S3
- Backup and restore
Read/write capacity mode and throughput
You can switch tables from on-demand mode to provisioned capacity mode at any time. When you do multiple switches between capacity modes, the following conditions apply:
- You can switch a newly created table in on-demand mode to provisioned capacity mode at any time. However, you can only switch it back to on-demand mode 24 hours after the table's creation timestamp.
- You can switch an existing table in on-demand mode to provisioned capacity mode at any time. However, you can only switch it back to on-demand mode 24 hours after the last timestamp indicating a switch to on-demand.
For more information about switching between read and write capacity modes, see Considerations when switching capacity modes in DynamoDB.
Capacity unit sizes (for provisioned tables)
One read capacity unit = one strongly consistent read per second, or two eventually consistent reads per second, for items up to 4 KB in size.
One write capacity unit = one write per second, for items up to 1 KB in size.
Transactional read requests require two read capacity units to perform one read per second for items up to 4 KB.
Transactional write requests require two write capacity units to perform one write per second for items up to 1 KB.
Request unit sizes (for on-demand tables)
One read request unit = one strongly consistent read per second, or two eventually consistent reads per second, for items up to 4 KB in size.
One write request unit = one write per second, for items up to 1 KB in size.
Transactional read requests require two read request units to perform one read per second for items up to 4 KB.
Transactional write requests require two write request units to perform one write per second for items up to 1 KB.
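The unit rules above can be sketched as a small helper. This is an illustrative calculation only (the function names are not part of any AWS SDK), with strongly consistent reads as the baseline:

```python
import math

def read_units(item_size_bytes, consistency="strong", transactional=False):
    # Reads are metered in 4 KB increments, rounded up.
    units = math.ceil(item_size_bytes / 4096)
    if transactional:
        return units * 2      # transactional reads cost double
    if consistency == "eventual":
        return units * 0.5    # eventually consistent reads cost half
    return units

def write_units(item_size_bytes, transactional=False):
    # Writes are metered in 1 KB increments, rounded up.
    units = math.ceil(item_size_bytes / 1024)
    return units * 2 if transactional else units

# An 8 KB item costs 2 units per strongly consistent read,
# 1 per eventually consistent read, and 4 per transactional read.
print(read_units(8192))                       # 2
print(read_units(8192, "eventual"))           # 1.0
print(read_units(8192, transactional=True))   # 4
print(write_units(3000))                      # 3
```

The same arithmetic applies to request units for on-demand tables.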
Throughput default quotas
AWS places some default quotas on the throughput that your account can provision and consume within a Region.
The account-level read throughput and account-level write throughput quotas apply to the sum of the provisioned throughput capacity for all of your account's tables and global secondary indexes in a given Region. All of the account's available throughput can be provisioned for a single table or spread across multiple tables. These quotas apply only to tables using provisioned capacity mode.
The table-level read throughput and table-level write throughput quotas apply differently to tables that use provisioned capacity mode and tables that use on-demand capacity mode.
For provisioned capacity mode tables and GSIs, the table-level quota is the maximum number of read and write capacity units that can be provisioned for any individual table or any of its GSIs in the Region. In addition, the total provisioned capacity of any individual table plus all of its GSIs, and of all tables and their GSIs combined, must remain below the account-level read and write throughput quotas.
For on-demand capacity mode tables and GSIs, the table-level quota is the maximum number of read and write request units available to any table, or to any individual GSI within that table. No account-level read and write throughput quotas apply to tables in on-demand mode.
The following are the throughput quotas that apply to your account by default.
Throughput quota name | On-Demand | Provisioned | Adjustable |
---|---|---|---|
Account-level read throughput | Not applicable | 80,000 read capacity units | Yes |
Account-level write throughput | Not applicable | 80,000 write capacity units | Yes |
Table-level read throughput | 40,000 read request units | 40,000 read capacity units | Yes |
Table-level write throughput | 40,000 write request units | 40,000 write capacity units | Yes |
You can use the Service Quotas console to view these quotas and request increases for the adjustable ones.
For your account-level throughput quotas, you can monitor current usage with the AccountProvisionedReadCapacityUnits and AccountProvisionedWriteCapacityUnits AWS usage metrics. To learn more about usage metrics, see AWS usage metrics.
Increasing or decreasing throughput (for provisioned tables)
Increasing provisioned throughput
You can increase ReadCapacityUnits or WriteCapacityUnits as often as necessary, using the AWS Management Console or the UpdateTable operation. In a single call, you can increase the provisioned throughput for a table, for any global secondary indexes on that table, or for any combination of these. The new settings do not take effect until the UpdateTable operation is complete.
You can't exceed your per-account quotas when you add provisioned capacity, and DynamoDB doesn't allow you to increase provisioned capacity very rapidly. Aside from these restrictions, you can increase the provisioned capacity for your tables as high as you need. For more information about per-account quotas, see the preceding section, Throughput default quotas.
Decreasing provisioned throughput
For every table and global secondary index in an UpdateTable operation, you can decrease ReadCapacityUnits or WriteCapacityUnits (or both). The new settings don't take effect until the UpdateTable operation is complete.
There is a default quota on the number of provisioned capacity decreases you can perform on your DynamoDB table per day. A day is defined according to Universal Time Coordinated (UTC). On a given day, you can start by performing up to four decreases within one hour, as long as you have not performed any other decreases yet that day. Subsequently, you can perform one additional decrease per hour (once every 60 minutes). This effectively brings the maximum number of decreases in a day to 27.
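The arithmetic behind that ceiling is simply the four first-hour decreases plus one for each remaining hour of the UTC day:

```python
# 4 decreases in the first hour, then 1 per hour for the
# remaining 23 hours of the UTC day: 4 + 23 = 27.
first_hour_decreases = 4
remaining_hourly_decreases = 23
max_decreases_per_day = first_hour_decreases + remaining_hourly_decreases
print(max_decreases_per_day)  # 27
```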
You can use the Service Quotas console to view the current value of this quota.
Important
Table and global secondary index decrease limits are decoupled, so any global secondary indexes for a particular table have their own decrease limits. However, if a single request decreases the throughput for a table and a global secondary index, it is rejected if either exceeds the current limits. Requests are not partially processed.
Example
In the first 4 hours of a day, a table with a global secondary index can be modified as follows:
- Decrease the table's WriteCapacityUnits or ReadCapacityUnits (or both) four times.
- Decrease the WriteCapacityUnits or ReadCapacityUnits (or both) of the global secondary index four times.
At the end of that same day, the table and the global secondary index throughput can potentially be decreased a total of 27 times each.
Reserved Capacity
AWS places a default quota on the amount of active reserved capacity that your account can purchase. The quota limit is a combination of reserved capacity for write capacity units (WCUs) and read capacity units (RCUs).
Reserved capacity quota | Active reserved capacity | Adjustable |
---|---|---|
Per account | 1,000,000 provisioned capacity units (WCUs + RCUs) | Yes |
If you attempt to purchase more than 1,000,000 provisioned capacity units in a single purchase, you will receive an error for this service quota limit. If you have active reserved capacity and attempt to purchase additional reserved capacity that would result in more than 1,000,000 active provisioned capacity units, you will receive an error for this service quota limit.
If you need reserved capacity for more than 1,000,000 provisioned capacity units, you can request a quota increase by submitting a request to AWS Support.
Import quotas
DynamoDB import from Amazon S3 can support up to 50 concurrent import jobs with a total import source object size of 15TB at a time in the us-east-1, us-west-2, and eu-west-1 Regions. In all other Regions, up to 50 concurrent import tasks with a total size of 1TB are supported. Each import job can include up to 50,000 Amazon S3 objects in all Regions. For more information on import and validation, see import format quotas and validation.
Contributor Insights
When you enable Contributor Insights on your DynamoDB table, you're still subject to Contributor Insights rule limits. For more information, see CloudWatch service quotas.
Tables
Table size
There is no practical limit on a table's size. Tables are unconstrained in terms of the number of items or the number of bytes.
Maximum number of tables per account per region
For any AWS account, there is an initial quota of 2,500 tables per AWS Region.
If you need more than 2,500 tables for a single account, reach out to your AWS account team to explore an increase up to a maximum of 10,000 tables. For more than 10,000 tables, the recommended best practice is to set up multiple accounts, each of which can serve up to 10,000 tables.
You can use the Service Quotas console to request an increase for this quota. You can monitor your current table count with the TableCount AWS usage metric. To learn more about usage metrics, see AWS usage metrics.
Global tables
AWS places some default quotas on the throughput you can provision or utilize when using global tables.
Transactional operations provide atomicity, consistency, isolation, and durability (ACID) guarantees only within the AWS Region where the write is made originally. Transactions are not supported across Regions in global tables. For example, suppose that you have a global table with replicas in the US East (Ohio) and US West (Oregon) Regions and you perform a TransactWriteItems operation in the US East (Ohio) Region. In this case, you might observe partially completed transactions in the US West (Oregon) Region as changes are replicated. Changes are replicated to other Regions only after they have been committed in the source Region.
Note
There may be instances where you need to request a quota increase through AWS Support. If any of the following apply to you, please see https://aws.amazon.com/support:
- You are adding a replica for a table that is configured to use more than 40,000 write capacity units (WCU). You must request a service quota increase for your add replica WCU quota.
- You are adding a replica or replicas to one destination Region within a 24-hour period with a combined total greater than 10TB. You must request a service quota increase for your add replica data backfill quota.
- You encounter an error similar to the following: "Cannot create a replica of table 'example_table' in region 'example_region_A' because it exceeds your current account limit in region 'example_region_B'."
Secondary indexes
Secondary indexes per table
You can define a maximum of 5 local secondary indexes.
There is a default quota of 20 global secondary indexes per table. You can use the Service Quotas console to request an increase for this quota.
You can create or delete only one global secondary index per UpdateTable operation.
Projected Secondary Index attributes per table
You can project a total of up to 100 attributes into all of a table's local and global secondary indexes. This only applies to user-specified projected attributes.
In a CreateTable operation, if you specify a ProjectionType of INCLUDE, the total count of attributes specified in NonKeyAttributes, summed across all of the secondary indexes, must not exceed 100. If you project the same attribute name into two different indexes, this counts as two distinct attributes when determining the total.
This limit does not apply for secondary indexes with a ProjectionType of KEYS_ONLY or ALL.
Partition keys and sort keys
Partition key length
The minimum length of a partition key value is 1 byte. The maximum length is 2048 bytes.
Partition key values
There is no practical limit on the number of distinct partition key values, for tables or for secondary indexes.
Sort key length
The minimum length of a sort key value is 1 byte. The maximum length is 1024 bytes.
Sort key values
In general, there is no practical limit on the number of distinct sort key values per partition key value.
The exception is tables with local secondary indexes. An item collection is the set of items that share the same partition key value. In a global secondary index, the item collection is independent of the base table (and can have a different partition key attribute). In a local secondary index, however, the indexed view is colocated in the same partition as the base table item and shares its partition key attribute. As a result of this locality, when a table has one or more LSIs, an item collection cannot be distributed across multiple partitions.
For a table with one or more LSIs, item collections cannot exceed 10GB in size. This includes all base table items and all projected LSI views that share the same partition key value. 10GB is the maximum size of a partition. For more detailed information, see Item collection size limit.
Naming rules
Table names and Secondary Index names
Names for tables and secondary indexes must be at least 3 characters long, but no greater than 255 characters long. The following are the allowed characters:
- A-Z
- a-z
- 0-9
- _ (underscore)
- - (hyphen)
- . (dot)
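These rules can be checked with a single regular expression; the sketch below (the helper name is illustrative) enforces both the allowed character set and the 3-255 length bounds:

```python
import re

# 3 to 255 characters drawn from A-Z, a-z, 0-9, underscore, hyphen, dot.
NAME_PATTERN = re.compile(r"^[A-Za-z0-9_.-]{3,255}$")

def is_valid_name(name: str) -> bool:
    return NAME_PATTERN.fullmatch(name) is not None

print(is_valid_name("Orders-2024.v1"))  # True
print(is_valid_name("ab"))              # False (too short)
print(is_valid_name("bad name"))        # False (space not allowed)
```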
Attribute names
In general, an attribute name must be at least one character long, but no greater than 64 KB long.
The following are the exceptions. These attribute names must be no greater than 255 characters long:
- Secondary index partition key names.
- Secondary index sort key names.
- The names of any user-specified projected attributes (applicable only to local secondary indexes). In a CreateTable operation, if you specify a ProjectionType of INCLUDE, the names of the attributes in the NonKeyAttributes parameter are length-restricted. The KEYS_ONLY and ALL projection types are not affected.
These attribute names must be encoded using UTF-8, and the total size of each name (after encoding) cannot exceed 255 bytes.
Data types
String
The length of a String is constrained by the maximum item size of 400 KB.
Strings are Unicode with UTF-8 binary encoding. Because UTF-8 is a variable width encoding, DynamoDB determines the length of a String using its UTF-8 bytes.
Number
A Number can have up to 38 digits of precision, and can be positive, negative, or zero.
- Positive range: 1E-130 to 9.9999999999999999999999999999999999999E+125
- Negative range: -9.9999999999999999999999999999999999999E+125 to -1E-130
DynamoDB uses JSON strings to represent Number data in requests and replies. For more information, see DynamoDB low-level API.
If number precision is important, you should pass numbers to DynamoDB using strings that you convert from a number type.
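For instance, in Python you might route values through decimal strings instead of binary floats. This sketch shows the general idea rather than DynamoDB-specific SDK code:

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 or 0.2 exactly, so float math
# drifts; converting through strings keeps the decimal digits intact.
exact = Decimal(str(0.1)) + Decimal(str(0.2))
print(exact)       # 0.3
print(0.1 + 0.2)   # 0.30000000000000004
```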
Binary
The length of a Binary is constrained by the maximum item size of 400 KB.
Applications that work with Binary attributes must encode the data in base64 format before sending it to DynamoDB. Upon receipt of the data, DynamoDB decodes it into an unsigned byte array and uses that as the length of the attribute.
Items
Item size
The maximum item size in DynamoDB is 400 KB, which includes both attribute names and attribute values, each measured by its UTF-8 binary length. Attribute names count toward the size limit.
For example, consider an item with two attributes: one attribute named "shirt-color" with value "R" and another attribute named "shirt-size" with value "M". The total size of that item is 23 bytes.
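The 23-byte figure comes from summing the UTF-8 lengths of each name and value. A sketch for the all-string case (the helper is illustrative; numbers, binary data, and nested types are encoded differently):

```python
def string_item_size(item: dict) -> int:
    # Attribute names count toward the limit, so sum the UTF-8
    # byte length of every name and every string value.
    return sum(len(k.encode("utf-8")) + len(v.encode("utf-8"))
               for k, v in item.items())

# "shirt-color" (11) + "R" (1) + "shirt-size" (10) + "M" (1) = 23 bytes
print(string_item_size({"shirt-color": "R", "shirt-size": "M"}))  # 23
```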
Item size for tables with Local Secondary Indexes
For each local secondary index on a table, there is a 400 KB limit on the total of the following:
- The size of an item's data in the table.
- The size of corresponding entries (including key values and projected attributes) in all local secondary indexes.
Attributes
Attribute name-value pairs per item
The cumulative size of attributes per item must fit within the maximum DynamoDB item size (400 KB).
Number of values in list, map, or set
There is no limit on the number of values in a List, a Map, or a Set, as long as the item containing the values fits within the 400 KB item size limit.
Attribute values
Empty String and Binary attribute values are allowed, if the attribute is not used as a key attribute for a table or index. Empty String and Binary values are allowed inside Set, List, and Map types. An attribute value cannot be an empty Set (String Set, Number Set, or Binary Set). However, empty Lists and Maps are allowed.
Nested attribute depth
DynamoDB supports nested attributes up to 32 levels deep.
Expression parameters
Expression parameters include ProjectionExpression, ConditionExpression, UpdateExpression, and FilterExpression.
Lengths
The maximum length of any expression string is 4 KB. For example, the size of the ConditionExpression a=b is 3 bytes.
The maximum length of any single expression attribute name or expression attribute value is 255 bytes. For example, #name is 5 bytes; :val is 4 bytes.
The maximum length of all substitution variables in an expression is 2 MB. This is the sum of the lengths of all ExpressionAttributeNames and ExpressionAttributeValues.
Operators and operands
The maximum number of operators or functions allowed in an UpdateExpression is 300. For example, the UpdateExpression SET a = :val1 + :val2 + :val3 contains two "+" operators.
The maximum number of operands for the IN comparator is 100.
Reserved words
DynamoDB does not prevent you from using names that conflict with reserved words. (For a complete list, see Reserved words in DynamoDB.)
However, if you use a reserved word in an expression parameter, you must also specify ExpressionAttributeNames. For more information, see Expression attribute names (aliases) in DynamoDB.
DynamoDB transactions
DynamoDB transactional API operations have the following constraints:
- A transaction cannot contain more than 100 unique items.
- A transaction cannot contain more than 4 MB of data.
- No two actions in a transaction can work against the same item in the same table. For example, you cannot both ConditionCheck and Update the same item in one transaction.
- A transaction cannot operate on tables in more than one AWS account or Region.
- Transactional operations provide atomicity, consistency, isolation, and durability (ACID) guarantees only within the AWS Region where the write is made originally. Transactions are not supported across Regions in global tables. For example, suppose that you have a global table with replicas in the US East (Ohio) and US West (Oregon) Regions and you perform a TransactWriteItems operation in the US East (Ohio) Region. In this case, you might observe partially completed transactions in the US West (Oregon) Region as changes are replicated. Changes are replicated to other Regions only after they have been committed in the source Region.
DynamoDB Streams
Simultaneous readers of a shard in DynamoDB Streams
For single-Region tables that are not global tables, you can design for up to two processes to read from the same DynamoDB Streams shard at the same time. Exceeding this limit can result in request throttling. For global tables, we recommend you limit the number of simultaneous readers to one to avoid request throttling.
Maximum write capacity for a table with DynamoDB Streams enabled
AWS places some default quotas on the write capacity for DynamoDB tables with DynamoDB Streams enabled. These default quotas are applicable only for tables in provisioned read/write capacity mode. The following are throughput quotas that apply to your account by default.
- US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), South America (São Paulo), Europe (Frankfurt), Europe (Ireland), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), China (Beijing) Regions:
  - Per table – 40,000 write capacity units
- All other Regions:
  - Per table – 10,000 write capacity units
You can use the Service Quotas console to request an increase for these quotas.
Note
The provisioned throughput quotas also apply for DynamoDB tables with DynamoDB Streams enabled. When you request a quota increase on the write capacity for a table with Streams enabled, make sure you also request an increase of the provisioned throughput capacity for this table. For more information, see Throughput Default Quotas. Other quotas also apply when processing higher throughput DynamoDB Streams. For more information, see Amazon DynamoDB Streams API reference guide.
DynamoDB Accelerator (DAX)
AWS Region availability
For a list of AWS Regions in which DAX is available, see DynamoDB Accelerator (DAX) in the AWS General Reference.
Nodes
A DAX cluster consists of exactly one primary node, and between zero and ten read replica nodes.
The total number of nodes (per AWS account) cannot exceed 50 in a single AWS Region.
Parameter groups
You can create up to 20 DAX parameter groups per Region.
Subnet groups
You can create up to 50 DAX subnet groups per Region.
Within a subnet group, you can define up to 20 subnets.
Important
A DAX cluster supports a maximum of 500 DynamoDB tables. Once you go beyond 500 DynamoDB tables, your cluster may experience degradation in availability and performance.
API-specific limits
CreateTable/UpdateTable/DeleteTable/PutResourcePolicy/DeleteResourcePolicy
- In general, you can have up to 500 CreateTable, UpdateTable, DeleteTable, PutResourcePolicy, and DeleteResourcePolicy requests running simultaneously, in any combination. As a result, the total number of tables in the CREATING, UPDATING, or DELETING state cannot exceed 500.
- You can submit up to 2,500 mutable control plane API requests (CreateTable, DeleteTable, UpdateTable, PutResourcePolicy, and DeleteResourcePolicy) per second across a group of tables. However, PutResourcePolicy and DeleteResourcePolicy requests have lower individual limits; see the quota details for those operations.
- CreateTable and PutResourcePolicy requests that include a resource-based policy count as two additional requests for each KB of the policy. For example, a CreateTable or PutResourcePolicy request with a 5 KB policy counts as 11 requests: 1 for the CreateTable request and 10 for the resource-based policy (2 x 5 KB). Similarly, a 20 KB policy counts as 41 requests: 1 for the CreateTable request and 40 for the resource-based policy (2 x 20 KB).
PutResourcePolicy
- You can submit up to 25 PutResourcePolicy API requests per second across a group of tables. After a successful request for an individual table, no new PutResourcePolicy requests are supported for the following 15 seconds.
- The maximum size supported for a resource-based policy document is 20 KB. DynamoDB counts whitespace when calculating the size of a policy against this limit.
DeleteResourcePolicy
- You can submit up to 50 DeleteResourcePolicy API requests per second across a group of tables. After a successful PutResourcePolicy request for an individual table, no DeleteResourcePolicy requests are supported for the following 15 seconds.
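The policy-size accounting for CreateTable and PutResourcePolicy can be sketched as follows (the helper name is illustrative, and rounding partial KB up is an assumption; the documented examples use whole KB):

```python
import math

def control_plane_request_count(policy_size_bytes=0):
    # One request for the call itself, plus two additional requests
    # for each KB of attached resource-based policy.
    policy_kb = math.ceil(policy_size_bytes / 1024)  # assumed rounding
    return 1 + 2 * policy_kb

print(control_plane_request_count(5 * 1024))   # 11
print(control_plane_request_count(20 * 1024))  # 41
print(control_plane_request_count())           # 1 (no policy attached)
```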
BatchGetItem
- A single BatchGetItem operation can retrieve a maximum of 100 items. The total size of all the items retrieved cannot exceed 16 MB.
BatchWriteItem
- A single BatchWriteItem operation can contain up to 25 PutItem or DeleteItem requests. The total size of all the items written cannot exceed 16 MB.
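Because of the 25-request ceiling, clients typically split larger write sets into successive batches. A minimal sketch (illustrative only; a real BatchWriteItem loop must also retry any UnprocessedItems the response returns):

```python
def batched(requests, batch_size=25):
    # Yield successive slices of at most batch_size requests,
    # matching the BatchWriteItem per-call ceiling.
    for i in range(0, len(requests), batch_size):
        yield requests[i:i + batch_size]

sizes = [len(b) for b in batched(list(range(60)))]
print(sizes)  # [25, 25, 10]
```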
DescribeStream
- You can call DescribeStream at a maximum rate of 10 times per second.
DescribeTableReplicaAutoScaling
- The DescribeTableReplicaAutoScaling method supports only 10 requests per second.
DescribeLimits
- DescribeLimits should be called only periodically. You can expect throttling errors if you call it more than once in a minute.
DescribeContributorInsights/ListContributorInsights/UpdateContributorInsights
- DescribeContributorInsights, ListContributorInsights, and UpdateContributorInsights should be called only periodically. DynamoDB supports up to five requests per second for each of these APIs.
DescribeTable/ListTables/GetResourcePolicy
- You can submit up to 2,500 read-only control plane API requests (DescribeTable, ListTables, and GetResourcePolicy) per second, in any combination. The GetResourcePolicy API has a lower individual limit of 100 requests per second.
Query
- The result set from a Query is limited to 1 MB per call. You can use the LastEvaluatedKey from the query response to retrieve more results.
Scan
- The result set from a Scan is limited to 1 MB per call. You can use the LastEvaluatedKey from the scan response to retrieve more results.
UpdateKinesisStreamingDestination
- When performing UpdateKinesisStreamingDestination operations, you can set ApproximateCreationDateTimePrecision to a new value a maximum of 3 times in a 24-hour period.
UpdateTableReplicaAutoScaling
- The UpdateTableReplicaAutoScaling method supports only 10 requests per second.
UpdateTimeToLive
- The UpdateTimeToLive method supports only one request to enable or disable Time to Live (TTL) per specified table per hour. This change can take up to one hour to fully process. Any additional UpdateTimeToLive calls for the same table during this one-hour period result in a ValidationException.
DynamoDB encryption at rest
You can switch between an AWS owned key, an AWS managed key, and a customer managed key up to four times within any 24-hour window, on a per-table basis, starting from when the table was created. If there was no change in the past six hours, an additional change is allowed. This effectively brings the maximum number of changes in a day to eight (four changes in the first six hours, and one change for each of the subsequent six-hour windows in a day).
You can switch encryption keys to use an AWS owned key as often as necessary, even if the above quota has been exhausted.
These are the quotas unless you request a higher amount. To request a service quota increase, see https://aws.amazon.com/support.
Table export to Amazon S3
Full export: you can run up to 300 concurrent export tasks, with a total of up to 100TB across all in-flight table exports. Both of these limits are checked before an export is queued.
Incremental export: you can run up to 300 concurrent jobs, with a total of up to 100TB of table size, using an export period window between 15 minutes and 24 hours.
Backup and restore
When restoring through DynamoDB on-demand backup, you can execute up to 50 concurrent restores that total 50TB. When restoring through AWS Backup, you can execute up to 50 concurrent restores that total 25TB. For more information about backups, see Backup and restore for DynamoDB.