

# DynamoDB throughput capacity
<a name="capacity-mode"></a>

This section provides an overview of the two throughput modes available for your DynamoDB table and considerations for selecting the appropriate capacity mode for your application. A table’s throughput mode determines how the capacity of a table is managed and how you're charged for the read and write operations on your tables. In Amazon DynamoDB, you can choose between **on-demand mode** and **provisioned mode** for your tables to accommodate different workload requirements.

**Topics**
+ [On-demand mode](#capacity-mode-on-demand)
+ [Provisioned mode](#capacity-mode-provisioned)
+ [DynamoDB on-demand capacity mode](on-demand-capacity-mode.md)
+ [DynamoDB provisioned capacity mode](provisioned-capacity-mode.md)
+ [Understanding DynamoDB warm throughput](warm-throughput.md)
+ [DynamoDB burst and adaptive capacity](burst-adaptive-capacity.md)
+ [Considerations when switching capacity modes in DynamoDB](bp-switching-capacity-modes.md)

## On-demand mode
<a name="capacity-mode-on-demand"></a>

Amazon DynamoDB on-demand mode is a serverless throughput option that simplifies database management and automatically scales to support customers' most demanding applications. DynamoDB on-demand enables you to create a table without worrying about capacity planning, monitoring usage, or configuring scaling policies. DynamoDB on-demand offers pay-per-request pricing for read and write requests so that you only pay for what you use. For on-demand mode tables, you don't need to specify how much read and write throughput you expect your application to require. 

On-demand mode is the default and recommended throughput option for most DynamoDB workloads. DynamoDB handles all aspects of throughput management and scaling to support workloads that can start small and scale to millions of requests per second. You can read and write to your DynamoDB tables as needed without managing throughput capacity on the table. For more information, see [DynamoDB on-demand capacity mode](on-demand-capacity-mode.md).

## Provisioned mode
<a name="capacity-mode-provisioned"></a>

In provisioned mode, you must specify the number of reads and writes per second that you require for your application. You'll be charged based on the hourly read and write capacity you have provisioned, not how much of that provisioned capacity you actually consumed. This helps you govern your DynamoDB use to stay at or below a defined request rate in order to obtain cost predictability.

You can choose to use provisioned capacity if you have steady workloads with predictable growth, and if you can reliably forecast capacity requirements for your application. For more information, see [DynamoDB provisioned capacity mode](provisioned-capacity-mode.md).

# DynamoDB on-demand capacity mode
<a name="on-demand-capacity-mode"></a>

Amazon DynamoDB on-demand offers a truly serverless database experience that automatically scales to accommodate the most demanding workloads without capacity planning. On-demand simplifies the setup process, eliminates capacity management and monitoring, and provides fast, automatic scaling. With pay-per-request pricing, you don’t have to worry about idle capacity because you only pay for the throughput you actually use. You are billed per read or write request, so your costs directly reflect your actual usage. 

When you choose on-demand mode, DynamoDB instantly accommodates your workloads as they ramp up or down to any previously reached traffic level. If a workload’s traffic level hits a new peak, DynamoDB automatically scales to accommodate the increased throughput requirements. On-demand mode is the default and recommended throughput option because it simplifies building modern, serverless applications that can start small and scale to millions of requests per second. Once your on-demand table is scaled out, you can instantly achieve the same throughput again in the future without throttling. If you are driving zero traffic to your table, then with on-demand, you are not charged for any throughput. For more information about on-demand mode's scaling properties, see [Initial throughput and scaling properties](#on-demand-capacity-mode-initial). 

Tables that use on-demand mode deliver the same single-digit millisecond latency, service-level agreement (SLA), and security that DynamoDB provisioned mode offers.

**Note**  
By default, DynamoDB protects you from unintended, runaway usage. To scale beyond the 40,000 table-level read and write throughput limits for all tables in your account, you can request an increase for this quota. Throughput requests that exceed the default table throughput quota are throttled. For more information, see [Throughput default quotas](ServiceQuotas.md#default-limits-throughput).

Optionally, you can also configure maximum read or write (or both) throughput per second for individual on-demand tables and global secondary indexes. By configuring throughput, you can keep table-level usage and costs bounded, protect against an inadvertent surge in consumed resources, and prevent excessive use for predictable cost management. Throughput requests that exceed the maximum table throughput are throttled. You can modify the table-specific maximum throughput at any time based on your application requirements. For more information, see [DynamoDB maximum throughput for on-demand tables](on-demand-capacity-mode-max-throughput.md).

To get started, create or update a table to use on-demand mode. For more information, see [Basic operations on DynamoDB tables](WorkingWithTables.Basics.md).

You can switch tables from provisioned capacity mode to on-demand mode up to four times in a 24-hour rolling window. You can switch tables from on-demand mode to provisioned capacity mode at any time. 

For more information about switching between read and write capacity modes, see [Considerations when switching capacity modes in DynamoDB](bp-switching-capacity-modes.md). For on-demand table quotas, see [Read/write throughput](ServiceQuotas.md#default-limits-throughput-capacity-modes).

**Topics**
+ [Read request units and write request units](#read-write-request-units)
+ [Initial throughput and scaling properties](#on-demand-capacity-mode-initial)
+ [DynamoDB maximum throughput for on-demand tables](on-demand-capacity-mode-max-throughput.md)

## Read request units and write request units
<a name="read-write-request-units"></a>

DynamoDB charges you for the reads and writes that your application performs on your tables in terms of *read request units* and *write request units*.

One *read request unit* represents one strongly consistent read operation per second, or two eventually consistent read operations per second, for an item up to 4 KB in size. For more information about DynamoDB read consistency models, see [DynamoDB read consistency](HowItWorks.ReadConsistency.md).

One *write request unit* represents one write operation per second, for an item up to 1 KB in size.

For more information about how read and write units are consumed, see [DynamoDB read and write operations](read-write-operations.md).
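As an illustration of this arithmetic, the following sketch computes the request units a single operation consumes from its item size. The function names are hypothetical; the 4 KB and 1 KB sizes come from the definitions above, and transactional operations (which cost more) are not modeled:

```python
import math

def read_request_units(item_size_kb, strongly_consistent=True):
    """Request units consumed by one read of an item of the given size.

    Each 4 KB (or part thereof) costs 1 read request unit for a strongly
    consistent read, or half that for an eventually consistent read.
    """
    units = math.ceil(item_size_kb / 4)
    return units if strongly_consistent else units / 2

def write_request_units(item_size_kb):
    """Each 1 KB (or part thereof) costs 1 write request unit."""
    return math.ceil(item_size_kb)

# A strongly consistent read of a 6 KB item consumes 2 read request units;
# an eventually consistent read of the same item consumes 1.
print(read_request_units(6))         # 2
print(read_request_units(6, False))  # 1.0
print(write_request_units(2.5))      # 3
```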

## Initial throughput and scaling properties
<a name="on-demand-capacity-mode-initial"></a>

DynamoDB tables using on-demand capacity mode automatically adapt to your application’s traffic volume. New on-demand tables can sustain up to 4,000 writes per second and 12,000 reads per second. On-demand capacity mode instantly accommodates up to double the previous peak traffic on a table. For example, say that your application’s traffic pattern varies between 25,000 and 50,000 strongly consistent reads per second. 50,000 reads per second is the previous traffic peak. On-demand capacity mode instantly accommodates sustained traffic of up to 100,000 reads per second. If your application sustains traffic of 100,000 reads per second, that peak becomes your new previous peak. This previous peak enables subsequent traffic to reach up to 200,000 reads per second.

If your workload generates more than double your previous peak on a table, DynamoDB automatically allocates more capacity as your traffic volume increases. This capacity allocation helps ensure that your workload doesn't experience throttling. However, throttling can occur if you exceed double your previous peak within 30 minutes. For example, say that your application’s traffic pattern varies between 25,000 and 50,000 strongly consistent reads per second. 50,000 reads per second is the previously reached traffic peak. We recommend that you either pre-warm your table or space your traffic growth over at least 30 minutes before driving more than 100,000 reads per second. For more information about pre-warming, see [Understanding DynamoDB warm throughput](warm-throughput.md).

DynamoDB doesn’t apply the 30-minute throttling restriction if your workload’s peak traffic remains within double your previous peak. If your peak traffic will exceed double the previous peak, make sure that this growth occurs at least 30 minutes after you last reached that peak.
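The previous-peak behavior described above can be modeled with a trivial sketch (illustrative only, using the numbers from the example in this section):

```python
def instant_capacity(previous_peak):
    """On-demand mode instantly accommodates up to double the previous peak."""
    return 2 * previous_peak

# With a previous peak of 50,000 reads/sec, sustained traffic up to
# 100,000 reads/sec is accommodated without pre-warming. Sustaining
# 100,000 reads/sec establishes a new previous peak, which in turn
# allows traffic up to 200,000 reads/sec.
previous_peak = 50_000
limit = instant_capacity(previous_peak)
print(limit)                    # 100000
print(instant_capacity(limit))  # 200000
```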

# DynamoDB maximum throughput for on-demand tables
<a name="on-demand-capacity-mode-max-throughput"></a>

For on-demand tables, you can optionally specify maximum read or write (or both) throughput per second on individual tables and associated global secondary indexes (GSIs). Specifying a maximum on-demand throughput helps keep table-level usage and costs bounded. By default, maximum throughput settings don’t apply, and your on-demand throughput rate is bounded by the 40,000 table-level read and write throughput [AWS service quota](ServiceQuotas.md#default-limits-throughput) that applies to all tables in the account. If needed, you can request an increase to your service quota.

When you configure maximum throughput for an on-demand table, throughput requests that exceed the maximum amount specified will be throttled. You can modify the table-level throughput settings any time based on your application requirements.

The following are some common use cases that can benefit from using maximum throughput for on-demand tables:
+ **Throughput cost optimization** – Using maximum throughput for on-demand tables provides an additional layer of cost predictability and manageability. Additionally, it offers greater flexibility to use on-demand mode to support workloads with differing traffic patterns and budget.
+ **Protection against excessive usage** – By setting maximum throughput, you can prevent an accidental surge in read or write consumption, which might arise from non-optimized code or rogue processes, against an on-demand table. This table-level setting can protect organizations from consuming excessive resources within a certain time frame.
+ **Safeguarding downstream services** – A customer application can include serverless and non-serverless technologies. The serverless piece of the architecture can scale rapidly to match demand, but downstream components with fixed capacities could be overwhelmed. Implementing maximum throughput settings for on-demand tables can prevent a large volume of events from propagating to multiple downstream components with unexpected side effects.

You can configure maximum throughput for on-demand mode for new and existing single-Region tables and global tables and GSIs. You can also configure maximum throughput during table restore and data import from Amazon S3 workflows.

You can specify maximum throughput settings for an on-demand table using the [DynamoDB console](https://console.aws.amazon.com/dynamodb/), [AWS CLI](AccessingDynamoDB.md#Tools.CLI), [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html), or [DynamoDB API](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/Welcome.html).

**Note**  
The maximum throughput for an on-demand table is applied on a best-effort basis and should be treated as a target rather than a guaranteed request ceiling. Your workload might temporarily exceed the specified maximum throughput because of [*burst capacity*](burst-adaptive-capacity.md#burst-capacity). In some cases, DynamoDB uses burst capacity to accommodate reads or writes in excess of your table's maximum throughput settings. With burst capacity, unexpected read or write requests can succeed where they would otherwise be throttled.

**Topics**
+ [Considerations when using maximum throughput for on-demand mode](#consideration-use-max-throughput-ondemand)
+ [Request throttling and CloudWatch metrics](#max-throughput-ondemand-request-throttle)

## Considerations when using maximum throughput for on-demand mode
<a name="consideration-use-max-throughput-ondemand"></a>

When you use maximum throughput for tables in on-demand mode, the following considerations apply:
+ You can independently set maximum throughput for reads and writes on any on-demand table, or on an individual global secondary index within that table, to fine-tune your approach based on specific requirements.
+ You can use Amazon CloudWatch to monitor and understand DynamoDB table-level usage metrics and to determine appropriate maximum throughput settings for on-demand mode. For more information, see [DynamoDB Metrics and dimensions](metrics-dimensions.md).
+ When you specify the maximum read or write (or both) throughput settings on one global table replica, the same maximum throughput settings are automatically applied to all replica tables. It's important that the replica tables and secondary indexes in a global table have identical write throughput settings to ensure proper replication of data. For more information, see [Best practices for global tables](globaltables-bestpractices.md).
+ The smallest maximum read or write throughput that you can specify is one request unit per second.
+ The maximum throughput you specify must be lower than the default throughput quota that is available for any on-demand table, or individual global secondary index within that table.
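The last two constraints in this list can be captured in a small validation helper. This is an illustrative sketch, not part of any AWS SDK; the 40,000 figure is the default table-level quota mentioned in this section:

```python
DEFAULT_TABLE_QUOTA = 40_000  # default table-level read/write throughput quota

def validate_max_throughput(max_read=None, max_write=None,
                            table_quota=DEFAULT_TABLE_QUOTA):
    """Check proposed on-demand maximum throughput settings.

    Reads and writes can be configured independently, so either value
    may be omitted.
    """
    for name, value in (("max_read", max_read), ("max_write", max_write)):
        if value is None:
            continue
        if value < 1:
            raise ValueError(f"{name} must be at least 1 request unit per second")
        if value >= table_quota:
            raise ValueError(f"{name} must be lower than the table quota ({table_quota})")
    return True

print(validate_max_throughput(max_read=1000, max_write=500))  # True
```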

## Request throttling and CloudWatch metrics
<a name="max-throughput-ondemand-request-throttle"></a>

If your application exceeds the maximum read or write throughput you've set on your on-demand table, DynamoDB begins to throttle those requests. When DynamoDB throttles a read or write, it returns a `ThrottlingException` to the caller. You can then take appropriate action, if required. For example, you can increase or disable the maximum table throughput setting, or wait for a short interval before retrying the request.
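One common way to "wait for a short interval before retrying" is exponential backoff with jitter. The sketch below is illustrative; the AWS SDKs already implement retry logic for you, and `do_request` here is a placeholder for any DynamoDB call, with a stand-in exception class rather than the SDK's own:

```python
import random
import time

class ThrottlingException(Exception):
    """Stand-in for the SDK's throttling error."""

def call_with_backoff(do_request, max_attempts=5, base_delay=0.05):
    """Retry a throttled request with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return do_request()
        except ThrottlingException:
            if attempt == max_attempts - 1:
                raise  # retries exhausted; surface the error
            # Sleep between 0 and base_delay * 2^attempt seconds.
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))

# Example: a request that is throttled twice before succeeding.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ThrottlingException()
    return "ok"

print(call_with_backoff(flaky))  # ok
```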

To simplify monitoring the maximum throughput configured for a table or global secondary index, CloudWatch provides the following metrics: [OnDemandMaxReadRequestUnits](metrics-dimensions.md#OnDemandMaxReadRequestUnits) and [OnDemandMaxWriteRequestUnits](metrics-dimensions.md#OnDemandMaxWriteRequestUnits).

# DynamoDB provisioned capacity mode
<a name="provisioned-capacity-mode"></a>

When you create a new provisioned table in DynamoDB, you must specify its *provisioned throughput capacity*. This is the amount of read and write throughput that the table can support. You'll be charged based on the hourly read and write capacity you have provisioned, not how much of that provisioned capacity you actually consumed.

As your application's data and access requirements change, you might need to adjust your table's throughput settings. You can use auto scaling to adjust your table’s provisioned capacity automatically in response to traffic changes. DynamoDB auto scaling uses a [scaling policy](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking.html) in [Application Auto Scaling](https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html). To configure auto scaling in DynamoDB, you set the minimum and maximum levels of read and write capacity in addition to the target utilization percentage. Application Auto Scaling creates and manages the CloudWatch alarms that trigger scaling events when the metric deviates from the target. Auto scaling monitors your table’s activity and adjusts its capacity settings up or down based on preconfigured thresholds. Auto scaling triggers when your consumed capacity breaches the configured target utilization for two consecutive minutes. CloudWatch alarms might have a short delay of up to a few minutes before triggering auto scaling. For more information, see [Managing throughput capacity automatically with DynamoDB auto scaling](AutoScaling.md).

If you're using DynamoDB auto scaling, the throughput settings are automatically adjusted in response to actual workloads. You can also use the [UpdateTable](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html) operation to manually adjust your table's throughput capacity. For example, you might decide to do this if you need to bulk-load data from an existing data store into your new DynamoDB table. You could create the table with a large write throughput setting and then reduce this setting after the bulk data load is complete.

**Note**  
By default, DynamoDB protects you from unintended, runaway usage. To scale beyond the 40,000 table-level read and write throughput limits for all tables in your account, you can request an increase for this quota. Throughput requests that exceed the default table throughput quota are throttled. For more information, see [Throughput default quotas](ServiceQuotas.md#default-limits-throughput).

You can switch tables from provisioned capacity mode to on-demand mode up to four times in a 24-hour rolling window. You can switch tables from on-demand mode to provisioned capacity mode at any time. 

For more information about switching between read and write capacity modes, see [Considerations when switching capacity modes in DynamoDB](bp-switching-capacity-modes.md).

**Topics**
+ [Read capacity units and write capacity units](#read-write-capacity-units)
+ [Choosing initial throughput settings](#choosing-initial-throughput)
+ [DynamoDB auto scaling](#ddb-autoscaling)
+ [Managing throughput capacity automatically with DynamoDB auto scaling](AutoScaling.md)
+ [DynamoDB reserved capacity](reserved-capacity.md)

## Read capacity units and write capacity units
<a name="read-write-capacity-units"></a>

For provisioned mode tables, you specify throughput requirements in terms of *capacity units*. These units represent the amount of data your application needs to read or write per second. You can modify these settings later, if needed, or enable DynamoDB auto scaling to modify them automatically.

For an item up to 4 KB, one *read capacity unit* (RCU) represents one strongly consistent read operation per second, or two eventually consistent read operations per second. For more information about DynamoDB read consistency models, see [DynamoDB read consistency](HowItWorks.ReadConsistency.md).

A *write capacity unit* (WCU) represents one write per second for an item up to 1 KB. For more information about the different read and write operations, see [DynamoDB read and write operations](read-write-operations.md).

## Choosing initial throughput settings
<a name="choosing-initial-throughput"></a>

Every application has different requirements for reading from and writing to a database. When you're determining the initial throughput settings for a DynamoDB table, consider the following:
+ **Expected read and write request rates** — You should estimate the number of reads and writes you need to perform per second.
+ **Item sizes** — Some items are small enough that they can be read or written using a single capacity unit. Larger items require multiple capacity units. By estimating the average size of the items that will be in your table, you can specify accurate settings for your table's provisioned throughput.
+ **Read consistency requirements** — Read capacity units are based on strongly consistent read operations, which consume twice as many database resources as eventually consistent reads. You should determine whether your application requires strongly consistent reads, or whether it can relax this requirement and perform eventually consistent reads instead. Read operations in DynamoDB are eventually consistent, by default. You can request strongly consistent reads for these operations, if necessary.

For example, say that you want to read 80 items per second from a table. The size of these items is 3 KB, and you want strongly consistent reads. In this case, each read requires one provisioned read capacity unit. To determine this number, divide the item size of the operation by 4 KB. Then, round up to the nearest whole number, as shown in the following example:
+ 3 KB / 4 KB = 0.75 or **1** read capacity unit

Therefore, to read 80 items per second from a table, set the table's provisioned read throughput to 80 read capacity units as shown in the following example:
+ 1 read capacity unit per item × 80 reads per second = **80** read capacity units

Now suppose that you want to write 100 items per second to your table and that the size of each item is 512 bytes. In this case, each write requires one provisioned write capacity unit. To determine this number, divide the item size of the operation by 1 KB. Then, round up to the nearest whole number, as shown in the following example:
+ 512 bytes / 1 KB = 0.5 or **1** write capacity unit

To write 100 items per second to your table, set the table's provisioned write throughput to 100 write capacity units:
+ 1 write capacity unit per item × 100 writes per second = **100** write capacity units
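The two worked examples above can be reproduced with a short helper that rounds item size up to the relevant capacity-unit boundary. This is a sketch of the arithmetic in this section, not an official sizing tool:

```python
import math

def required_rcus(items_per_second, item_size_kb, strongly_consistent=True):
    """Read capacity units needed to read items_per_second items of a given size."""
    per_item = math.ceil(item_size_kb / 4)  # 1 RCU per 4 KB, rounded up
    if not strongly_consistent:
        per_item /= 2  # eventually consistent reads cost half
    return math.ceil(per_item * items_per_second)

def required_wcus(items_per_second, item_size_kb):
    """Write capacity units needed to write items_per_second items of a given size."""
    return math.ceil(item_size_kb) * items_per_second  # 1 WCU per 1 KB, rounded up

print(required_rcus(80, 3))         # 80 (strongly consistent reads)
print(required_rcus(80, 3, False))  # 40 (eventually consistent reads)
print(required_wcus(100, 0.5))      # 100
```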

## DynamoDB auto scaling
<a name="ddb-autoscaling"></a>

DynamoDB auto scaling actively manages provisioned throughput capacity for tables and global secondary indexes. With auto scaling, you define a range (upper and lower limits) for read and write capacity units. You also define a target utilization percentage within that range. DynamoDB auto scaling seeks to maintain your target utilization, even as your application workload increases or decreases.

With DynamoDB auto scaling, a table or a global secondary index can increase its provisioned read and write capacity to handle sudden increases in traffic, without request throttling. When the workload decreases, DynamoDB auto scaling can decrease the throughput so that you don't pay for unused provisioned capacity.

**Note**  
If you use the AWS Management Console to create a table or a global secondary index, DynamoDB auto scaling is enabled by default.  
You can manage auto scaling settings at any time by using the console, the AWS CLI, or one of the AWS SDKs. For more information, see [Managing throughput capacity automatically with DynamoDB auto scaling](AutoScaling.md).

### Utilization rate
<a name="ddb-autoscaling-utilization-rate"></a>

Utilization rate can help you determine if you’re over-provisioning capacity, in which case you should reduce your table capacity to save costs. Conversely, it can also help you determine if you’re under-provisioning capacity. In this case, you should increase table capacity to prevent potential throttling of requests during unexpectedly high traffic. For more information, see [Amazon DynamoDB auto scaling: Performance and cost optimization at any scale](https://aws.amazon.com/blogs/database/amazon-dynamodb-auto-scaling-performance-and-cost-optimization-at-any-scale/).

If you’re using DynamoDB auto scaling, you’ll also need to set a target utilization percentage. Auto scaling will use this percentage as a target to adjust capacity upward or downward. We recommend setting target utilization to 70%. For more information, see [Managing throughput capacity automatically with DynamoDB auto scaling](AutoScaling.md).

# Managing throughput capacity automatically with DynamoDB auto scaling
<a name="AutoScaling"></a>

Many database workloads are cyclical in nature, while others are difficult to predict in advance. For one example, consider a social networking app where most of the users are active during daytime hours. The database must be able to handle the daytime activity, but there's no need for the same levels of throughput at night. For another example, consider a new mobile gaming app that is experiencing unexpectedly rapid adoption. If the game becomes too popular, it could exceed the available database resources, resulting in slow performance and unhappy customers. These kinds of workloads often require manual intervention to scale database resources up or down in response to varying usage levels.

Amazon DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. This enables a table or a global secondary index (GSI) to increase its provisioned read and write capacity to handle sudden increases in traffic, without throttling. When the workload decreases, Application Auto Scaling decreases the throughput so that you don't pay for unused provisioned capacity.

**Note**  
If you use the AWS Management Console to create a table or a global secondary index, DynamoDB auto scaling is enabled by default. You can modify your auto scaling settings at any time. For more information, see [Using the AWS Management Console with DynamoDB auto scaling](AutoScaling.Console.md).  
When you delete a table or global table replica, any associated scalable targets, scaling policies, or CloudWatch alarms are not automatically deleted with it.

With Application Auto Scaling, you create a *scaling policy* for a table or a global secondary index. The scaling policy specifies whether you want to scale read capacity or write capacity (or both), and the minimum and maximum provisioned capacity unit settings for the table or index.

The scaling policy also contains a *target utilization*—the percentage of consumed provisioned throughput at a point in time. Application Auto Scaling uses a *target tracking* algorithm to adjust the provisioned throughput of the table (or index) upward or downward in response to actual workloads, so that the actual capacity utilization remains at or near your target utilization.

DynamoDB outputs consumed provisioned throughput for one-minute periods. Auto scaling triggers when your consumed capacity breaches the configured target utilization for two consecutive minutes. CloudWatch alarms might have a short delay of up to a few minutes before triggering auto scaling. This delay ensures accurate CloudWatch metric evaluation. If the consumed throughput spikes are more than a minute apart, auto scaling might not trigger. Similarly, a scale-down event can occur when 15 consecutive data points are lower than the target utilization. In either case, after auto scaling triggers, the [UpdateTable](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html) API is invoked. It then takes several minutes to update the provisioned capacity for the table or index. During this period, any requests that exceed the previous provisioned capacity of the table are throttled.
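The trigger conditions above can be sketched as a simplified model of the alarm evaluation. The two-data-point and 15-data-point thresholds come from this section; the function itself is illustrative, not the actual CloudWatch implementation:

```python
def scaling_action(utilization_history, target=70.0):
    """Decide a scaling action from per-minute utilization percentages.

    Scale up after 2 consecutive data points above target; scale down
    after 15 consecutive data points below target.
    """
    if len(utilization_history) >= 2 and all(u > target for u in utilization_history[-2:]):
        return "scale-up"
    if len(utilization_history) >= 15 and all(u < target for u in utilization_history[-15:]):
        return "scale-down"
    return "none"

print(scaling_action([65, 85, 92]))   # scale-up
print(scaling_action([40] * 15))      # scale-down
print(scaling_action([65, 85, 60]))   # none (spikes more than a minute apart)
```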

**Important**  
You can't adjust the number of data points that must breach the threshold to trigger the underlying alarm (though the current number could change in the future).

 You can set the auto scaling target utilization values between 20 and 90 percent for your read and write capacity. 

**Note**  
In addition to tables, DynamoDB auto scaling also supports global secondary indexes. Every global secondary index has its own provisioned throughput capacity, separate from that of its base table. When you create a scaling policy for a global secondary index, Application Auto Scaling adjusts the provisioned throughput settings for the index to ensure that its actual utilization stays at or near your desired utilization ratio.

## How DynamoDB auto scaling works
<a name="AutoScaling.HowItWorks"></a>

**Note**  
To get started quickly with DynamoDB auto scaling, see [Using the AWS Management Console with DynamoDB auto scaling](AutoScaling.Console.md).

The following diagram provides a high-level overview of how DynamoDB auto scaling manages throughput capacity for a table.

![\[DynamoDB auto scaling adjusts a table’s throughput capacity to meet demand.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/auto-scaling.png)


The following steps summarize the auto scaling process as shown in the previous diagram:

1. You create an Application Auto Scaling policy for your DynamoDB table.

1. DynamoDB publishes consumed capacity metrics to Amazon CloudWatch. 

1. If the table's consumed capacity exceeds your target utilization (or falls below the target) for a specific length of time, Amazon CloudWatch triggers an alarm. You can view the alarm on the console and receive notifications using Amazon Simple Notification Service (Amazon SNS).

1. The CloudWatch alarm invokes Application Auto Scaling to evaluate your scaling policy.

1. Application Auto Scaling issues an `UpdateTable` request to adjust your table's provisioned throughput.

1. DynamoDB processes the `UpdateTable` request, dynamically increasing (or decreasing) the table's provisioned throughput capacity so that it approaches your target utilization.

To understand how DynamoDB auto scaling works, suppose that you have a table named `ProductCatalog`. The table is bulk-loaded with data infrequently, so it doesn't incur very much write activity. However, it does experience a high degree of read activity, which varies over time. By monitoring the Amazon CloudWatch metrics for `ProductCatalog`, you determine that the table requires 1,200 read capacity units (to avoid DynamoDB throttling read requests when activity is at its peak). You also determine that `ProductCatalog` requires 150 read capacity units at a minimum, when read traffic is at its lowest point. For more information about preventing throttling, see [Troubleshooting throttling in Amazon DynamoDB](TroubleshootingThrottling.md).

Within the range of 150 to 1,200 read capacity units, you decide that a target utilization of 70 percent would be appropriate for the `ProductCatalog` table. *Target utilization* is the ratio of consumed capacity units to provisioned capacity units, expressed as a percentage. Application Auto Scaling uses its target tracking algorithm to ensure that the provisioned read capacity of `ProductCatalog` is adjusted as required so that utilization remains at or near 70 percent.
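Given that definition, the provisioned capacity that target tracking aims for is roughly the consumed capacity divided by the target utilization, clamped to the configured range. The following sketch uses the `ProductCatalog` numbers from this example; it is an illustrative model, not the exact Application Auto Scaling algorithm:

```python
import math

def desired_capacity(consumed, target_percent=70, minimum=150, maximum=1200):
    """Provisioned capacity that would put utilization at the target."""
    desired = math.ceil(consumed * 100 / target_percent)
    return max(minimum, min(maximum, desired))

# At peak, 840 consumed RCUs at a 70% target calls for 1,200 provisioned RCUs.
print(desired_capacity(840))  # 1200
# At the low point, capacity never drops below the 150 RCU floor.
print(desired_capacity(40))   # 150
```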

**Note**  
DynamoDB auto scaling modifies provisioned throughput settings only when the actual workload stays elevated or depressed for a sustained period of several minutes. The Application Auto Scaling target tracking algorithm seeks to keep the target utilization at or near your chosen value over the long term.  
Sudden, short-duration spikes of activity are accommodated by the table's built-in burst capacity. For more information, see [Burst capacity](burst-adaptive-capacity.md#burst-capacity).

To enable DynamoDB auto scaling for the `ProductCatalog` table, you create a scaling policy. This policy specifies the following:
+ The table or global secondary index that you want to manage
+ Which capacity type to manage (read capacity or write capacity)
+ The upper and lower boundaries for the provisioned throughput settings
+ Your target utilization

When you create a scaling policy, Application Auto Scaling creates a pair of Amazon CloudWatch alarms on your behalf. This pair represents the upper and lower boundaries for your provisioned throughput settings. These CloudWatch alarms are triggered when the table's actual utilization deviates from your target utilization for a sustained period of time.

When one of the CloudWatch alarms is triggered, Amazon SNS sends you a notification (if you have enabled it). The CloudWatch alarm then invokes Application Auto Scaling, which in turn notifies DynamoDB to adjust the `ProductCatalog` table's provisioned capacity upward or downward as appropriate.

AWS Config charges per configuration item recorded. Each scaling event involves four CloudWatch alarms for read capacity and four for write capacity: the provisioned-capacity alarms (`ProvisionedCapacityLow` and `ProvisionedCapacityHigh`) and the consumed-capacity alarms (`AlarmHigh` and `AlarmLow`). With eight alarms in total, AWS Config records eight configuration items for every scaling event.
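
A hedged illustration of what that recording can add up to (the per-item price and event rate here are assumptions for the example; check current AWS Config pricing for your Region):

```python
ITEMS_PER_SCALING_EVENT = 8      # eight alarms, one configuration item each
events_per_day = 20              # hypothetical workload
price_per_item = 0.003           # assumed USD price per configuration item

monthly_items = ITEMS_PER_SCALING_EVENT * events_per_day * 30
print(monthly_items)                              # 4800
print(f"${monthly_items * price_per_item:.2f}")   # $14.40
```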

**Note**  
You can also schedule your DynamoDB scaling so that it happens at certain times. For the basic steps, see the [Application Auto Scaling User Guide](https://docs.aws.amazon.com/autoscaling/application/userguide/get-started-exercise.html).

## Usage notes
<a name="AutoScaling.UsageNotes"></a>

Before you begin using DynamoDB auto scaling, you should be aware of the following:
+ DynamoDB auto scaling can increase read capacity or write capacity as often as necessary, in accordance with your auto scaling policy. All DynamoDB quotas remain in effect, as described in [Quotas in Amazon DynamoDB](ServiceQuotas.md).
+ DynamoDB auto scaling doesn't prevent you from manually modifying provisioned throughput settings. These manual adjustments don't affect any existing CloudWatch alarms that are related to DynamoDB auto scaling.
+ If you enable DynamoDB auto scaling for a table that has one or more global secondary indexes, we highly recommend that you also apply auto scaling uniformly to those indexes. This will help ensure better performance for table writes and reads, and help avoid throttling. You can enable auto scaling by selecting **Apply same settings to global secondary indexes** in the AWS Management Console. For more information, see [Enabling DynamoDB auto scaling on existing tables](AutoScaling.Console.md#AutoScaling.Console.ExistingTable).
+ When you delete a table or global table replica, any associated scalable targets, scaling policies, or CloudWatch alarms are not automatically deleted with it.
+ When you create a GSI for an existing table, auto scaling is not enabled for the GSI. You must manage the capacity manually while the GSI is being built. After the backfill completes and the GSI reaches active status, auto scaling operates as normal.

# Using the AWS Management Console with DynamoDB auto scaling
<a name="AutoScaling.Console"></a>

When you use the AWS Management Console to create a new table, Amazon DynamoDB auto scaling is enabled for that table by default. You can also use the console to enable auto scaling for existing tables, modify auto scaling settings, or disable auto scaling.

**Note**  
 For more advanced features like setting scale-in and scale-out cooldown times, use the AWS Command Line Interface (AWS CLI) to manage DynamoDB auto scaling. For more information, see [Using the AWS CLI to manage DynamoDB auto scaling](AutoScaling.CLI.md).

**Topics**
+ [

## Before you begin: Granting user permissions for DynamoDB auto scaling
](#AutoScaling.Permissions)
+ [

## Creating a new table with auto scaling enabled
](#AutoScaling.Console.NewTable)
+ [

## Enabling DynamoDB auto scaling on existing tables
](#AutoScaling.Console.ExistingTable)
+ [

## Viewing auto scaling activities on the console
](#AutoScaling.Console.ViewingActivities)
+ [

## Modifying or disabling DynamoDB auto scaling settings
](#AutoScaling.Console.Modifying)

## Before you begin: Granting user permissions for DynamoDB auto scaling
<a name="AutoScaling.Permissions"></a>

In AWS Identity and Access Management (IAM), the AWS managed policy `AmazonDynamoDBFullAccess` provides the required permissions for using the DynamoDB console. However, for DynamoDB auto scaling, users require additional permissions.

**Important**  
 To delete an auto scaling-enabled table, `application-autoscaling:*` permissions are required. The AWS managed policy `AmazonDynamoDBFullAccess` includes these permissions.

To set up a user for DynamoDB console access and DynamoDB auto scaling, create a role and add the **AmazonDynamoDBFullAccess** policy to that role. Then assign the role to a user.

## Creating a new table with auto scaling enabled
<a name="AutoScaling.Console.NewTable"></a>

**Note**  
DynamoDB auto scaling requires the presence of a service-linked role (`AWSServiceRoleForApplicationAutoScaling_DynamoDBTable`) that performs auto scaling actions on your behalf. This role is created automatically for you. For more information, see [Service-linked roles for Application Auto Scaling](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-service-linked-roles.html) in the *Application Auto Scaling User Guide*.

**To create a new table with auto scaling enabled**

1. Open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/](https://console.aws.amazon.com/dynamodb/).

1. Choose **Create table**.

1. On the **Create table** page, enter the **Table name** and primary key details.

1. If you choose **Default settings**, auto scaling is enabled for the new table.

   Otherwise, choose **Customize settings** and do the following to specify custom settings for the table: 

   1. For **Table class**, keep the default selection of **DynamoDB Standard**.

   1. For **Read/write capacity settings**, keep the default selection of **Provisioned**, then do the following:

      1. For **Read capacity**, make sure **Auto scaling** is set to **On**.

      1. For **Write capacity**, make sure **Auto scaling** is set to **On**.

      1. For **Read capacity** and **Write capacity**, set your desired scaling policy for the table and, optionally, all global secondary indexes of the table.
         + **Minimum capacity units** – Enter your lower boundary for the auto scaling range.
         + **Maximum capacity units** – Enter your upper boundary for the auto scaling range.
         + **Target utilization** – Enter your target utilization percentage for the table.
**Note**  
If you create a global secondary index for the new table, the index's capacity at time of creation will be the same as your base table's capacity. You can change the index's capacity in the table's settings after you create the table.

1. Choose **Create table**. This creates your table with the auto scaling parameters you specified.

## Enabling DynamoDB auto scaling on existing tables
<a name="AutoScaling.Console.ExistingTable"></a>

**Note**  
DynamoDB auto scaling requires the presence of a service-linked role (`AWSServiceRoleForApplicationAutoScaling_DynamoDBTable`) that performs auto scaling actions on your behalf. This role is created automatically for you. For more information, see [Service-linked roles for Application Auto Scaling](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-service-linked-roles.html).

**To enable DynamoDB auto scaling for an existing table**

1. Open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/](https://console.aws.amazon.com/dynamodb/).

1. In the navigation pane on the left side of the console, choose **Tables**.

1. Choose the table on which you want to enable auto scaling, and then do the following:

   1. Choose the **Additional settings** tab.

   1. In the **Read/write capacity** section, choose **Edit**.

   1. In the **Capacity mode** section, choose **Provisioned**.

   1. In the **Table capacity** section, set **Auto scaling** to **On** for **Read capacity**, **Write capacity**, or both. For each of these, set your desired scaling policy for the table and, optionally, all global secondary indexes of the table.
      + **Minimum capacity units** – Enter your lower boundary for the auto scaling range.
      + **Maximum capacity units** – Enter your upper boundary for the auto scaling range.
      + **Target utilization** – Enter your target utilization percentage for the table.
      + **Use the same read/write capacity settings for all global secondary indexes** – Choose whether global secondary indexes should use the same auto scaling policy as the base table.
**Note**  
For best performance, we recommend that you enable **Use the same read/write capacity settings for all global secondary indexes**. This option allows DynamoDB auto scaling to uniformly scale all the global secondary indexes on the base table. This includes existing global secondary indexes, and any others that you create for this table in the future.  
With this option enabled, you can't set a scaling policy on an individual global secondary index.

1. When the settings are as you want them, choose **Save**.

## Viewing auto scaling activities on the console
<a name="AutoScaling.Console.ViewingActivities"></a>

As your application drives read and write traffic to your table, DynamoDB auto scaling dynamically modifies the table's throughput settings. Amazon CloudWatch keeps track of provisioned and consumed capacity, throttled events, latency, and other metrics for all of your DynamoDB tables and secondary indexes.

To view these metrics in the DynamoDB console, choose the table that you want to work with and choose the **Monitor** tab. To create a customizable view of table metrics, select **View all in CloudWatch**.

## Modifying or disabling DynamoDB auto scaling settings
<a name="AutoScaling.Console.Modifying"></a>

You can use the AWS Management Console to modify your DynamoDB auto scaling settings. To do this, go to the **Additional settings** tab for your table, and choose **Edit** in the **Read/write capacity** section. For more information about these settings, see [Enabling DynamoDB auto scaling on existing tables](#AutoScaling.Console.ExistingTable).

# Using the AWS CLI to manage DynamoDB auto scaling
<a name="AutoScaling.CLI"></a>

Instead of using the AWS Management Console, you can use the AWS Command Line Interface (AWS CLI) to manage Amazon DynamoDB auto scaling. The tutorial in this section demonstrates how to install and configure the AWS CLI for managing DynamoDB auto scaling. In this tutorial, you do the following:
+ Create a DynamoDB table named `TestTable`. The initial throughput settings are 5 read capacity units and 5 write capacity units.
+ Create an Application Auto Scaling policy for `TestTable`. The policy seeks to maintain a 50 percent target ratio between consumed write capacity and provisioned write capacity. Provisioned write capacity can range from 5 to 10 write capacity units. (Application Auto Scaling is not allowed to adjust the throughput beyond this range.)
+ Run a Python program to drive write traffic to `TestTable`. When the target ratio exceeds 50 percent for a sustained period of time, Application Auto Scaling notifies DynamoDB to adjust the throughput of `TestTable` upward to maintain the 50 percent target utilization.
+ Verify that DynamoDB has successfully adjusted the provisioned write capacity for `TestTable`.

**Note**  
You can also schedule your DynamoDB scaling so that it happens at certain times. For the basic steps, see the [Application Auto Scaling User Guide](https://docs.aws.amazon.com/autoscaling/application/userguide/get-started-exercise.html).

**Topics**
+ [

## Before you begin
](#AutoScaling.CLI.BeforeYouBegin)
+ [

## Step 1: Create a DynamoDB table
](#AutoScaling.CLI.CreateTable)
+ [

## Step 2: Register a scalable target
](#AutoScaling.CLI.RegisterScalableTarget)
+ [

## Step 3: Create a scaling policy
](#AutoScaling.CLI.CreateScalingPolicy)
+ [

## Step 4: Drive write traffic to TestTable
](#AutoScaling.CLI.DriveTraffic)
+ [

## Step 5: View Application Auto Scaling actions
](#AutoScaling.CLI.ViewCWAlarms)
+ [

## (Optional) Step 6: Clean up
](#AutoScaling.CLI.CleanUp)

## Before you begin
<a name="AutoScaling.CLI.BeforeYouBegin"></a>

Complete the following tasks before starting the tutorial.

### Install the AWS CLI
<a name="AutoScaling.CLI.BeforeYouBegin.InstallCLI"></a>

If you haven't already done so, you must install and configure the AWS CLI. To do this, follow these instructions in the *AWS Command Line Interface User Guide*:
+ [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/installing.html)
+ [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html)

### Install Python
<a name="AutoScaling.CLI.BeforeYouBegin.InstallPython"></a>

Part of this tutorial requires you to run a Python program (see [Step 4: Drive write traffic to TestTable](#AutoScaling.CLI.DriveTraffic)). If you don't already have it installed, you can [download Python](https://www.python.org/downloads). 

## Step 1: Create a DynamoDB table
<a name="AutoScaling.CLI.CreateTable"></a>

In this step, you use the AWS CLI to create `TestTable`. The primary key consists of `pk` (partition key) and `sk` (sort key). Both of these attributes are of type `Number`. The initial throughput settings are 5 read capacity units and 5 write capacity units.

1. Use the following AWS CLI command to create the table.

   ```
   aws dynamodb create-table \
       --table-name TestTable \
       --attribute-definitions \
           AttributeName=pk,AttributeType=N \
           AttributeName=sk,AttributeType=N \
       --key-schema \
           AttributeName=pk,KeyType=HASH \
           AttributeName=sk,KeyType=RANGE \
       --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
   ```

1. To check the status of the table, use the following command.

   ```
   aws dynamodb describe-table \
       --table-name TestTable \
       --query "Table.[TableName,TableStatus,ProvisionedThroughput]"
   ```

   The table is ready for use when its status is `ACTIVE`.

## Step 2: Register a scalable target
<a name="AutoScaling.CLI.RegisterScalableTarget"></a>

Next you register the table's write capacity as a scalable target with Application Auto Scaling. This allows Application Auto Scaling to adjust the provisioned write capacity for `TestTable`, but only within the range of 5–10 capacity units.

**Note**  
DynamoDB auto scaling requires the presence of a service-linked role (`AWSServiceRoleForApplicationAutoScaling_DynamoDBTable`) that performs auto scaling actions on your behalf. This role is created automatically for you. For more information, see [Service-linked roles for Application Auto Scaling](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-service-linked-roles.html) in the *Application Auto Scaling User Guide*.

1. Enter the following command to register the scalable target. 

   ```
   aws application-autoscaling register-scalable-target \
       --service-namespace dynamodb \
       --resource-id "table/TestTable" \
       --scalable-dimension "dynamodb:table:WriteCapacityUnits" \
       --min-capacity 5 \
       --max-capacity 10
   ```

1. To verify the registration, use the following command.

   ```
   aws application-autoscaling describe-scalable-targets \
       --service-namespace dynamodb \
       --resource-id "table/TestTable"
   ```
**Note**  
 You can also register a scalable target against a global secondary index. For example, for a global secondary index named `test-index`, use the index's resource ID (`table/TestTable/index/test-index`) and the index scalable dimension:

   ```
   aws application-autoscaling register-scalable-target \
       --service-namespace dynamodb \
       --resource-id "table/TestTable/index/test-index" \
       --scalable-dimension "dynamodb:index:WriteCapacityUnits" \
       --min-capacity 5 \
       --max-capacity 10
   ```

## Step 3: Create a scaling policy
<a name="AutoScaling.CLI.CreateScalingPolicy"></a>

In this step, you create a scaling policy for `TestTable`. The policy defines the conditions under which Application Auto Scaling can adjust your table's provisioned throughput, and what actions it takes when doing so. You associate this policy with the scalable target that you defined in the previous step (write capacity units for the `TestTable` table).

The policy contains the following elements:
+ `PredefinedMetricSpecification`—The metric that Application Auto Scaling tracks on your behalf. For DynamoDB, the following are valid values for `PredefinedMetricType`:
  + `DynamoDBReadCapacityUtilization`
  + `DynamoDBWriteCapacityUtilization`
+ `ScaleOutCooldown`—The minimum amount of time (in seconds) between each Application Auto Scaling event that increases provisioned throughput. This parameter allows Application Auto Scaling to continuously, but not aggressively, increase the throughput in response to real-world workloads. The default setting for `ScaleOutCooldown` is 0.
+ `ScaleInCooldown`—The minimum amount of time (in seconds) between each Application Auto Scaling event that decreases provisioned throughput. This parameter allows Application Auto Scaling to decrease the throughput gradually and predictably. The default setting for `ScaleInCooldown` is 0.
+ `TargetValue`—Application Auto Scaling ensures that the ratio of consumed capacity to provisioned capacity stays at or near this value. You define `TargetValue` as a percentage.

**Note**  
To further understand how `TargetValue` works, suppose that you have a table with a provisioned throughput setting of 200 write capacity units. You decide to create a scaling policy for this table, with a `TargetValue` of 70 percent.  
Now suppose that you begin driving write traffic to the table so that the actual write throughput is 150 capacity units. The consumed-to-provisioned ratio is now (150 / 200), or 75 percent. This ratio exceeds your target, so Application Auto Scaling increases the provisioned write capacity to 215 so that the ratio is (150 / 215), or 69.77 percent—as close to your `TargetValue` as possible, but not exceeding it.
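
The arithmetic in this note can be checked directly. A quick sketch, assuming target tracking picks the smallest capacity that brings utilization back under the target:

```python
import math

consumed = 150      # actual write throughput (WCU)
provisioned = 200
target = 0.70

print(f"{consumed / provisioned:.2%}")      # 75.00% -- above the 70% target

# Smallest provisioned capacity with utilization at or below the target.
new_provisioned = math.ceil(consumed / target)
print(new_provisioned)                      # 215
print(f"{consumed / new_provisioned:.2%}")  # 69.77% -- just under the target
```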

For `TestTable`, you set `TargetValue` to 50 percent. Application Auto Scaling adjusts the table's provisioned throughput within the range of 5–10 capacity units (see [Step 2: Register a scalable target](#AutoScaling.CLI.RegisterScalableTarget)) so that the consumed-to-provisioned ratio remains at or near 50 percent. You set the values for `ScaleOutCooldown` and `ScaleInCooldown` to 60 seconds.

1. Create a file named `scaling-policy.json` with the following contents.

   ```
   {
       "PredefinedMetricSpecification": {
           "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
       },
       "ScaleOutCooldown": 60,
       "ScaleInCooldown": 60,
       "TargetValue": 50.0
   }
   ```

1. Use the following AWS CLI command to create the policy.

   ```
   aws application-autoscaling put-scaling-policy \
       --service-namespace dynamodb \
       --resource-id "table/TestTable" \
       --scalable-dimension "dynamodb:table:WriteCapacityUnits" \
       --policy-name "MyScalingPolicy" \
       --policy-type "TargetTrackingScaling" \
       --target-tracking-scaling-policy-configuration file://scaling-policy.json
   ```

1. In the output, note that Application Auto Scaling has created two Amazon CloudWatch alarms—one each for the upper and lower boundary of the scaling target range.

1. Use the following AWS CLI command to view more details about the scaling policy.

   ```
   aws application-autoscaling describe-scaling-policies \
       --service-namespace dynamodb \
       --resource-id "table/TestTable" \
       --policy-name "MyScalingPolicy"
   ```

1. In the output, verify that the policy settings match your specifications from [Step 2: Register a scalable target](#AutoScaling.CLI.RegisterScalableTarget) and [Step 3: Create a scaling policy](#AutoScaling.CLI.CreateScalingPolicy).

## Step 4: Drive write traffic to TestTable
<a name="AutoScaling.CLI.DriveTraffic"></a>

Now you can test your scaling policy by writing data to `TestTable`. To do this, you run a Python program.

1. Create a file named `bulk-load-test-table.py` with the following contents.

   ```
   import boto3
   dynamodb = boto3.resource('dynamodb')
   
   table = dynamodb.Table("TestTable")
   
   filler = "x" * 100000
   
   i = 0
   while (i < 10):
       j = 0
       while (j < 10):
           print (i, j)
           
           table.put_item(
               Item={
                   'pk':i,
                   'sk':j,
                'filler': filler  # the resource-level API takes native Python types, not type descriptors
               }
           )
           j += 1
       i += 1
   ```

1. Enter the following command to run the program.

   `python bulk-load-test-table.py`

   The provisioned write capacity for `TestTable` is very low (5 write capacity units), so the program stalls occasionally due to write throttling. This is expected behavior.

   Let the program continue running while you move on to the next step.
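
A rough sketch of why the throttling happens: standard writes consume one write capacity unit per 1 KB of item data, rounded up, so each ~100 KB `filler` item needs far more than the table's 5 provisioned WCU (attribute-name overhead is ignored here):

```python
import math

def wcu_for_item(item_bytes):
    # Standard (non-transactional) writes: 1 WCU per 1 KB, rounded up.
    return math.ceil(item_bytes / 1024)

filler_bytes = 100_000               # len("x" * 100000)
print(wcu_for_item(filler_bytes))    # 98
# At 5 provisioned WCU, one such write needs roughly 20 seconds' worth of
# capacity, so the loop stalls once burst capacity is exhausted.
```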

## Step 5: View Application Auto Scaling actions
<a name="AutoScaling.CLI.ViewCWAlarms"></a>

 In this step, you view the Application Auto Scaling actions that are initiated on your behalf. You also verify that Application Auto Scaling has updated the provisioned write capacity for `TestTable`.

1. Enter the following command to view the Application Auto Scaling actions.

   ```
   aws application-autoscaling describe-scaling-activities \
       --service-namespace dynamodb
   ```

   Rerun this command occasionally, while the Python program is running. (It takes several minutes before your scaling policy is invoked.) You should eventually see the following output.

   ```
   ...
   {
       "ScalableDimension": "dynamodb:table:WriteCapacityUnits", 
       "Description": "Setting write capacity units to 10.", 
       "ResourceId": "table/TestTable", 
       "ActivityId": "0cc6fb03-2a7c-4b51-b67f-217224c6b656", 
       "StartTime": 1489088210.175, 
       "ServiceNamespace": "dynamodb", 
       "EndTime": 1489088246.85, 
       "Cause": "monitor alarm AutoScaling-table/TestTable-AlarmHigh-1bb3c8db-1b97-4353-baf1-4def76f4e1b9 in state ALARM triggered policy MyScalingPolicy", 
       "StatusMessage": "Successfully set write capacity units to 10. Change successfully fulfilled by dynamodb.", 
       "StatusCode": "Successful"
   }, 
   ...
   ```

   This indicates that Application Auto Scaling has issued an `UpdateTable` request to DynamoDB.

1. Enter the following command to verify that DynamoDB increased the table's write capacity.

   ```
   aws dynamodb describe-table \
       --table-name TestTable \
       --query "Table.[TableName,TableStatus,ProvisionedThroughput]"
   ```

   The `WriteCapacityUnits` should have been scaled from `5` to `10`.

## (Optional) Step 6: Clean up
<a name="AutoScaling.CLI.CleanUp"></a>

In this tutorial, you created several resources. You can delete these resources if you no longer need them.

1. Delete the scaling policy for `TestTable`.

   ```
   aws application-autoscaling delete-scaling-policy \
       --service-namespace dynamodb \
       --resource-id "table/TestTable" \
       --scalable-dimension "dynamodb:table:WriteCapacityUnits" \
       --policy-name "MyScalingPolicy"
   ```

1. Deregister the scalable target.

   ```
   aws application-autoscaling deregister-scalable-target \
       --service-namespace dynamodb \
       --resource-id "table/TestTable" \
       --scalable-dimension "dynamodb:table:WriteCapacityUnits"
   ```

1. Delete the `TestTable` table.

   ```
   aws dynamodb delete-table --table-name TestTable
   ```

# Using the AWS SDK to configure auto scaling on Amazon DynamoDB tables
<a name="AutoScaling.HowTo.SDK"></a>

In addition to using the AWS Management Console and the AWS Command Line Interface (AWS CLI), you can write applications that interact with Amazon DynamoDB auto scaling. This section contains two Java programs that you can use to test this functionality:
+ `EnableDynamoDBAutoscaling.java`
+ `DisableDynamoDBAutoscaling.java`

## Enabling Application Auto Scaling for a table
<a name="AutoScaling.HowTo.SDK-enable"></a>

The following program shows an example of setting up an auto scaling policy for a DynamoDB table (`TestTable`). It proceeds as follows:
+ The program registers write capacity units as a scalable target for `TestTable`. The scaling range is from 5 to 10 write capacity units.
+ After the scalable target is created, the program builds a target tracking configuration. The policy seeks to maintain a 50 percent target ratio between consumed write capacity and provisioned write capacity.
+ The program then creates the scaling policy, based on the target tracking configuration.

**Note**  
When you delete a table or global table replica, any associated scalable targets, scaling policies, or CloudWatch alarms are not automatically deleted.

------
#### [ Java v2 ]

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.applicationautoscaling.ApplicationAutoScalingClient;
import software.amazon.awssdk.services.applicationautoscaling.model.ApplicationAutoScalingException;
import software.amazon.awssdk.services.applicationautoscaling.model.DescribeScalableTargetsRequest;
import software.amazon.awssdk.services.applicationautoscaling.model.DescribeScalableTargetsResponse;
import software.amazon.awssdk.services.applicationautoscaling.model.DescribeScalingPoliciesRequest;
import software.amazon.awssdk.services.applicationautoscaling.model.DescribeScalingPoliciesResponse;
import software.amazon.awssdk.services.applicationautoscaling.model.PolicyType;
import software.amazon.awssdk.services.applicationautoscaling.model.PredefinedMetricSpecification;
import software.amazon.awssdk.services.applicationautoscaling.model.PutScalingPolicyRequest;
import software.amazon.awssdk.services.applicationautoscaling.model.RegisterScalableTargetRequest;
import software.amazon.awssdk.services.applicationautoscaling.model.ScalingPolicy;
import software.amazon.awssdk.services.applicationautoscaling.model.ServiceNamespace;
import software.amazon.awssdk.services.applicationautoscaling.model.ScalableDimension;
import software.amazon.awssdk.services.applicationautoscaling.model.MetricType;
import software.amazon.awssdk.services.applicationautoscaling.model.TargetTrackingScalingPolicyConfiguration;
import java.util.List;

/**
 * Before running this Java V2 code example, set up your development environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class EnableDynamoDBAutoscaling {
    public static void main(String[] args) {
        final String usage = """

            Usage:
               <tableId> <roleARN> <policyName>\s

            Where:
               tableId - The table Id value (for example, table/Music).
               roleARN - The ARN of the role that has ApplicationAutoScaling permissions.
               policyName - The name of the policy to create.
               
            """;

        if (args.length != 3) {
            System.out.println(usage);
            System.exit(1);
        }

        System.out.println("This example registers an Amazon DynamoDB table, which is the resource to scale.");
        String tableId = args[0];
        String roleARN = args[1];
        String policyName = args[2];
        ServiceNamespace ns = ServiceNamespace.DYNAMODB;
        ScalableDimension tableWCUs = ScalableDimension.DYNAMODB_TABLE_WRITE_CAPACITY_UNITS;
        ApplicationAutoScalingClient appAutoScalingClient = ApplicationAutoScalingClient.builder()
            .region(Region.US_EAST_1)
            .build();

        registerScalableTarget(appAutoScalingClient, tableId, roleARN, ns, tableWCUs);
        verifyTarget(appAutoScalingClient, tableId, ns, tableWCUs);
        configureScalingPolicy(appAutoScalingClient, tableId, ns, tableWCUs, policyName);
    }

    public static void registerScalableTarget(ApplicationAutoScalingClient appAutoScalingClient, String tableId, String roleARN, ServiceNamespace ns, ScalableDimension tableWCUs) {
        try {
            RegisterScalableTargetRequest targetRequest = RegisterScalableTargetRequest.builder()
                .serviceNamespace(ns)
                .scalableDimension(tableWCUs)
                .resourceId(tableId)
                .roleARN(roleARN)
                .minCapacity(5)
                .maxCapacity(10)
                .build();

            appAutoScalingClient.registerScalableTarget(targetRequest);
            System.out.println("You have registered " + tableId);

        } catch (ApplicationAutoScalingException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
        }
    }

    // Verify that the target was created.
    public static void verifyTarget(ApplicationAutoScalingClient appAutoScalingClient, String tableId, ServiceNamespace ns, ScalableDimension tableWCUs) {
        DescribeScalableTargetsRequest dscRequest = DescribeScalableTargetsRequest.builder()
            .scalableDimension(tableWCUs)
            .serviceNamespace(ns)
            .resourceIds(tableId)
            .build();

        DescribeScalableTargetsResponse response = appAutoScalingClient.describeScalableTargets(dscRequest);
        System.out.println("DescribeScalableTargets result: ");
        System.out.println(response);
    }

    // Configure a scaling policy.
    public static void configureScalingPolicy(ApplicationAutoScalingClient appAutoScalingClient, String tableId, ServiceNamespace ns, ScalableDimension tableWCUs, String policyName) {
        // Check if the policy exists before creating a new one.
        DescribeScalingPoliciesResponse describeScalingPoliciesResponse = appAutoScalingClient.describeScalingPolicies(DescribeScalingPoliciesRequest.builder()
            .serviceNamespace(ns)
            .resourceId(tableId)
            .scalableDimension(tableWCUs)
            .build());

        if (!describeScalingPoliciesResponse.scalingPolicies().isEmpty()) {
            // If policies exist, consider updating an existing policy instead of creating a new one.
            System.out.println("Policy already exists. Consider updating it instead.");
            List<ScalingPolicy> polList = describeScalingPoliciesResponse.scalingPolicies();
            for (ScalingPolicy pol : polList) {
                System.out.println("Policy name: " + pol.policyName());
            }
        } else {
            // If no policies exist, proceed with creating a new policy.
            PredefinedMetricSpecification specification = PredefinedMetricSpecification.builder()
                .predefinedMetricType(MetricType.DYNAMO_DB_WRITE_CAPACITY_UTILIZATION)
                .build();

            TargetTrackingScalingPolicyConfiguration policyConfiguration = TargetTrackingScalingPolicyConfiguration.builder()
                .predefinedMetricSpecification(specification)
                .targetValue(50.0)
                .scaleInCooldown(60)
                .scaleOutCooldown(60)
                .build();

            PutScalingPolicyRequest putScalingPolicyRequest = PutScalingPolicyRequest.builder()
                .targetTrackingScalingPolicyConfiguration(policyConfiguration)
                .serviceNamespace(ns)
                .scalableDimension(tableWCUs)
                .resourceId(tableId)
                .policyName(policyName)
                .policyType(PolicyType.TARGET_TRACKING_SCALING)
                .build();

            try {
                appAutoScalingClient.putScalingPolicy(putScalingPolicyRequest);
                System.out.println("You have successfully created a scaling policy for an Application Auto Scaling scalable target");
            } catch (ApplicationAutoScalingException e) {
                System.err.println("Error: " + e.awsErrorDetails().errorMessage());
            }
        }
    }
}
```

------
#### [ Java v1 ]

The program requires that you provide an Amazon Resource Name (ARN) for a valid Application Auto Scaling service-linked role. (For example: `arn:aws:iam::122517410325:role/AWSServiceRoleForApplicationAutoScaling_DynamoDBTable`.) In the following program, replace `SERVICE_ROLE_ARN_GOES_HERE` with the actual ARN. 

```
package com.amazonaws.codesamples.autoscaling;

import com.amazonaws.services.applicationautoscaling.AWSApplicationAutoScalingClient;
import com.amazonaws.services.applicationautoscaling.AWSApplicationAutoScalingClientBuilder;
import com.amazonaws.services.applicationautoscaling.model.DescribeScalableTargetsRequest;
import com.amazonaws.services.applicationautoscaling.model.DescribeScalableTargetsResult;
import com.amazonaws.services.applicationautoscaling.model.DescribeScalingPoliciesRequest;
import com.amazonaws.services.applicationautoscaling.model.DescribeScalingPoliciesResult;
import com.amazonaws.services.applicationautoscaling.model.MetricType;
import com.amazonaws.services.applicationautoscaling.model.PolicyType;
import com.amazonaws.services.applicationautoscaling.model.PredefinedMetricSpecification;
import com.amazonaws.services.applicationautoscaling.model.PutScalingPolicyRequest;
import com.amazonaws.services.applicationautoscaling.model.RegisterScalableTargetRequest;
import com.amazonaws.services.applicationautoscaling.model.ScalableDimension;
import com.amazonaws.services.applicationautoscaling.model.ServiceNamespace;
import com.amazonaws.services.applicationautoscaling.model.TargetTrackingScalingPolicyConfiguration;

public class EnableDynamoDBAutoscaling {

	static AWSApplicationAutoScalingClient aaClient = (AWSApplicationAutoScalingClient) AWSApplicationAutoScalingClientBuilder
			.standard().build();

	public static void main(String args[]) {

		ServiceNamespace ns = ServiceNamespace.Dynamodb;
		ScalableDimension tableWCUs = ScalableDimension.DynamodbTableWriteCapacityUnits;
		String resourceID = "table/TestTable";

		// Define the scalable target
		RegisterScalableTargetRequest rstRequest = new RegisterScalableTargetRequest()
				.withServiceNamespace(ns)
				.withResourceId(resourceID)
				.withScalableDimension(tableWCUs)
				.withMinCapacity(5)
				.withMaxCapacity(10)
				.withRoleARN("SERVICE_ROLE_ARN_GOES_HERE");

		try {
			aaClient.registerScalableTarget(rstRequest);
		} catch (Exception e) {
			System.err.println("Unable to register scalable target: ");
			System.err.println(e.getMessage());
		}

		// Verify that the target was created
		DescribeScalableTargetsRequest dscRequest = new DescribeScalableTargetsRequest()
				.withServiceNamespace(ns)
				.withScalableDimension(tableWCUs)
				.withResourceIds(resourceID);
		try {
			DescribeScalableTargetsResult dsaResult = aaClient.describeScalableTargets(dscRequest);
			System.out.println("DescribeScalableTargets result: ");
			System.out.println(dsaResult);
			System.out.println();
		} catch (Exception e) {
			System.err.println("Unable to describe scalable target: ");
			System.err.println(e.getMessage());
		}

		System.out.println();

		// Configure a scaling policy
		TargetTrackingScalingPolicyConfiguration targetTrackingScalingPolicyConfiguration = new TargetTrackingScalingPolicyConfiguration()
				.withPredefinedMetricSpecification(
						new PredefinedMetricSpecification()
								.withPredefinedMetricType(MetricType.DynamoDBWriteCapacityUtilization))
				.withTargetValue(50.0)
				.withScaleInCooldown(60)
				.withScaleOutCooldown(60);

		// Create the scaling policy, based on your configuration
		PutScalingPolicyRequest pspRequest = new PutScalingPolicyRequest()
				.withServiceNamespace(ns)
				.withScalableDimension(tableWCUs)
				.withResourceId(resourceID)
				.withPolicyName("MyScalingPolicy")
				.withPolicyType(PolicyType.TargetTrackingScaling)
				.withTargetTrackingScalingPolicyConfiguration(targetTrackingScalingPolicyConfiguration);

		try {
			aaClient.putScalingPolicy(pspRequest);
		} catch (Exception e) {
			System.err.println("Unable to put scaling policy: ");
			System.err.println(e.getMessage());
		}

		// Verify that the scaling policy was created
		DescribeScalingPoliciesRequest dspRequest = new DescribeScalingPoliciesRequest()
				.withServiceNamespace(ns)
				.withScalableDimension(tableWCUs)
				.withResourceId(resourceID);

		try {
			DescribeScalingPoliciesResult dspResult = aaClient.describeScalingPolicies(dspRequest);
			System.out.println("DescribeScalingPolicies result: ");
			System.out.println(dspResult);
		} catch (Exception e) {
			e.printStackTrace();
			System.err.println("Unable to describe scaling policy: ");
			System.err.println(e.getMessage());
		}

	}

}
```

------

## Disabling Application Auto Scaling for a table
<a name="AutoScaling.HowTo.SDK-disable"></a>

The following program reverses the previous process. It removes the auto scaling policy and then deregisters the scalable target.

------
#### [ Java v2 ]

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.applicationautoscaling.ApplicationAutoScalingClient;
import software.amazon.awssdk.services.applicationautoscaling.model.ApplicationAutoScalingException;
import software.amazon.awssdk.services.applicationautoscaling.model.DeleteScalingPolicyRequest;
import software.amazon.awssdk.services.applicationautoscaling.model.DeregisterScalableTargetRequest;
import software.amazon.awssdk.services.applicationautoscaling.model.DescribeScalableTargetsRequest;
import software.amazon.awssdk.services.applicationautoscaling.model.DescribeScalableTargetsResponse;
import software.amazon.awssdk.services.applicationautoscaling.model.DescribeScalingPoliciesRequest;
import software.amazon.awssdk.services.applicationautoscaling.model.DescribeScalingPoliciesResponse;
import software.amazon.awssdk.services.applicationautoscaling.model.ScalableDimension;
import software.amazon.awssdk.services.applicationautoscaling.model.ServiceNamespace;

/**
 * Before running this Java V2 code example, set up your development environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */

public class DisableDynamoDBAutoscaling {
    public static void main(String[] args) {
        final String usage = """

            Usage:
               <tableId> <policyName>\s

            Where:
               tableId - The table Id value (for example, table/Music).\s
               policyName - The name of the policy (for example, $Music5-scaling-policy). 
        
            """;
        if (args.length != 2) {
            System.out.println(usage);
            System.exit(1);
        }

        ApplicationAutoScalingClient appAutoScalingClient = ApplicationAutoScalingClient.builder()
            .region(Region.US_EAST_1)
            .build();

        ServiceNamespace ns = ServiceNamespace.DYNAMODB;
        ScalableDimension tableWCUs = ScalableDimension.DYNAMODB_TABLE_WRITE_CAPACITY_UNITS;
        String tableId = args[0];
        String policyName = args[1];

        deletePolicy(appAutoScalingClient, policyName, tableWCUs, ns, tableId);
        verifyScalingPolicies(appAutoScalingClient, tableId, ns, tableWCUs);
        deregisterScalableTarget(appAutoScalingClient, tableId, ns, tableWCUs);
        verifyTarget(appAutoScalingClient, tableId, ns, tableWCUs);
    }

    public static void deletePolicy(ApplicationAutoScalingClient appAutoScalingClient, String policyName, ScalableDimension tableWCUs, ServiceNamespace ns, String tableId) {
        try {
            DeleteScalingPolicyRequest delSPRequest = DeleteScalingPolicyRequest.builder()
                .policyName(policyName)
                .scalableDimension(tableWCUs)
                .serviceNamespace(ns)
                .resourceId(tableId)
                .build();

            appAutoScalingClient.deleteScalingPolicy(delSPRequest);
            System.out.println(policyName + " was deleted successfully.");

        } catch (ApplicationAutoScalingException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
        }
    }

    // Verify that the scaling policy was deleted
    public static void verifyScalingPolicies(ApplicationAutoScalingClient appAutoScalingClient, String tableId, ServiceNamespace ns, ScalableDimension tableWCUs) {
        DescribeScalingPoliciesRequest dscRequest = DescribeScalingPoliciesRequest.builder()
            .scalableDimension(tableWCUs)
            .serviceNamespace(ns)
            .resourceId(tableId)
            .build();

        DescribeScalingPoliciesResponse response = appAutoScalingClient.describeScalingPolicies(dscRequest);
        System.out.println("DescribeScalingPolicies result: ");
        System.out.println(response);
    }

    public static void deregisterScalableTarget(ApplicationAutoScalingClient appAutoScalingClient, String tableId, ServiceNamespace ns, ScalableDimension tableWCUs) {
        try {
            DeregisterScalableTargetRequest targetRequest = DeregisterScalableTargetRequest.builder()
                .scalableDimension(tableWCUs)
                .serviceNamespace(ns)
                .resourceId(tableId)
                .build();

            appAutoScalingClient.deregisterScalableTarget(targetRequest);
            System.out.println("The scalable target was deregistered.");

        } catch (ApplicationAutoScalingException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
        }
    }

    public static void verifyTarget(ApplicationAutoScalingClient appAutoScalingClient, String tableId, ServiceNamespace ns, ScalableDimension tableWCUs) {
        DescribeScalableTargetsRequest dscRequest = DescribeScalableTargetsRequest.builder()
            .scalableDimension(tableWCUs)
            .serviceNamespace(ns)
            .resourceIds(tableId)
            .build();

        DescribeScalableTargetsResponse response = appAutoScalingClient.describeScalableTargets(dscRequest);
        System.out.println("DescribeScalableTargets result: ");
        System.out.println(response);
    }
}
```

------
#### [ Java v1 ]

```
package com.amazonaws.codesamples.autoscaling;

import com.amazonaws.services.applicationautoscaling.AWSApplicationAutoScalingClient;
import com.amazonaws.services.applicationautoscaling.AWSApplicationAutoScalingClientBuilder;
import com.amazonaws.services.applicationautoscaling.model.DeleteScalingPolicyRequest;
import com.amazonaws.services.applicationautoscaling.model.DeregisterScalableTargetRequest;
import com.amazonaws.services.applicationautoscaling.model.DescribeScalableTargetsRequest;
import com.amazonaws.services.applicationautoscaling.model.DescribeScalableTargetsResult;
import com.amazonaws.services.applicationautoscaling.model.DescribeScalingPoliciesRequest;
import com.amazonaws.services.applicationautoscaling.model.DescribeScalingPoliciesResult;
import com.amazonaws.services.applicationautoscaling.model.ScalableDimension;
import com.amazonaws.services.applicationautoscaling.model.ServiceNamespace;

public class DisableDynamoDBAutoscaling {

	static AWSApplicationAutoScalingClient aaClient = (AWSApplicationAutoScalingClient) AWSApplicationAutoScalingClientBuilder
			.standard().build();

	public static void main(String args[]) {

		ServiceNamespace ns = ServiceNamespace.Dynamodb;
		ScalableDimension tableWCUs = ScalableDimension.DynamodbTableWriteCapacityUnits;
		String resourceID = "table/TestTable";

		// Delete the scaling policy
		DeleteScalingPolicyRequest delSPRequest = new DeleteScalingPolicyRequest()
				.withServiceNamespace(ns)
				.withScalableDimension(tableWCUs)
				.withResourceId(resourceID)
				.withPolicyName("MyScalingPolicy");

		try {
			aaClient.deleteScalingPolicy(delSPRequest);
		} catch (Exception e) {
			System.err.println("Unable to delete scaling policy: ");
			System.err.println(e.getMessage());
		}

		// Verify that the scaling policy was deleted
		DescribeScalingPoliciesRequest descSPRequest = new DescribeScalingPoliciesRequest()
				.withServiceNamespace(ns)
				.withScalableDimension(tableWCUs)
				.withResourceId(resourceID);

		try {
			DescribeScalingPoliciesResult dspResult = aaClient.describeScalingPolicies(descSPRequest);
			System.out.println("DescribeScalingPolicies result: ");
			System.out.println(dspResult);
		} catch (Exception e) {
			e.printStackTrace();
			System.err.println("Unable to describe scaling policy: ");
			System.err.println(e.getMessage());
		}

		System.out.println();

		// Remove the scalable target
		DeregisterScalableTargetRequest delSTRequest = new DeregisterScalableTargetRequest()
				.withServiceNamespace(ns)
				.withScalableDimension(tableWCUs)
				.withResourceId(resourceID);

		try {
			aaClient.deregisterScalableTarget(delSTRequest);
		} catch (Exception e) {
			System.err.println("Unable to deregister scalable target: ");
			System.err.println(e.getMessage());
		}

		// Verify that the scalable target was removed
		DescribeScalableTargetsRequest dscRequest = new DescribeScalableTargetsRequest()
				.withServiceNamespace(ns)
				.withScalableDimension(tableWCUs)
				.withResourceIds(resourceID);

		try {
			DescribeScalableTargetsResult dsaResult = aaClient.describeScalableTargets(dscRequest);
			System.out.println("DescribeScalableTargets result: ");
			System.out.println(dsaResult);
			System.out.println();
		} catch (Exception e) {
			System.err.println("Unable to describe scalable target: ");
			System.err.println(e.getMessage());
		}

	}

}
```

------

# DynamoDB reserved capacity
<a name="reserved-capacity"></a>

For provisioned capacity tables that use the Standard [table class](HowItWorks.TableClasses.md), DynamoDB offers the ability to purchase reserved capacity for your read and write capacity. A reserved capacity purchase is an agreement to pay for a minimum amount of provisioned throughput capacity, for the duration of the term of the agreement, in exchange for discounted pricing.

**Note**  
You can't purchase reserved capacity for replicated write capacity units (rWCUs). Reserved capacity is applied only to the Region in which it was purchased. Reserved capacity is also not available for tables using the DynamoDB Standard-IA table class or on-demand capacity mode.

Reserved capacity is purchased in allocations of 100 WCUs or 100 RCUs. The smallest reserved capacity offering is 100 capacity units (reads or writes). DynamoDB reserved capacity is offered as a one-year commitment or, in select AWS Regions, a three-year commitment. You can save up to 54% off standard rates with a one-year term and up to 77% with a three-year term. For more information about how and when to purchase, see [Amazon DynamoDB Reserved Capacity](https://aws.amazon.com/dynamodb/reserved-capacity/).

**Note**  
You can purchase up to a combined 1,000,000 reserved capacity units for write capacity units (WCUs) and read capacity units (RCUs) using the AWS Management Console. If you want to purchase more than 1,000,000 provisioned capacity units in a single purchase, or if you have active reserved capacity and want to purchase additional reserved capacity that would result in more than 1,000,000 active provisioned capacity units, follow the process described in the "How to purchase reserved capacity" section of [Amazon DynamoDB Reserved Capacity](https://aws.amazon.com/dynamodb/reserved-capacity/).

When you purchase DynamoDB reserved capacity, you pay a one-time partial upfront payment and receive a discounted hourly rate for the committed provisioned usage. You pay for the entire committed provisioned usage, regardless of actual usage, so your cost savings are closely tied to use. Any capacity that you provision in excess of the purchased reserved capacity is billed at standard provisioned capacity rates. By reserving your read and write capacity units ahead of time, you realize significant cost savings on your provisioned capacity costs.
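To make the billing arithmetic concrete, the following sketch estimates a monthly bill for one capacity type under a reservation. The hourly rates here are hypothetical placeholders, not actual DynamoDB prices; real reserved and standard rates vary by Region and term (see the pricing page).

```python
def monthly_provisioned_cost(provisioned_units, reserved_units,
                             reserved_hourly_rate, standard_hourly_rate,
                             hours=730):
    """Estimate the monthly cost for one capacity type (RCUs or WCUs).

    You pay the discounted rate for every reserved unit whether or not
    you use it; any provisioned capacity above the reservation is
    billed at the standard provisioned rate.
    """
    excess_units = max(provisioned_units - reserved_units, 0)
    reserved_cost = reserved_units * reserved_hourly_rate * hours
    standard_cost = excess_units * standard_hourly_rate * hours
    return reserved_cost + standard_cost

# Hypothetical rates: 500 WCUs reserved, 800 WCUs provisioned.
# The 300 WCUs above the reservation are billed at the standard rate.
cost = monthly_provisioned_cost(provisioned_units=800, reserved_units=500,
                                reserved_hourly_rate=0.000128,
                                standard_hourly_rate=0.00065)
```

Note that when provisioned capacity drops below the reserved amount, the reserved portion is still billed in full, which is why savings depend on sustained utilization.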

You can't sell, cancel, or transfer reserved capacity to another Region or account.

# Understanding DynamoDB warm throughput
<a name="warm-throughput"></a>

*Warm throughput* refers to the number of read and write operations your DynamoDB table can instantaneously support. These values are available by default for all tables and global secondary indexes (GSIs) and represent how much they have scaled based on historical usage. If you are using on-demand mode, or if you update your provisioned throughput to these values, your application will be able to issue requests up to those values instantly.

DynamoDB will automatically adjust warm throughput values as your usage increases. You can also increase these values proactively when needed, which is especially useful for upcoming peak events like product launches or sales. For planned peak events, where request rates to your DynamoDB table might increase by 10x, 100x, or more, you can now assess whether the current warm throughput is sufficient to handle the expected traffic. If it’s not, you can increase the warm throughput value without changing your throughput settings or [billing mode](capacity-mode.md). This process is referred to as *pre-warming* a table, allowing you to set a baseline that your tables can instantly support. This ensures your applications can handle higher request rates from the moment they occur. Once increased, warm throughput values can't be decreased.
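To assess whether pre-warming is needed ahead of a planned event, you can compare the expected peak request rates against the table's current warm throughput values. The sketch below assumes a `WarmThroughput` dict in the shape returned by `DescribeTable`:

```python
def needs_prewarm(warm_throughput, expected_peak_reads, expected_peak_writes):
    """Return True if the expected peak exceeds what the table can
    instantaneously support, meaning a pre-warm is advisable.

    warm_throughput is the WarmThroughput structure from DescribeTable, e.g.
    {"ReadUnitsPerSecond": 12000, "WriteUnitsPerSecond": 4000, "Status": "ACTIVE"}.
    """
    return (expected_peak_reads > warm_throughput["ReadUnitsPerSecond"]
            or expected_peak_writes > warm_throughput["WriteUnitsPerSecond"])
```

If this returns True for an upcoming event, increase the warm throughput values ahead of time so the table can absorb the spike from the first request.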

You can increase the warm throughput value for read operations, write operations, or both. You can increase this value for new and existing single-Region tables, global tables, and GSIs. For global tables, this feature is available for [version 2019.11.21 (Current)](GlobalTables.md), and the warm throughput settings you set will automatically apply to all replica tables in the global table. There is no limit to the number of DynamoDB tables you can pre-warm at any time. The time to complete pre-warming depends on the values you set and the size of the table or index. You can submit simultaneous pre-warm requests and these requests will not interfere with any table operations. You can pre-warm your table up to the table or index quota limit for your account in that Region. Use the [Service Quotas console](https://console.aws.amazon.com/servicequotas) to check your current limits and increase them if needed. 

Warm throughput values are available by default for all tables and secondary indexes at no cost. However, if you proactively increase these default warm throughput values to pre-warm the tables, you will be charged for those requests. For more information, see [Amazon DynamoDB pricing](https://aws.amazon.com/dynamodb/pricing/).

For more information about warm throughput, see the topics below:

**Topics**
+ [

# Check your DynamoDB table's current warm throughput
](check-warm-throughput.md)
+ [

# Increase your existing DynamoDB table's warm throughput
](update-warm-throughput.md)
+ [

# Create a new DynamoDB table with higher warm throughput
](create-table-warm-throughput.md)
+ [

# Understanding DynamoDB warm throughput in different scenarios
](warm-throughput-scenarios.md)

# Check your DynamoDB table's current warm throughput
<a name="check-warm-throughput"></a>

Use the following AWS Management Console and AWS CLI instructions to check your table's or index's current warm throughput value.

## AWS Management Console
<a name="warm-throughput-check-console"></a>

To check your DynamoDB table's warm throughput using the DynamoDB console:

1. Sign in to the AWS Management Console and open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/](https://console.aws.amazon.com/dynamodb/).

1. In the left navigation pane, choose **Tables**.

1. On the **Tables** page, choose your desired table.

1. Select **Additional settings** to view your current warm throughput value. This value is shown as read units per second and write units per second.

## AWS CLI
<a name="warm-throughput-check-CLI"></a>

The following AWS CLI example shows you how to check your DynamoDB table's warm throughput.

1. Run the `describe-table` operation on your DynamoDB table.

   ```
   aws dynamodb describe-table --table-name GameScores
   ```

1. You’ll receive a response similar to the one below. Your `WarmThroughput` settings will be displayed as `ReadUnitsPerSecond` and `WriteUnitsPerSecond`. The `Status` will be `UPDATING` when the warm throughput value is being updated, and `ACTIVE` when the new warm throughput value is set.

   ```
   {
       "Table": {
           "AttributeDefinitions": [
               {
                   "AttributeName": "GameTitle",
                   "AttributeType": "S"
               },
               {
                   "AttributeName": "TopScore",
                   "AttributeType": "N"
               },
               {
                   "AttributeName": "UserId",
                   "AttributeType": "S"
               }
           ],
           "TableName": "GameScores",
           "KeySchema": [
               {
                   "AttributeName": "UserId",
                   "KeyType": "HASH"
               },
               {
                   "AttributeName": "GameTitle",
                   "KeyType": "RANGE"
               }
           ],
           "TableStatus": "ACTIVE",
           "CreationDateTime": 1726128388.729,
           "ProvisionedThroughput": {
               "NumberOfDecreasesToday": 0,
               "ReadCapacityUnits": 0,
               "WriteCapacityUnits": 0
           },
           "TableSizeBytes": 0,
           "ItemCount": 0,
           "TableArn": "arn:aws:dynamodb:us-east-1:XXXXXXXXXXXX:table/GameScores",
           "TableId": "XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
           "BillingModeSummary": {
               "BillingMode": "PAY_PER_REQUEST",
               "LastUpdateToPayPerRequestDateTime": 1726128388.729
           },
           "GlobalSecondaryIndexes": [
               {
                   "IndexName": "GameTitleIndex",
                   "KeySchema": [
                       {
                           "AttributeName": "GameTitle",
                           "KeyType": "HASH"
                       },
                       {
                           "AttributeName": "TopScore",
                           "KeyType": "RANGE"
                       }
                   ],
                   "Projection": {
                       "ProjectionType": "INCLUDE",
                       "NonKeyAttributes": [
                           "UserId"
                       ]
                   },
                   "IndexStatus": "ACTIVE",
                   "ProvisionedThroughput": {
                       "NumberOfDecreasesToday": 0,
                       "ReadCapacityUnits": 0,
                       "WriteCapacityUnits": 0
                   },
                   "IndexSizeBytes": 0,
                   "ItemCount": 0,
                   "IndexArn": "arn:aws:dynamodb:us-east-1:XXXXXXXXXXXX:table/GameScores/index/GameTitleIndex",
                   "WarmThroughput": {
                       "ReadUnitsPerSecond": 12000,
                       "WriteUnitsPerSecond": 4000,
                       "Status": "ACTIVE"
                   }
               }
           ],
           "DeletionProtectionEnabled": false,
           "WarmThroughput": {
               "ReadUnitsPerSecond": 12000,
               "WriteUnitsPerSecond": 4000,
               "Status": "ACTIVE"
           }
       }
   }
   ```
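
If you check warm throughput programmatically, the same values can be read out of the `DescribeTable` response. The sketch below parses the table-level and per-GSI `WarmThroughput` structures; the boto3 call is shown as a comment (the table name is an example, and running it requires AWS credentials):

```python
def get_warm_throughput(response):
    """Extract the table-level WarmThroughput and a dict of per-GSI
    WarmThroughput structures from a DescribeTable response."""
    table = response["Table"]
    gsis = {
        gsi["IndexName"]: gsi.get("WarmThroughput")
        for gsi in table.get("GlobalSecondaryIndexes", [])
    }
    return table.get("WarmThroughput"), gsis

# Example usage with boto3:
# import boto3
# client = boto3.client("dynamodb", region_name="us-east-1")
# table_wt, gsi_wt = get_warm_throughput(client.describe_table(TableName="GameScores"))
```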

# Increase your existing DynamoDB table's warm throughput
<a name="update-warm-throughput"></a>

Once you've checked your DynamoDB table's current warm throughput value, you can update it with the following steps:

## AWS Management Console
<a name="warm-throughput-update-console"></a>

To update your DynamoDB table's warm throughput value using the DynamoDB console:

1. Sign in to the AWS Management Console and open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/](https://console.aws.amazon.com/dynamodb/).

1. In the left navigation pane, choose **Tables**.

1. On the **Tables** page, choose your desired table.

1. In the **Warm throughput** field, select **Edit**.

1. On the **Edit warm throughput** page, choose **Increase warm throughput**.

1. Adjust the **read units per second** and **write units per second**. These two settings define the throughput your table can instantly handle.

1. Select **Save**.

1. Your **read units per second** and **write units per second** will be updated in the **Warm throughput** field when the request finishes processing.
**Note**  
Updating your warm throughput value is an asynchronous task. The `Status` will change from `UPDATING` to `ACTIVE` when the update is complete.

## AWS CLI
<a name="warm-throughput-update-CLI"></a>

The following AWS CLI example shows you how to update your DynamoDB table's warm throughput value.

1. Run the `update-table` operation on your DynamoDB table.

   ```
   aws dynamodb update-table \
       --table-name GameScores \
       --warm-throughput ReadUnitsPerSecond=12345,WriteUnitsPerSecond=4567 \
       --global-secondary-index-updates \
           "[
               {
                   \"Update\": {
                       \"IndexName\": \"GameTitleIndex\",
                       \"WarmThroughput\": {
                           \"ReadUnitsPerSecond\": 88,
                           \"WriteUnitsPerSecond\": 77
                       }
                   }
               }
           ]" \
       --region us-east-1
   ```

1. You’ll receive a response similar to the one below. Your `WarmThroughput` settings will be displayed as `ReadUnitsPerSecond` and `WriteUnitsPerSecond`. The `Status` will be `UPDATING` when the warm throughput value is being updated, and `ACTIVE` when the new warm throughput value is set.

   ```
   {
       "TableDescription": {
           "AttributeDefinitions": [
               {
                   "AttributeName": "GameTitle",
                   "AttributeType": "S"
               },
               {
                   "AttributeName": "TopScore",
                   "AttributeType": "N"
               },
               {
                   "AttributeName": "UserId",
                   "AttributeType": "S"
               }
           ],
           "TableName": "GameScores",
           "KeySchema": [
               {
                   "AttributeName": "UserId",
                   "KeyType": "HASH"
               },
               {
                   "AttributeName": "GameTitle",
                   "KeyType": "RANGE"
               }
           ],
           "TableStatus": "ACTIVE",
           "CreationDateTime": 1730242189.965,
           "ProvisionedThroughput": {
               "NumberOfDecreasesToday": 0,
               "ReadCapacityUnits": 20,
               "WriteCapacityUnits": 10
           },
           "TableSizeBytes": 0,
           "ItemCount": 0,
           "TableArn": "arn:aws:dynamodb:us-east-1:XXXXXXXXXXXX:table/GameScores",
           "TableId": "XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
           "GlobalSecondaryIndexes": [
               {
                   "IndexName": "GameTitleIndex",
                   "KeySchema": [
                       {
                           "AttributeName": "GameTitle",
                           "KeyType": "HASH"
                       },
                       {
                           "AttributeName": "TopScore",
                           "KeyType": "RANGE"
                       }
                   ],
                   "Projection": {
                       "ProjectionType": "INCLUDE",
                       "NonKeyAttributes": [
                           "UserId"
                       ]
                   },
                   "IndexStatus": "ACTIVE",
                   "ProvisionedThroughput": {
                       "NumberOfDecreasesToday": 0,
                       "ReadCapacityUnits": 50,
                       "WriteCapacityUnits": 25
                   },
                   "IndexSizeBytes": 0,
                   "ItemCount": 0,
                   "IndexArn": "arn:aws:dynamodb:us-east-1:XXXXXXXXXXXX:table/GameScores/index/GameTitleIndex",
                   "WarmThroughput": {
                       "ReadUnitsPerSecond": 50,
                       "WriteUnitsPerSecond": 25,
                       "Status": "UPDATING"
                   }
               }
           ],
           "DeletionProtectionEnabled": false,
           "WarmThroughput": {
               "ReadUnitsPerSecond": 12300,
               "WriteUnitsPerSecond": 4500,
               "Status": "UPDATING"
           }
       }
   }
   ```
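
Because the update is asynchronous, you may want to poll `DescribeTable` until the warm throughput `Status` returns to `ACTIVE` before directing peak traffic at the table. A minimal sketch follows; the client is any boto3 DynamoDB client, and the polling interval and timeout are arbitrary choices, not service recommendations:

```python
import time

def wait_for_warm_throughput_active(client, table_name, interval=10, timeout=3600):
    """Poll DescribeTable until the table's warm throughput Status is
    ACTIVE, or raise TimeoutError when the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        table = client.describe_table(TableName=table_name)["Table"]
        if table["WarmThroughput"]["Status"] == "ACTIVE":
            return
        time.sleep(interval)
    raise TimeoutError(
        f"Warm throughput update for {table_name} did not complete in {timeout}s")
```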

## AWS SDK
<a name="warm-throughput-update-SDK"></a>

The following SDK examples show you how to update your DynamoDB table's warm throughput value.

------
#### [ Java ]

```
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.DynamoDbException;
import software.amazon.awssdk.services.dynamodb.model.GlobalSecondaryIndexUpdate;
import software.amazon.awssdk.services.dynamodb.model.UpdateGlobalSecondaryIndexAction;
import software.amazon.awssdk.services.dynamodb.model.UpdateTableRequest;
import software.amazon.awssdk.services.dynamodb.model.WarmThroughput;

...
public static WarmThroughput buildWarmThroughput(final Long readUnitsPerSecond,
                                                 final Long writeUnitsPerSecond) {
    return WarmThroughput.builder()
            .readUnitsPerSecond(readUnitsPerSecond)
            .writeUnitsPerSecond(writeUnitsPerSecond)
            .build();
}

public static void updateDynamoDBTable(DynamoDbClient ddb,
                                       String tableName,
                                       Long tableReadUnitsPerSecond,
                                       Long tableWriteUnitsPerSecond,
                                       String globalSecondaryIndexName,
                                       Long globalSecondaryIndexReadUnitsPerSecond,
                                       Long globalSecondaryIndexWriteUnitsPerSecond) {

    final WarmThroughput tableWarmThroughput = buildWarmThroughput(tableReadUnitsPerSecond, tableWriteUnitsPerSecond);
    final WarmThroughput gsiWarmThroughput = buildWarmThroughput(globalSecondaryIndexReadUnitsPerSecond, globalSecondaryIndexWriteUnitsPerSecond);

    final GlobalSecondaryIndexUpdate globalSecondaryIndexUpdate = GlobalSecondaryIndexUpdate.builder()
            .update(UpdateGlobalSecondaryIndexAction.builder()
                    .indexName(globalSecondaryIndexName)
                    .warmThroughput(gsiWarmThroughput)
                    .build()
            ).build();

    final UpdateTableRequest request = UpdateTableRequest.builder()
            .tableName(tableName)
            .globalSecondaryIndexUpdates(globalSecondaryIndexUpdate)
            .warmThroughput(tableWarmThroughput)
            .build();

    try {
        ddb.updateTable(request);
    } catch (DynamoDbException e) {
        System.err.println(e.getMessage());
        System.exit(1);
    }

    System.out.println("Done!");
}
```

------
#### [ Python ]

```
from boto3 import client
from botocore.exceptions import ClientError

def update_dynamodb_table_warm_throughput(table_name, table_read_units, table_write_units, gsi_name, gsi_read_units, gsi_write_units, region_name="us-east-1"):
    """
    Updates the warm throughput of a DynamoDB table and a global secondary index.

    :param table_name: The name of the table to update.
    :param table_read_units: The new read units per second for the table's warm throughput.
    :param table_write_units: The new write units per second for the table's warm throughput.
    :param gsi_name: The name of the global secondary index to update.
    :param gsi_read_units: The new read units per second for the GSI's warm throughput.
    :param gsi_write_units: The new write units per second for the GSI's warm throughput.
    :param region_name: The AWS Region to target. Defaults to us-east-1.
    """
    try:
        # Use the low-level client; it exposes update_table.
        ddb = client('dynamodb', region_name=region_name)

        # The table's new warm throughput values
        table_warm_throughput = {
            "ReadUnitsPerSecond": table_read_units,
            "WriteUnitsPerSecond": table_write_units
        }

        # The global secondary index's new warm throughput values
        gsi_warm_throughput = {
            "ReadUnitsPerSecond": gsi_read_units,
            "WriteUnitsPerSecond": gsi_write_units
        }

        # Construct the global secondary index update
        global_secondary_index_update = [
            {
                "Update": {
                    "IndexName": gsi_name,
                    "WarmThroughput": gsi_warm_throughput
                }
            }
        ]

        # Construct the update table request
        update_table_request = {
            "TableName": table_name,
            "GlobalSecondaryIndexUpdates": global_secondary_index_update,
            "WarmThroughput": table_warm_throughput
        }

        # Update the table
        ddb.update_table(**update_table_request)
        print("Table updated successfully!")
    except ClientError as e:
        print(f"Error updating table: {e}")
        raise e
```

------
#### [ Javascript ]

```
import { DynamoDBClient, UpdateTableCommand } from "@aws-sdk/client-dynamodb";

async function updateDynamoDBTableWarmThroughput(
  tableName,
  tableReadUnits,
  tableWriteUnits,
  gsiName,
  gsiReadUnits,
  gsiWriteUnits,
  region = "us-east-1"
) {
  try {
    const ddbClient = new DynamoDBClient({ region: region });

    // Construct the update table request
    const updateTableRequest = {
      TableName: tableName,
      GlobalSecondaryIndexUpdates: [
        {
            Update: {
                IndexName: gsiName,
                WarmThroughput: {
                    ReadUnitsPerSecond: gsiReadUnits,
                    WriteUnitsPerSecond: gsiWriteUnits,
                },
            },
        },
      ],
      WarmThroughput: {
          ReadUnitsPerSecond: tableReadUnits,
          WriteUnitsPerSecond: tableWriteUnits,
      },
    };

    const command = new UpdateTableCommand(updateTableRequest);
    const response = await ddbClient.send(command);
    console.log("Table updated successfully!", response);
  } catch (error) {
    console.error(`Error updating table: ${error}`);
    throw error;
  }
}
```

------

# Create a new DynamoDB table with higher warm throughput
<a name="create-table-warm-throughput"></a>

You can adjust the warm throughput values when you create your DynamoDB table by following the steps below. These steps also apply when creating a [global table](GlobalTables.md) or [secondary index](SecondaryIndexes.md).

## AWS Management Console
<a name="warm-throughput-create-console"></a>

To create a DynamoDB table and adjust the warm throughput values through the console:

1. Sign in to the AWS Management Console and open the DynamoDB console at [https://console.aws.amazon.com/dynamodb/](https://console.aws.amazon.com/dynamodb/).

1. Select **Create table**.

1. Choose a **Table name**, **Partition key**, and **Sort key (optional)**.

1. For **Table settings**, choose **Customize settings**.

1. In the **Warm throughput** field, choose **Increase warm throughput**.

1. Adjust the **read units per second** and **write units per second**. These two settings define the maximum throughput your table can instantly handle.

1. Continue adding any remaining table details and then select **Create table**.

## AWS CLI
<a name="warm-throughput-create-CLI"></a>

The following AWS CLI example shows you how to create a DynamoDB table with customized warm throughput values.

1. Run the `create-table` operation to create the following DynamoDB table.

   ```
   aws dynamodb create-table \
       --table-name GameScores \
       --attribute-definitions AttributeName=UserId,AttributeType=S \
                               AttributeName=GameTitle,AttributeType=S \
                               AttributeName=TopScore,AttributeType=N  \
       --key-schema AttributeName=UserId,KeyType=HASH \
                    AttributeName=GameTitle,KeyType=RANGE \
       --provisioned-throughput ReadCapacityUnits=20,WriteCapacityUnits=10 \
       --global-secondary-indexes \
           "[
               {
                   \"IndexName\": \"GameTitleIndex\",
                   \"KeySchema\": [{\"AttributeName\":\"GameTitle\",\"KeyType\":\"HASH\"},
                                   {\"AttributeName\":\"TopScore\",\"KeyType\":\"RANGE\"}],
                   \"Projection\":{
                       \"ProjectionType\":\"INCLUDE\",
                       \"NonKeyAttributes\":[\"UserId\"]
                   },
                   \"ProvisionedThroughput\": {
                       \"ReadCapacityUnits\": 50,
                       \"WriteCapacityUnits\": 25
                   },\"WarmThroughput\": {
                       \"ReadUnitsPerSecond\": 1987,
                       \"WriteUnitsPerSecond\": 543
                   }
               }
           ]" \
       --warm-throughput ReadUnitsPerSecond=12345,WriteUnitsPerSecond=4567 \
       --region us-east-1
   ```

1. You’ll receive a response similar to the one below. Your `WarmThroughput` settings will be displayed as `ReadUnitsPerSecond` and `WriteUnitsPerSecond`. The `Status` will be `UPDATING` when the warm throughput value is being updated, and `ACTIVE` when the new warm throughput value is set.

   ```
   {
       "TableDescription": {
           "AttributeDefinitions": [
               {
                   "AttributeName": "GameTitle",
                   "AttributeType": "S"
               },
               {
                   "AttributeName": "TopScore",
                   "AttributeType": "N"
               },
               {
                   "AttributeName": "UserId",
                   "AttributeType": "S"
               }
           ],
           "TableName": "GameScores",
           "KeySchema": [
               {
                   "AttributeName": "UserId",
                   "KeyType": "HASH"
               },
               {
                   "AttributeName": "GameTitle",
                   "KeyType": "RANGE"
               }
           ],
           "TableStatus": "CREATING",
           "CreationDateTime": 1730241788.779,
           "ProvisionedThroughput": {
               "NumberOfDecreasesToday": 0,
               "ReadCapacityUnits": 20,
               "WriteCapacityUnits": 10
           },
           "TableSizeBytes": 0,
           "ItemCount": 0,
           "TableArn": "arn:aws:dynamodb:us-east-1:XXXXXXXXXXXX:table/GameScores",
           "TableId": "XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
           "GlobalSecondaryIndexes": [
               {
                   "IndexName": "GameTitleIndex",
                   "KeySchema": [
                       {
                           "AttributeName": "GameTitle",
                           "KeyType": "HASH"
                       },
                       {
                           "AttributeName": "TopScore",
                           "KeyType": "RANGE"
                       }
                   ],
                   "Projection": {
                       "ProjectionType": "INCLUDE",
                       "NonKeyAttributes": [
                           "UserId"
                       ]
                   },
                   "IndexStatus": "CREATING",
                   "ProvisionedThroughput": {
                       "NumberOfDecreasesToday": 0,
                       "ReadCapacityUnits": 50,
                       "WriteCapacityUnits": 25
                   },
                   "IndexSizeBytes": 0,
                   "ItemCount": 0,
                   "IndexArn": "arn:aws:dynamodb:us-east-1:XXXXXXXXXXXX:table/GameScores/index/GameTitleIndex",
                   "WarmThroughput": {
                       "ReadUnitsPerSecond": 1987,
                       "WriteUnitsPerSecond": 543,
                       "Status": "UPDATING"
                   }
               }
           ],
           "DeletionProtectionEnabled": false,
           "WarmThroughput": {
               "ReadUnitsPerSecond": 12345,
               "WriteUnitsPerSecond": 4567,
               "Status": "UPDATING"
           }
       }
   }
   ```

## AWS SDK
<a name="warm-throughput-create-SDK"></a>

The following SDK examples show you how to create a DynamoDB table with customized warm throughput values.

------
#### [ Java ]

```
import software.amazon.awssdk.services.dynamodb.model.ProjectionType;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.CreateTableResponse;
import software.amazon.awssdk.services.dynamodb.model.CreateTableRequest;
import software.amazon.awssdk.services.dynamodb.model.KeySchemaElement;
import software.amazon.awssdk.services.dynamodb.model.KeyType;
import software.amazon.awssdk.services.dynamodb.model.ProvisionedThroughput;
import software.amazon.awssdk.services.dynamodb.model.Projection;
import software.amazon.awssdk.services.dynamodb.model.GlobalSecondaryIndex;
import software.amazon.awssdk.services.dynamodb.model.AttributeDefinition;
import software.amazon.awssdk.services.dynamodb.model.ScalarAttributeType;
import software.amazon.awssdk.services.dynamodb.model.WarmThroughput;
...

public static WarmThroughput buildWarmThroughput(final Long readUnitsPerSecond,
                                                 final Long writeUnitsPerSecond) {
    return WarmThroughput.builder()
            .readUnitsPerSecond(readUnitsPerSecond)
            .writeUnitsPerSecond(writeUnitsPerSecond)
            .build();
}
private static AttributeDefinition buildAttributeDefinition(final String attributeName, 
                                                            final ScalarAttributeType scalarAttributeType) {
    return AttributeDefinition.builder()
            .attributeName(attributeName)
            .attributeType(scalarAttributeType)
            .build();
}
private static KeySchemaElement buildKeySchemaElement(final String attributeName, 
                                                      final KeyType keyType) {
    return KeySchemaElement.builder()
            .attributeName(attributeName)
            .keyType(keyType)
            .build();
}
public static void createDynamoDBTable(DynamoDbClient ddb,
                                       String tableName,
                                       String partitionKey,
                                       String sortKey,
                                       String miscellaneousKeyAttribute,
                                       String nonKeyAttribute,
                                       Long tableReadCapacityUnits,
                                       Long tableWriteCapacityUnits,
                                       Long tableWarmReadUnitsPerSecond,
                                       Long tableWarmWriteUnitsPerSecond,
                                       String globalSecondaryIndexName,
                                       Long globalSecondaryIndexReadCapacityUnits,
                                       Long globalSecondaryIndexWriteCapacityUnits,
                                       Long globalSecondaryIndexWarmReadUnitsPerSecond,
                                       Long globalSecondaryIndexWarmWriteUnitsPerSecond) {

    // Define the table attributes
    final AttributeDefinition partitionKeyAttribute = buildAttributeDefinition(partitionKey, ScalarAttributeType.S);
    final AttributeDefinition sortKeyAttribute = buildAttributeDefinition(sortKey, ScalarAttributeType.S);
    final AttributeDefinition miscellaneousKeyAttributeDefinition = buildAttributeDefinition(miscellaneousKeyAttribute, ScalarAttributeType.N);
    final AttributeDefinition[] attributeDefinitions = {partitionKeyAttribute, sortKeyAttribute, miscellaneousKeyAttributeDefinition};

    // Define the table key schema
    final KeySchemaElement partitionKeyElement = buildKeySchemaElement(partitionKey, KeyType.HASH);
    final KeySchemaElement sortKeyElement = buildKeySchemaElement(sortKey, KeyType.RANGE);
    final KeySchemaElement[] keySchema = {partitionKeyElement, sortKeyElement};

    // Define the provisioned throughput for the table
    final ProvisionedThroughput provisionedThroughput = ProvisionedThroughput.builder()
            .readCapacityUnits(tableReadCapacityUnits)
            .writeCapacityUnits(tableWriteCapacityUnits)
            .build();

    // Define the Global Secondary Index (GSI)
    final KeySchemaElement globalSecondaryIndexPartitionKeyElement = buildKeySchemaElement(sortKey, KeyType.HASH);
    final KeySchemaElement globalSecondaryIndexSortKeyElement = buildKeySchemaElement(miscellaneousKeyAttribute, KeyType.RANGE);
    final KeySchemaElement[] gsiKeySchema = {globalSecondaryIndexPartitionKeyElement, globalSecondaryIndexSortKeyElement};

    final Projection gsiProjection = Projection.builder()
            .projectionType(String.valueOf(ProjectionType.INCLUDE))
            .nonKeyAttributes(nonKeyAttribute)
            .build();
    final ProvisionedThroughput gsiProvisionedThroughput = ProvisionedThroughput.builder()
            .readCapacityUnits(globalSecondaryIndexReadCapacityUnits)
            .writeCapacityUnits(globalSecondaryIndexWriteCapacityUnits)
            .build();
    // Define the warm throughput for the Global Secondary Index (GSI)
    final WarmThroughput gsiWarmThroughput = buildWarmThroughput(globalSecondaryIndexWarmReadUnitsPerSecond, globalSecondaryIndexWarmWriteUnitsPerSecond);
    final GlobalSecondaryIndex globalSecondaryIndex = GlobalSecondaryIndex.builder()
            .indexName(globalSecondaryIndexName)
            .keySchema(gsiKeySchema)
            .projection(gsiProjection)
            .provisionedThroughput(gsiProvisionedThroughput)
            .warmThroughput(gsiWarmThroughput)
            .build();

    // Define the warm throughput for the table
    final WarmThroughput tableWarmThroughput = buildWarmThroughput(tableWarmReadUnitsPerSecond, tableWarmWriteUnitsPerSecond);

    final CreateTableRequest request = CreateTableRequest.builder()
            .tableName(tableName)
            .attributeDefinitions(attributeDefinitions)
            .keySchema(keySchema)
            .provisionedThroughput(provisionedThroughput)
            .globalSecondaryIndexes(globalSecondaryIndex)
            .warmThroughput(tableWarmThroughput)
            .build();

    CreateTableResponse response = ddb.createTable(request);
    System.out.println(response);
}
```

------
#### [ Python ]

```
from boto3 import resource
from botocore.exceptions import ClientError

def create_dynamodb_table_warm_throughput(table_name, partition_key, sort_key, misc_key_attr, non_key_attr, table_provisioned_read_units, table_provisioned_write_units, table_warm_reads, table_warm_writes, gsi_name, gsi_provisioned_read_units, gsi_provisioned_write_units, gsi_warm_reads, gsi_warm_writes, region_name="us-east-1"):
    """
    Creates a DynamoDB table with a warm throughput setting configured.

    :param table_name: The name of the table to be created.
    :param partition_key: The partition key for the table being created.
    :param sort_key: The sort key for the table being created.
    :param misc_key_attr: A miscellaneous key attribute for the table being created.
    :param non_key_attr: A non-key attribute for the table being created.
    :param table_provisioned_read_units: The newly created table's provisioned read capacity units.
    :param table_provisioned_write_units: The newly created table's provisioned write capacity units.
    :param table_warm_reads: The read units per second setting for the table's warm throughput.
    :param table_warm_writes: The write units per second setting for the table's warm throughput.
    :param gsi_name: The name of the Global Secondary Index (GSI) to be created on the table.
    :param gsi_provisioned_read_units: The configured Global Secondary Index (GSI) provisioned read capacity units.
    :param gsi_provisioned_write_units: The configured Global Secondary Index (GSI) provisioned write capacity units.
    :param gsi_warm_reads: The read units per second setting for the Global Secondary Index (GSI)'s warm throughput.
    :param gsi_warm_writes: The write units per second setting for the Global Secondary Index (GSI)'s warm throughput.
    :param region_name: The AWS Region to target. Defaults to us-east-1.
    """
    try:
        ddb = resource('dynamodb', region_name)
        
        # Define the table attributes
        attribute_definitions = [
            { "AttributeName": partition_key, "AttributeType": "S" },
            { "AttributeName": sort_key, "AttributeType": "S" },
            { "AttributeName": misc_key_attr, "AttributeType": "N" }
        ]
        
        # Define the table key schema
        key_schema = [
            { "AttributeName": partition_key, "KeyType": "HASH" },
            { "AttributeName": sort_key, "KeyType": "RANGE" }
        ]
        
        # Define the provisioned throughput for the table
        provisioned_throughput = {
            "ReadCapacityUnits": table_provisioned_read_units,
            "WriteCapacityUnits": table_provisioned_write_units
        }
        
        # Define the global secondary index
        gsi_key_schema = [
            { "AttributeName": sort_key, "KeyType": "HASH" },
            { "AttributeName": misc_key_attr, "KeyType": "RANGE" }
        ]
        gsi_projection = {
            "ProjectionType": "INCLUDE",
            "NonKeyAttributes": [non_key_attr]
        }
        gsi_provisioned_throughput = {
            "ReadCapacityUnits": gsi_provisioned_read_units,
            "WriteCapacityUnits": gsi_provisioned_write_units
        }
        gsi_warm_throughput = {
            "ReadUnitsPerSecond": gsi_warm_reads,
            "WriteUnitsPerSecond": gsi_warm_writes
        }
        global_secondary_indexes = [
            {
                "IndexName": gsi_name,
                "KeySchema": gsi_key_schema,
                "Projection": gsi_projection,
                "ProvisionedThroughput": gsi_provisioned_throughput,
                "WarmThroughput": gsi_warm_throughput
            }
        ]
        
        # Define the warm throughput for the table
        warm_throughput = {
            "ReadUnitsPerSecond": table_warm_reads,
            "WriteUnitsPerSecond": table_warm_writes
        }
        
        # Create the DynamoDB client and create the table
        response = ddb.create_table(
            TableName=table_name,
            AttributeDefinitions=attribute_definitions,
            KeySchema=key_schema,
            ProvisionedThroughput=provisioned_throughput,
            GlobalSecondaryIndexes=global_secondary_indexes,
            WarmThroughput=warm_throughput
        )
        
        print(response)
    except ClientError as e:
        print(f"Error creating table: {e}")
        raise e
```

------
#### [ Javascript ]

```
import { DynamoDBClient, CreateTableCommand } from "@aws-sdk/client-dynamodb";

async function createDynamoDBTableWithWarmThroughput(
  tableName,
  partitionKey,
  sortKey,
  miscKeyAttr,
  nonKeyAttr,
  tableProvisionedReadUnits,
  tableProvisionedWriteUnits,
  tableWarmReads,
  tableWarmWrites,
  indexName,
  indexProvisionedReadUnits,
  indexProvisionedWriteUnits,
  indexWarmReads,
  indexWarmWrites,
  region = "us-east-1"
) {
  try {
    const ddbClient = new DynamoDBClient({ region: region });
    const command = new CreateTableCommand({
      TableName: tableName,
      AttributeDefinitions: [
          { AttributeName: partitionKey, AttributeType: "S" },
          { AttributeName: sortKey, AttributeType: "S" },
          { AttributeName: miscKeyAttr, AttributeType: "N" },
      ],
      KeySchema: [
          { AttributeName: partitionKey, KeyType: "HASH" },
          { AttributeName: sortKey, KeyType: "RANGE" },
      ],
      ProvisionedThroughput: {
          ReadCapacityUnits: tableProvisionedReadUnits,
          WriteCapacityUnits: tableProvisionedWriteUnits,
      },
      WarmThroughput: {
          ReadUnitsPerSecond: tableWarmReads,
          WriteUnitsPerSecond: tableWarmWrites,
      },
      GlobalSecondaryIndexes: [
          {
            IndexName: indexName,
            KeySchema: [
                { AttributeName: sortKey, KeyType: "HASH" },
                { AttributeName: miscKeyAttr, KeyType: "RANGE" },
            ],
            Projection: {
                ProjectionType: "INCLUDE",
                NonKeyAttributes: [nonKeyAttr],
            },
            ProvisionedThroughput: {
                ReadCapacityUnits: indexProvisionedReadUnits,
                WriteCapacityUnits: indexProvisionedWriteUnits,
            },
            WarmThroughput: {
                ReadUnitsPerSecond: indexWarmReads,
                WriteUnitsPerSecond: indexWarmWrites,
            },
          },
      ],
    });
    const response = await ddbClient.send(command);
    console.log(response);
  } catch (error) {
    console.error(`Error creating table: ${error}`);
    throw error;
  }
}
```

------

# Understanding DynamoDB warm throughput in different scenarios
<a name="warm-throughput-scenarios"></a>

Here are some different scenarios you might encounter when working with DynamoDB warm throughput.

**Topics**
+ [

## Warm throughput and uneven access patterns
](#warm-throughput-scenarios-uneven)
+ [

## Warm throughput for a provisioned table
](#warm-throughput-scenarios-provisioned)
+ [

## Warm throughput for an on-demand table
](#warm-throughput-scenarios-ondemand)
+ [

## Warm throughput for an on-demand table with maximum throughput configured
](#warm-throughput-scenarios-max)

## Warm throughput and uneven access patterns
<a name="warm-throughput-scenarios-uneven"></a>

A table might have a warm throughput of 30,000 read units per second and 10,000 write units per second, but you could still experience throttling on reads or writes before hitting those values. This is likely due to a hot partition. While DynamoDB can keep scaling to support virtually unlimited throughput, each individual partition is limited to 1,000 write units per second and 3,000 read units per second. If your application drives too much traffic to a small portion of the table’s partitions, throttling can occur even before you reach the table's warm throughput values. We recommend following [DynamoDB best practices](bp-partition-key-design.md) to ensure seamless scalability and avoid hot partitions.
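As a rough illustration of these per-partition limits, the following Python sketch (a hypothetical helper, not part of any SDK, with made-up traffic numbers) flags partitions whose traffic exceeds the hard per-partition caps:

```python
# Per-partition maximums documented above.
PARTITION_MAX_READS = 3_000   # read units/sec per partition
PARTITION_MAX_WRITES = 1_000  # write units/sec per partition

def throttled_partitions(per_partition_reads, per_partition_writes):
    """Return indices of partitions whose traffic exceeds the per-partition caps."""
    return [
        i for i, (r, w) in enumerate(zip(per_partition_reads, per_partition_writes))
        if r > PARTITION_MAX_READS or w > PARTITION_MAX_WRITES
    ]

# Table-level warm throughput may be 30,000 reads/sec, but if nearly all
# traffic lands on partition 0, that partition throttles at 3,000 reads/sec
# even though the table could sustain far more if the load were spread out.
reads = [28_000, 500, 500, 500]
writes = [200, 200, 200, 200]
print(throttled_partitions(reads, writes))  # [0]
```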

## Warm throughput for a provisioned table
<a name="warm-throughput-scenarios-provisioned"></a>

Consider a provisioned table that has a warm throughput of 30,000 read units per second and 10,000 write units per second but currently has a provisioned throughput of 4,000 RCU and 8,000 WCU. You can instantly scale the table's provisioned throughput up to 30,000 RCU or 10,000 WCU by updating your provisioned throughput settings. As you increase the provisioned throughput beyond these values, the warm throughput will automatically adjust to the new higher values, because you have established a new peak throughput. For example, if you set the provisioned throughput to 50,000 RCU, the warm throughput will increase to 50,000 read units per second.

```
"ProvisionedThroughput": 
    {
        "ReadCapacityUnits": 4000,
        "WriteCapacityUnits": 8000 
    }
"WarmThroughput": 
    { 
        "ReadUnitsPerSecond": 30000,
        "WriteUnitsPerSecond": 10000
    }
```
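The ratcheting behavior described above can be sketched in one line: warm throughput never decreases, and provisioning above it establishes a new peak. This is a simplified model of the documented behavior, not an API call:

```python
def warm_after_provisioned_update(current_warm, new_provisioned):
    """Warm throughput stays at its current value until provisioned
    capacity is raised above it, which sets a new peak (simplified model)."""
    return max(current_warm, new_provisioned)

# Warm read throughput of 30,000 is unaffected by provisioning 4,000 RCU,
# but provisioning 50,000 RCU raises it to 50,000.
print(warm_after_provisioned_update(30_000, 4_000))   # 30000
print(warm_after_provisioned_update(30_000, 50_000))  # 50000
```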

## Warm throughput for an on-demand table
<a name="warm-throughput-scenarios-ondemand"></a>

A new on-demand table starts with a warm throughput of 12,000 read units per second and 4,000 write units per second. Your table can instantly accommodate sustained traffic up to these levels. When your requests exceed 12,000 read units per second or 4,000 write units per second, the warm throughput will automatically adjust to higher values.

```
"WarmThroughput": 
    { 
        "ReadUnitsPerSecond": 12000,
        "WriteUnitsPerSecond": 4000
    }
```

## Warm throughput for an on-demand table with maximum throughput configured
<a name="warm-throughput-scenarios-max"></a>

Consider an on-demand table with a warm throughput of 30,000 read units per second but with a [maximum throughput](on-demand-capacity-mode-max-throughput.md) configured at 5,000 read request units (RRU). In this scenario, the table's throughput will be limited to the maximum of 5,000 RRU that you set. Any throughput requests exceeding this maximum will be throttled. However, you can modify the table-specific maximum throughput at any time based on your application's needs.

```
"OnDemandThroughput": 
    {
        "MaxReadRequestUnits": 5000,
        "MaxWriteRequestUnits": 4000
    }
"WarmThroughput": 
    { 
        "ReadUnitsPerSecond": 30000,
        "WriteUnitsPerSecond": 10000
    }
```
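In other words, the sustained throughput the table actually delivers is bounded by whichever is lower: its warm throughput or the configured maximum. A minimal sketch of that relationship (an illustrative model, not an SDK helper):

```python
def effective_sustained_reads(warm_reads, max_read_request_units=None):
    """On-demand: requests beyond the configured maximum throughput are
    throttled even when warm throughput is higher (simplified model)."""
    if max_read_request_units is None:
        return warm_reads
    return min(warm_reads, max_read_request_units)

# Warm throughput of 30,000 reads/sec capped by a 5,000 RRU maximum.
print(effective_sustained_reads(30_000, 5_000))  # 5000
```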

# DynamoDB burst and adaptive capacity
<a name="burst-adaptive-capacity"></a>

To minimize throttling because of throughput exceptions, DynamoDB uses *burst capacity* to handle usage spikes. DynamoDB uses *adaptive capacity* to help accommodate uneven access patterns.

## Burst capacity
<a name="burst-capacity"></a>

DynamoDB provides some flexibility for your throughput provisioning with *burst capacity*. Whenever you aren't fully using your available throughput, DynamoDB reserves a portion of that unused capacity for later *bursts* of throughput to handle usage spikes. With burst capacity, unexpected read or write requests can succeed where they otherwise would be throttled.

DynamoDB currently retains up to five minutes (300 seconds) of unused read and write capacity. During an occasional burst of read or write activity, these extra capacity units can be consumed quickly — even faster than the per-second provisioned throughput capacity that you've defined for your table.

DynamoDB can also consume burst capacity for background maintenance and other tasks without prior notice.

Note that these burst capacity details might change in the future.
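The accrual behavior described above can be sketched as a simple token bucket: unused per-second capacity accumulates, capped at 300 seconds' worth, and absorbs short spikes. This is an illustrative model only, not DynamoDB's internal (and changeable) algorithm:

```python
BURST_WINDOW_SECONDS = 300  # DynamoDB currently retains up to 300 seconds of unused capacity

def simulate(provisioned_per_sec, demand_per_sec):
    """Return the number of seconds in which requests would be throttled."""
    burst = 0.0
    throttled_seconds = 0
    for demand in demand_per_sec:
        available = provisioned_per_sec + burst
        if demand > available:
            throttled_seconds += 1
            burst = 0.0  # simplification: a throttled second drains the bucket
        else:
            # Unused capacity this second feeds the burst bucket, capped at
            # 300 seconds' worth of provisioned throughput.
            burst = min(available - demand,
                        provisioned_per_sec * BURST_WINDOW_SECONDS)
    return throttled_seconds

# 100 units/sec provisioned; 60 quiet seconds bank 40 unused units each,
# so a one-second spike of 2,000 units succeeds instead of throttling.
demand = [60] * 60 + [2_000]
print(simulate(100, demand))  # 0
```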

## Adaptive capacity
<a name="adaptive-capacity"></a>

DynamoDB automatically distributes your data across [partitions](HowItWorks.Partitions.md), which are stored on multiple servers in the AWS Cloud. It's not always possible to distribute read and write activity evenly. When data access is imbalanced, a "hot" partition receives a higher volume of read and write traffic than the other partitions. Because read and write operations on a partition are managed independently, throttling occurs if a single partition receives more than 3,000 read operations per second or more than 1,000 write operations per second.

To better accommodate uneven access patterns, DynamoDB adaptive capacity enables your application to continue reading and writing to hot partitions without being throttled, provided that traffic does not exceed your table's total provisioned capacity or the partition maximum capacity. Adaptive capacity works by automatically and instantly increasing throughput capacity for partitions that receive more traffic.

The following diagram illustrates how adaptive capacity works. The example table is provisioned with 400 WCUs evenly shared across four partitions, allowing each partition to sustain up to 100 WCUs per second. Partitions 1, 2, and 3 each receive write traffic of 50 WCU/sec. Partition 4 receives 150 WCU/sec. This hot partition can accept write traffic while it still has unused burst capacity, but eventually it throttles traffic that exceeds 100 WCU/sec.

DynamoDB adaptive capacity responds by increasing the capacity of partition 4 so that it can sustain the higher workload of 150 WCU/sec without being throttled.

![\[Adaptive capacity automatically increases throughput for partition 4 with higher traffic to avoid throttling.\]](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/images/adaptive-capacity.png)


Adaptive capacity is enabled automatically for every DynamoDB table, at no additional cost. You don't need to explicitly enable or disable it.
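Using the numbers from the diagram above, the two conditions adaptive capacity relies on can be checked in a few lines. This is a simplified illustration of the rule, not DynamoDB's actual algorithm:

```python
TABLE_WCU = 400
PARTITIONS = 4
PER_PARTITION_SHARE = TABLE_WCU / PARTITIONS  # 100 WCU/sec each
PARTITION_HARD_MAX = 1_000                    # per-partition write ceiling

traffic = [50, 50, 50, 150]  # WCU/sec per partition (partition 4 is hot)

# Without adaptive capacity, any partition over its even share eventually throttles:
over_share = [t for t in traffic if t > PER_PARTITION_SHARE]

# With adaptive capacity, the hot partition is boosted and succeeds as long as
# total traffic stays within the table's provisioned capacity and every
# partition stays under its hard maximum:
ok = sum(traffic) <= TABLE_WCU and all(t <= PARTITION_HARD_MAX for t in traffic)

print(over_share)  # [150]
print(ok)          # True
```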

### Isolate frequently accessed items
<a name="isolate-frequent-access-items"></a>

If your application drives disproportionately high traffic to one or more items, adaptive capacity rebalances your partitions so that frequently accessed items don't reside on the same partition. This isolation of frequently accessed items reduces the likelihood of request throttling due to your workload exceeding the throughput quota on a single partition. You can also break up an item collection into segments by sort key, as long as traffic to the item collection doesn't follow a monotonically increasing or decreasing sort key.

If your application drives consistently high traffic to a single item, adaptive capacity might rebalance your data so that a partition contains only that single, frequently accessed item. In this case, DynamoDB can deliver throughput up to the partition maximum of 3,000 RCUs and 1,000 WCUs to that single item’s primary key. Adaptive capacity will not split item collections across multiple partitions of the table when there is a [local secondary index](LSI.md) on the table.

# Considerations when switching capacity modes in DynamoDB
<a name="bp-switching-capacity-modes"></a>

When you create a DynamoDB table, you must select either on-demand or provisioned capacity mode.

You can switch tables from provisioned capacity mode to on-demand mode up to four times in a 24-hour rolling window. You can switch tables from on-demand mode to provisioned capacity mode at any time. 

**Topics**
+ [

## Switching from provisioned capacity mode to on-demand capacity mode
](#switch-provisioned-to-ondemand)
+ [

## Switching from on-demand capacity mode to provisioned capacity mode
](#switch-ondemand-to-provisioned)

## Switching from provisioned capacity mode to on-demand capacity mode
<a name="switch-provisioned-to-ondemand"></a>

In provisioned mode, you set read and write capacity based on your expected application needs. When you update a table from provisioned to on-demand mode, you don't need to specify how much read and write throughput you expect your application to perform. DynamoDB on-demand offers simple pay-per-request pricing for read and write requests so that you only pay for what you use, making it easy to balance costs and performance. You can optionally configure maximum read or write (or both) throughput for individual on-demand tables and associated global secondary indexes to help keep costs and usage bounded. For more information about setting maximum throughput for a specific table or index, see [DynamoDB maximum throughput for on-demand tables](on-demand-capacity-mode-max-throughput.md).

When you switch from provisioned capacity mode to on-demand capacity mode, DynamoDB makes several changes to the structure of your table and partitions. This process can take several minutes. During the switching period, your table delivers throughput consistent with the previously provisioned read and write capacity amounts.

### Initial throughput for on-demand capacity mode
<a name="initial-throughput-ondemand-mode"></a>

If you recently switched an existing table to on-demand capacity mode for the first time, the table's initial throughput is based on its previous peak settings, even though the table has not served traffic previously using on-demand capacity mode.

Following are examples of possible scenarios:
+ **Any provisioned table configured below 4,000 WCU and 12,000 RCU that has never been previously provisioned for more.** When you switch this table to on-demand for the first time, DynamoDB will ensure it is scaled out to instantly sustain at least 4,000 write units/sec and 12,000 read units/sec.
+ **A provisioned table configured with 8,000 WCU and 24,000 RCU.** When you switch this table to on-demand, it will continue to be able to sustain at least 8,000 write units/sec and 24,000 read units/sec at any time.
+ **A provisioned table configured with 8,000 WCU and 24,000 RCU, that consumed 6,000 write units/sec and 18,000 read units/sec for a sustained period.** When you switch this table to on-demand, it will continue to be able to sustain at least 8,000 write units/sec and 24,000 read units/sec. The previous traffic may further allow the table to sustain much higher levels of traffic without throttling.
+ **A table previously provisioned with 10,000 WCU and 10,000 RCU, but currently provisioned with 10 WCU and 10 RCU.** When you switch this table to on-demand, it will be able to sustain at least 10,000 write units/sec and 10,000 read units/sec.
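The scenarios above follow a simple rule: the switched table can instantly sustain the greater of its previous peak and the on-demand floor of 4,000 WCU and 12,000 RCU. A minimal sketch of that rule (the function name is illustrative, not a DynamoDB API):

```python
def initial_on_demand_minimums(previous_peak_wcu, previous_peak_rcu):
    """Illustrative only: estimate the minimum throughput an on-demand
    table can instantly sustain after switching from provisioned mode.

    previous_peak_*: the highest WCU/RCU the table was ever provisioned
    for (or sustained), not necessarily its current setting.
    """
    # On-demand tables start with a floor of 4,000 WCU and 12,000 RCU.
    return (max(previous_peak_wcu, 4_000), max(previous_peak_rcu, 12_000))
```

For example, a table whose previous peak was 8,000 WCU and 24,000 RCU keeps those minimums, while a table that never exceeded 100 of each starts at the 4,000/12,000 floor.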

### Auto scaling settings
<a name="autoscaling-settings"></a>

When you update a table from provisioned to on-demand mode:
+ If you're using the console, all of your auto scaling settings (if any) will be deleted.
+ If you're using the AWS CLI or AWS SDK, all of your auto scaling settings will be preserved. These settings can be applied when you update your table to provisioned billing mode again.
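If you switch with the AWS CLI, a single `update-table` call changes the billing mode (the table name below is a placeholder):

```shell
# Switch an existing table from provisioned to on-demand capacity mode.
# "MyTable" is a placeholder; auto scaling settings are preserved when
# switching via the CLI, unlike the console.
aws dynamodb update-table \
    --table-name MyTable \
    --billing-mode PAY_PER_REQUEST
```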

### Bulk editing capacity mode in the [DynamoDB console](https://console.aws.amazon.com/dynamodb)
<a name="bulk-edit-capacity-mode"></a>

You can bulk edit multiple tables to switch from provisioned capacity mode to on-demand capacity mode using the [DynamoDB console](https://console.aws.amazon.com/dynamodb). To bulk edit capacity mode:

1. In the DynamoDB console, go to the **Tables** page.

1. Select the checkboxes for the tables you want to update to on-demand capacity mode.

1. Choose **Actions**, and then select **Update to on-demand capacity mode**.

This bulk operation allows you to efficiently switch multiple tables to on-demand capacity mode without having to update each table individually.

## Switching from on-demand capacity mode to provisioned capacity mode
<a name="switch-ondemand-to-provisioned"></a>

When switching from on-demand capacity mode back to provisioned capacity mode, your table delivers throughput consistent with the previous peak reached when the table was set to on-demand capacity mode.

### Managing capacity
<a name="switch-ondemand-capacity"></a>

Consider the following when you update a table from on-demand to provisioned mode:
+  If you're using the AWS CLI or AWS SDK, choose the right provisioned capacity settings for your table and global secondary indexes by using Amazon CloudWatch to review your historical consumption (the `ConsumedWriteCapacityUnits` and `ConsumedReadCapacityUnits` metrics) and determine the new throughput settings.
**Note**  
If you're switching a global table to provisioned mode, look at the maximum consumption across all your regional replicas for base tables and global secondary indexes when determining the new throughput settings. 
+ If you're switching from on-demand mode back to provisioned mode, make sure to set the initial provisioned units high enough to handle your table or index capacity during the transition.
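One common way to turn historical consumption into an initial provisioned setting is to take the observed peak and add headroom for a target utilization. A minimal sketch, assuming per-second consumption samples already fetched from CloudWatch (the function name and 70% default are illustrative):

```python
import math

def recommend_provisioned_units(consumed_per_second, target_utilization=0.70):
    """Illustrative sketch: derive an initial provisioned capacity from
    historical per-second consumption (e.g. CloudWatch
    ConsumedWriteCapacityUnits samples), leaving headroom so the table
    runs at roughly the target utilization at its observed peak."""
    peak = max(consumed_per_second)
    # Provision enough capacity that the peak sits at the target utilization.
    return math.ceil(peak / target_utilization)
```

For example, a workload that peaked at 6,000 consumed write units/sec would get roughly 8,572 provisioned WCU at a 70% target.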

### Managing auto scaling
<a name="switch-ondemand-autoscaling"></a>

When you update a table from on-demand to provisioned mode:
+ If you're using the console, we recommend enabling auto scaling with the following defaults:
  + Target utilization: 70%
  + Minimum provisioned capacity: 5 units
  + Maximum provisioned capacity: The Region maximum
+  If you're using the AWS CLI or SDK, your previous auto scaling settings (if any) are preserved. 
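If you switched with the CLI and want to recreate auto scaling with the defaults above, you can do so through Application Auto Scaling. The table name, policy name, and maximum capacity below are placeholders (the actual Region maximum varies by account), and the commands shown cover read capacity only; repeat with `WriteCapacityUnits` for writes:

```shell
# Register the table's read capacity as a scalable target
# (min 5 units; max is a placeholder for your Region's limit).
aws application-autoscaling register-scalable-target \
    --service-namespace dynamodb \
    --resource-id "table/MyTable" \
    --scalable-dimension "dynamodb:table:ReadCapacityUnits" \
    --min-capacity 5 \
    --max-capacity 40000

# Attach a target-tracking policy at 70% utilization.
aws application-autoscaling put-scaling-policy \
    --service-namespace dynamodb \
    --resource-id "table/MyTable" \
    --scalable-dimension "dynamodb:table:ReadCapacityUnits" \
    --policy-name "MyTableReadScaling" \
    --policy-type "TargetTrackingScaling" \
    --target-tracking-scaling-policy-configuration \
        '{"TargetValue": 70.0, "PredefinedMetricSpecification": {"PredefinedMetricType": "DynamoDBReadCapacityUtilization"}}'
```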