

# Managing serverless resources in Amazon Keyspaces (for Apache Cassandra)
<a name="serverless_resource_management"></a>

Amazon Keyspaces (for Apache Cassandra) is serverless. Instead of deploying, managing, and maintaining storage and compute resources for your workload through nodes in a cluster, Amazon Keyspaces allocates storage and read/write throughput resources directly to tables. 

Amazon Keyspaces provisions storage automatically based on the data stored in your tables. It scales storage up and down as you write, update, and delete data, and you pay only for the storage you use. Data is replicated across multiple [Availability Zones](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) for high availability. Amazon Keyspaces monitors the size of your tables continuously to determine your storage charges. For more information about how Amazon Keyspaces calculates the billable size of the data, see [Estimate row size in Amazon Keyspaces](calculating-row-size.md). 

This chapter covers key aspects of resource management in Amazon Keyspaces.
+ **Estimate row size** – To estimate the encoded size of rows in Amazon Keyspaces, consider factors like partition key metadata, clustering column metadata, column identifiers, data types, and row metadata. This encoded row size is used for billing, quota management, and provisioned throughput capacity planning. 
+ **Estimate capacity consumption** – This section covers examples of how to estimate read and write capacity consumption for common scenarios like range queries, limit queries, table scans, lightweight transactions, static columns, and multi-Region tables. You can use Amazon CloudWatch to monitor actual capacity utilization. For more information about monitoring with CloudWatch, see [Monitoring Amazon Keyspaces with Amazon CloudWatch](monitoring-cloudwatch.md).
+ **Configure read/write capacity modes** – You can choose between two capacity modes for processing reads and writes on your tables: 
  + **On-demand mode (default)** – Pay per request for read and write throughput. Amazon Keyspaces can instantly scale capacity up to any previously reached traffic level.
  + **Provisioned mode** – Specify the required number of read and write capacity units in advance. This mode helps maintain predictable throughput performance. 
+ **Manage throughput capacity with automatic scaling** – For provisioned tables, you can enable automatic scaling to adjust throughput capacity automatically based on actual application traffic. Amazon Keyspaces uses target tracking to increase or decrease provisioned capacity, keeping utilization at your specified target. 
+ **Use burst capacity effectively** – Amazon Keyspaces provides burst capacity by reserving a portion of unused throughput for handling spikes in traffic. This flexibility allows occasional bursts of activity beyond your provisioned throughput. 

To troubleshoot capacity errors, see [Serverless capacity errors](troubleshooting.serverless.md#troubleshooting-serverless).

**Topics**
+ [Estimate row size in Amazon Keyspaces](calculating-row-size.md)
+ [Estimate capacity consumption of read and write throughput in Amazon Keyspaces](capacity-examples.md)
+ [Configure read/write capacity modes in Amazon Keyspaces](ReadWriteCapacityMode.md)
+ [Manage throughput capacity automatically with Amazon Keyspaces auto scaling](autoscaling.md)
+ [Use burst capacity effectively in Amazon Keyspaces](throughput-bursting.md)

# Estimate row size in Amazon Keyspaces
<a name="calculating-row-size"></a>

Amazon Keyspaces provides fully managed storage that offers single-digit millisecond read and write performance and stores data durably across multiple AWS Availability Zones. Amazon Keyspaces attaches metadata to all rows and primary key columns to support efficient data access and high availability.

This topic provides details about how to estimate the encoded size of rows in Amazon Keyspaces. The encoded row size is used when calculating your bill and quota use. You can also use the encoded row size when estimating provisioned throughput capacity requirements for tables.

To calculate the encoded size of rows in Amazon Keyspaces, you can use the following guidelines.

**Topics**
+ [Estimate the encoded size of columns](#calculating-row-size-columns)
+ [Estimate the encoded size of data values based on data type](#calculating-row-size-data-types)
+ [Consider the impact of Amazon Keyspaces features on row size](#calculating-row-size-features)
+ [Choose the right formula to calculate the encoded size of a row](#calculating-row-size-formula)
+ [Row size calculation example](#calculating-row-size-example)

## Estimate the encoded size of columns
<a name="calculating-row-size-columns"></a>

This section shows how to estimate the encoded size of columns in Amazon Keyspaces.
+ **Regular columns** – For regular columns, which are columns that aren't primary keys, clustering columns, or `STATIC` columns, use the raw size of the cell data based on the [data type](cql.elements.md#cql.data-types) and add the required metadata. The data types and some key differences in how Amazon Keyspaces stores data type values and metadata are listed in the next section.
+ **Partition key columns** – Partition keys can contain up to 2048 bytes of data. Each key column in the partition key requires up to 3 bytes of metadata. When calculating the size of your row, you should assume each partition key column uses the full 3 bytes of metadata.
+ **Clustering columns** – Clustering columns can store up to 850 bytes of data. In addition to the size of the data value, each clustering column requires up to 20% of the data value size for metadata. When calculating the size of your row, you should add 1 byte of metadata for each 5 bytes of clustering column data value.
**Note**  
To support efficient querying and built-in indexing, Amazon Keyspaces stores the data value of each partition key and clustering key column twice.
+ **Column names** – Each column name is stored using a column identifier, which is added to each data value stored in the column. The storage value of the column identifier depends on the overall number of columns in your table:
  + 1–62 columns: 1 byte
  + 63–124 columns: 2 bytes
  + 125–186 columns: 3 bytes

  For each additional 62 columns, add 1 byte. Note that in Amazon Keyspaces, up to 225 regular columns can be modified with a single `INSERT` or `UPDATE` statement. For more information, see [Amazon Keyspaces service quotas](quotas.md#table).
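The identifier-size pattern above (1 byte for each block of 62 columns) can be sketched as a small helper. This is an illustrative calculation only; the function name is hypothetical.

```python
import math


def column_id_bytes(total_columns: int) -> int:
    """Bytes used by a column identifier, based on the table's total
    number of columns: 1-62 -> 1 byte, 63-124 -> 2 bytes, and so on,
    adding 1 byte for each additional block of 62 columns."""
    if total_columns < 1:
        raise ValueError("a table needs at least one column")
    return math.ceil(total_columns / 62)
```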

## Estimate the encoded size of data values based on data type
<a name="calculating-row-size-data-types"></a>

This section shows how to estimate the encoded size of different data types in Amazon Keyspaces.
+ **String types** – Cassandra `ASCII`, `TEXT`, and `VARCHAR` string data types are all stored in Amazon Keyspaces using Unicode with UTF-8 binary encoding. The size of a string in Amazon Keyspaces equals the number of UTF-8 encoded bytes.
+ **Numeric types** – Cassandra `INT`, `BIGINT`, `SMALLINT`, `TINYINT`, and `VARINT` data types are stored in Amazon Keyspaces as data values with variable length, with up to 38 significant digits. Leading and trailing zeroes are trimmed. The size of any of these data types is approximately 1 byte per two significant digits + 1 byte.
+ **Blob type** – A `BLOB` in Amazon Keyspaces is stored with the value's raw byte length.
+ **Boolean type** – The size of a `Boolean` value or a `Null` value is 1 byte.
+ **Collection types** – A column that stores collection data types like `LIST` or `MAP` requires 3 bytes of metadata, regardless of its contents. The size of a `LIST` or `MAP` is (column id) + sum (size of nested elements) + (3 bytes). The size of an empty `LIST` or `MAP` is (column id) + (3 bytes). Each individual `LIST` or `MAP` element also requires 1 byte of metadata.
+ **User-defined types** – A [user-defined type (UDT)](udts.md) requires 3 bytes for metadata, regardless of its contents. For each UDT element, Amazon Keyspaces requires an additional 1 byte of metadata.

  To calculate the encoded size of a UDT, start with the `field name` and the `field value` for the fields of a UDT:
  + **field name** – Each field name of the top-level UDT is stored using an identifier. The storage value of the identifier depends on the overall number of fields in your top-level UDT, and can vary between 1 and 3 bytes: 
    + 1–62 fields: 1 byte
    + 63–124 fields: 2 bytes
    + 125 fields or more: 3 bytes
  + **field value** – The bytes required to store the field values of the top-level UDT depend on the data type stored:
    + **Scalar data type** – The bytes required for storage are the same as for the same data type stored in a regular column.
    + **Frozen UDT** – Each frozen nested UDT has the same size as it would have in the CQL binary protocol: 4 bytes are stored for each field (including empty fields), and the value of the stored field is the CQL binary protocol serialization format of the field value.
    + **Frozen collections**: 
      + **LIST** and **SET** – For a nested frozen `LIST` or `SET`, 4 bytes are stored for each element of the collection plus the CQL binary protocol serialization format of the collection’s value.
      + **MAP** – For a nested frozen `MAP`, each key-value pair has the following storage requirements:
        + For each key allocate 4 bytes, then add the CQL binary protocol serialization format of the key.
        + For each value allocate 4 bytes, then add the CQL binary protocol serialization format of the value.
+ **FROZEN keyword** – For frozen collections nested within frozen collections, Amazon Keyspaces doesn't require any additional bytes for metadata.
+ **STATIC keyword** – `STATIC` column data doesn't count towards the maximum row size of 1 MB. To calculate the data size of static columns, see [Calculate the static column size per logical partition in Amazon Keyspaces](static-columns-estimate.md).
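The collection-size rule above (3 bytes of collection metadata plus 1 byte per element) can be sketched as follows, assuming you already know the column identifier size and each element's encoded data size. The helper name is hypothetical.

```python
def collection_column_size(column_id: int, element_sizes: list[int]) -> int:
    """Encoded size of a non-frozen LIST or MAP column:
    column id + sum(element data size + 1 byte of element metadata)
    + 3 bytes of collection metadata."""
    return column_id + sum(size + 1 for size in element_sizes) + 3
```

For an empty collection, this reduces to the column id plus 3 bytes, matching the rule above.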

## Consider the impact of Amazon Keyspaces features on row size
<a name="calculating-row-size-features"></a>

This section shows how features in Amazon Keyspaces impact the encoded size of a row.
+ **Client-side timestamps** – Client-side timestamps are stored for every column in each row when the feature is turned on. These timestamps take up approximately 20–40 bytes (depending on your data), and contribute to the storage and throughput cost for the row. For more information about client-side timestamps, see [Client-side timestamps in Amazon Keyspaces](client-side-timestamps.md).
+ **Time to Live (TTL)** – TTL metadata takes up approximately 8 bytes for a row when the feature is turned on. Additionally, TTL metadata is stored for every column of each row. The TTL metadata takes up approximately 8 bytes for each column storing a scalar data type or a frozen collection. If the column stores a collection data type that's not frozen, for each element of the collection TTL requires approximately 8 additional bytes for metadata. For a column that stores a collection data type when TTL is enabled, you can use the following formula.

  ```
  total encoded size of column = (column id) + sum (nested elements + collection metadata (1 byte) + TTL metadata (8 bytes)) + collection column metadata (3 bytes)
  ```

  TTL metadata contributes to the storage and throughput cost for the row. For more information about TTL, see [Expire data with Time to Live (TTL) for Amazon Keyspaces (for Apache Cassandra)](TTL.md).
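The TTL formula above can be expressed the same way: each collection element carries 1 byte of collection metadata plus approximately 8 bytes of TTL metadata. This is an illustrative helper with a hypothetical name.

```python
def collection_column_size_with_ttl(column_id: int, element_sizes: list[int]) -> int:
    """Encoded size of a non-frozen collection column when TTL is enabled:
    column id + sum(element data + 1 byte collection metadata
    + 8 bytes TTL metadata) + 3 bytes of collection column metadata."""
    return column_id + sum(size + 1 + 8 for size in element_sizes) + 3
```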

## Choose the right formula to calculate the encoded size of a row
<a name="calculating-row-size-formula"></a>

This section shows the different formulas that you can use to estimate either the storage or the capacity throughput requirements for a row of data in Amazon Keyspaces.

The total encoded size of a row of data can be estimated with one of the following formulas, depending on your goal:
+ **Throughput capacity** – To estimate the encoded size of a row to assess the required read/write request units (RRUs/WRUs) or read/write capacity units (RCUs/WCUs):

  ```
  total encoded size of row = partition key columns + clustering columns + regular columns
  ```
+ **Storage size** – To estimate the encoded size of a row to predict the `BillableTableSizeInBytes`, add the required metadata for the storage of the row:

  ```
  total encoded size of row = partition key columns + clustering columns + regular columns + row metadata (100 bytes)
  ```

**Important**  
All column metadata, for example column ids, partition key metadata, clustering column metadata, as well as client-side timestamps, TTL, and row metadata count towards the maximum row size of 1 MB.

## Row size calculation example
<a name="calculating-row-size-example"></a>

Consider the following example of a table where all columns are of type integer. The table has two partition key columns, two clustering columns, and one regular column. Because this table has five columns, the space required for the column name identifier is 1 byte.

```
CREATE TABLE mykeyspace.mytable(pk_col1 int, pk_col2 int, ck_col1 int, ck_col2 int, reg_col1 int, primary key((pk_col1, pk_col2),ck_col1, ck_col2));
```

In this example, we calculate the size of data when we write a row to the table as shown in the following statement:

```
INSERT INTO mykeyspace.mytable (pk_col1, pk_col2, ck_col1, ck_col2, reg_col1) values(1,2,3,4,5);
```

To estimate the total bytes required by this write operation, you can use the following steps.

1. Calculate the size of a partition key column by adding the bytes for the data type stored in the column and the metadata bytes. Repeat this for all partition key columns.

   1. Calculate the size of the first column of the partition key (pk_col1):

      ```
      (2 bytes for the integer data type) x 2 + 1 byte for the column id + 3 bytes for partition key metadata = 8 bytes
      ```

   1. Calculate the size of the second column of the partition key (pk_col2): 

      ```
      (2 bytes for the integer data type) x 2 + 1 byte for the column id + 3 bytes for partition key metadata = 8 bytes
      ```

   1. Add both columns to get the total estimated size of the partition key columns: 

      ```
      8 bytes + 8 bytes = 16 bytes for the partition key columns
      ```

1. Calculate the size of the clustering column by adding the bytes for the data type stored in the column and the metadata bytes. Repeat this for all clustering columns.

   1. Calculate the size of the first clustering column (ck_col1):

      ```
      (2 bytes for the integer data type) x 2 + 1 byte for clustering column metadata (20% of the data value, rounded up) + 1 byte for the column id = 6 bytes
      ```

   1. Calculate the size of the second clustering column (ck_col2):

      ```
      (2 bytes for the integer data type) x 2 + 1 byte for clustering column metadata (20% of the data value, rounded up) + 1 byte for the column id = 6 bytes
      ```

   1. Add both columns to get the total estimated size of the clustering columns:

      ```
      6 bytes + 6 bytes = 12 bytes for the clustering columns
      ```

1. Add the size of the regular columns. In this example, we only have one column that stores a single-digit integer, which requires 2 bytes, plus 1 byte for the column id.

1. Finally, to get the total encoded row size, add up the bytes for all columns. To estimate the billable size for storage, add the additional 100 bytes for row metadata:

   ```
   16 bytes for the partition key columns + 12 bytes for clustering columns + 3 bytes for the regular column + 100 bytes for row metadata = 131 bytes.
   ```
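The steps above can be checked with a short script. The 2-byte integer size and 1-byte column identifier follow this example's assumptions; they aren't fixed constants for all tables.

```python
INT_BYTES = 2   # a single-digit integer encodes in about 2 bytes
COLUMN_ID = 1   # 5 columns total -> 1-byte column identifier

# Partition key column: value stored twice + column id + 3 bytes key metadata
pk_col = INT_BYTES * 2 + COLUMN_ID + 3   # 8 bytes each
# Clustering column: value stored twice + ~20% metadata (1 byte) + column id
ck_col = INT_BYTES * 2 + 1 + COLUMN_ID   # 6 bytes each
# Regular column: value + column id
reg_col = INT_BYTES + COLUMN_ID          # 3 bytes

row_metadata = 100                       # added for billable storage size only
total = 2 * pk_col + 2 * ck_col + reg_col + row_metadata
print(total)  # 131
```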

To learn how to monitor serverless resources with Amazon CloudWatch, see [Monitoring Amazon Keyspaces with Amazon CloudWatch](monitoring-cloudwatch.md).

# Estimate capacity consumption of read and write throughput in Amazon Keyspaces
<a name="capacity-examples"></a>

When you read or write data in Amazon Keyspaces, the amount of read/write request units (RRUs/WRUs) or read/write capacity units (RCUs/WCUs) your query consumes depends on the total amount of data Amazon Keyspaces has to process to run the query. In some cases, the data returned to the client can be a subset of the data that Amazon Keyspaces had to read to process the query. For conditional writes, Amazon Keyspaces consumes write capacity even if the conditional check fails.

To estimate the total amount of data being processed for a request, you have to consider the encoded size of a row and the total number of rows. This topic covers some examples of common scenarios and access patterns to show how Amazon Keyspaces processes queries and how that affects capacity consumption. You can follow the examples to estimate the capacity requirements of your tables and use Amazon CloudWatch to observe the read and write capacity consumption for these use cases.

For information on how to calculate the encoded size of rows in Amazon Keyspaces, see [Estimate row size in Amazon Keyspaces](calculating-row-size.md).

**Topics**
+ [Estimate the capacity consumption of range queries in Amazon Keyspaces](range_queries.md)
+ [Estimate the read capacity consumption of limit queries](limit_queries.md)
+ [Estimate the read capacity consumption of table scans](table_scans.md)
+ [Estimate capacity consumption of lightweight transactions in Amazon Keyspaces](lightweight_transactions.md)
+ [Estimate capacity consumption for static columns in Amazon Keyspaces](static-columns.md)
+ [Estimate and provision capacity for a multi-Region table in Amazon Keyspaces](tables-multi-region-capacity.md)
+ [Estimate read and write capacity consumption with Amazon CloudWatch in Amazon Keyspaces](estimate_consumption_cw.md)

# Estimate the capacity consumption of range queries in Amazon Keyspaces
<a name="range_queries"></a>

To look at the read capacity consumption of a range query, we use the following example table, which uses on-demand capacity mode. 

```
 pk1 | pk2 | pk3 | ck1 | ck2 | ck3 | value
-----+-----+-----+-----+-----+-----+-------
   a |   b |   1 |   a |   b |  50 | <any value that results in a row size larger than 4KB>
   a |   b |   1 |   a |   b |  60 | value_1
   a |   b |   1 |   a |   b |  70 | <any value that results in a row size larger than 4KB>
```

Now run the following query on this table.

```
SELECT * FROM amazon_keyspaces.example_table_1 WHERE pk1='a' AND pk2='b' AND pk3=1 AND ck1='a' AND ck2='b' AND ck3 > 50 AND ck3 < 70;
```

You receive the following result set from the query, and the read operation performed by Amazon Keyspaces consumes 2 RRUs in `LOCAL_QUORUM` consistency mode.

```
 pk1 | pk2 | pk3 | ck1 | ck2 | ck3 | value
-----+-----+-----+-----+-----+-----+-------
   a |   b |   1 |   a |   b |  60 | value_1
```

Amazon Keyspaces consumes 2 RRUs to evaluate the rows with the values `ck3=60` and `ck3=70` to process the query. However, Amazon Keyspaces only returns the row where the `WHERE` condition specified in the query is true, which is the row with value `ck3=60`. To evaluate the range specified in the query, Amazon Keyspaces reads the row matching the upper bound of the range, in this case `ck3 = 70`, but doesn’t return that row in the result. The read capacity consumption is based on the data read when processing the query, not on the data returned.
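Assuming `LOCAL_QUORUM` reads are metered at 1 RRU per 4 KB of data processed (rounded up), the consumption above can be approximated with this sketch. The function name and the exact byte counts are illustrative assumptions.

```python
import math


def read_request_units(bytes_processed: int) -> int:
    """Approximate RRUs for a LOCAL_QUORUM read: 1 RRU per 4 KB of
    data processed (not returned), rounded up, minimum 1."""
    return max(1, math.ceil(bytes_processed / 4096))
```

For example, evaluating the small `ck3=60` row together with the larger-than-4KB `ck3=70` row processes a bit over 4 KB in aggregate, which rounds up to 2 RRUs.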

# Estimate the read capacity consumption of limit queries
<a name="limit_queries"></a>

When processing a query that uses the `LIMIT` clause, Amazon Keyspaces reads rows up to the maximum page size while trying to match the condition specified in the query. If Amazon Keyspaces can't find sufficient matching data that meets the `LIMIT` value on the first page, one or more paginated calls could be needed. To continue reads on the next page, you can use a pagination token. The default page size is 1 MB. To consume less read capacity when using `LIMIT` clauses, you can reduce the page size. For more information about pagination, see [Paginate results in Amazon Keyspaces](paginating-results.md).

For an example, let's look at the following query.

```
SELECT * FROM my_table WHERE partition_key=1234 LIMIT 1;
```

If you don't set the page size, Amazon Keyspaces reads 1 MB of data even though it returns only one row to you. To have Amazon Keyspaces read only one row, you can set the page size to 1 for this query. In this case, Amazon Keyspaces reads only one row, provided that you don't have expired rows based on Time to Live (TTL) settings or client-side timestamps. 

The `PAGE SIZE` parameter determines how many rows Amazon Keyspaces scans from disk for each request, not how many rows Amazon Keyspaces returns to the client. Amazon Keyspaces applies the filters you provide, for example an inequality on non-key columns or a `LIMIT`, after it scans the data on disk. If you don't explicitly set the `PAGE SIZE`, Amazon Keyspaces reads up to 1 MB of data before applying filters. For example, if you're using `LIMIT 1` without specifying the `PAGE SIZE`, Amazon Keyspaces could read thousands of rows from disk before applying the limit clause and returning only a single row.

To avoid over-reading, reduce the `PAGE SIZE`, which reduces the number of rows Amazon Keyspaces scans for each fetch. For example, if you define `LIMIT 5` in your query, set the `PAGE SIZE` to a value between 5 and 10 so that Amazon Keyspaces scans only 5 to 10 rows on each paginated call. You can adjust this number to reduce the number of fetches. For limits that are larger than the page size, Amazon Keyspaces maintains the total result count with pagination state. In the case of a `LIMIT` of 10,000 rows, Amazon Keyspaces can fetch these results in two pages of 5,000 rows each. The 1 MB limit is the upper bound for any page size you set.
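The page math in this example can be sketched as a small helper. It's a hypothetical function that assumes every scanned row matches the query, so real workloads with filtered-out rows may need more fetches.

```python
import math


def paginated_calls(limit: int, page_size: int) -> int:
    """Number of paginated fetches needed to return `limit` matching rows
    when Amazon Keyspaces scans `page_size` rows per call."""
    return math.ceil(limit / page_size)
```

For example, a `LIMIT` of 10,000 rows with a page size of 5,000 takes two fetches, matching the scenario above.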

# Estimate the read capacity consumption of table scans
<a name="table_scans"></a>

Queries that result in full table scans, for example queries using the `ALLOW FILTERING` option, are another example of queries that read more data than they return as results. The read capacity consumption is based on the data read, not the data returned.

For the table scan example we use the following example table in on-demand capacity mode.

```
 pk | ck | value
----+----+-------
 pk | 10 | <any value that results in a row size larger than 4KB>
 pk | 20 | value_1
 pk | 30 | <any value that results in a row size larger than 4KB>
```

Amazon Keyspaces creates a table in on-demand capacity mode with four partitions by default. In this example table, all the data is stored in one partition and the remaining three partitions are empty.

Now run the following query on the table.

```
SELECT * from amazon_keyspaces.example_table_2;
```

This query results in a table scan operation where Amazon Keyspaces scans all four partitions of the table and consumes 6 RRUs in `LOCAL_QUORUM` consistency mode. First, Amazon Keyspaces consumes 3 RRUs for reading the three rows with `pk='pk'`. Then, Amazon Keyspaces consumes an additional 3 RRUs for scanning the three empty partitions of the table. Because this query results in a table scan, Amazon Keyspaces scans all the partitions in the table, including partitions without data.

# Estimate capacity consumption of lightweight transactions in Amazon Keyspaces
<a name="lightweight_transactions"></a>

Lightweight transactions (LWT) allow you to perform conditional write operations against your table data. Conditional update operations are useful when inserting, updating, and deleting records based on conditions that evaluate the current state. 

In Amazon Keyspaces, all write operations require `LOCAL_QUORUM` consistency, and there is no additional charge for using LWTs. The difference for LWTs is that when an LWT condition check results in `FALSE`, Amazon Keyspaces consumes write capacity units (WCUs) or write request units (WRUs). The number of WCUs/WRUs consumed depends on the size of the row. 

For example, if the row size is 2 KB, the failed conditional write consumes two WCUs/WRUs. If the row doesn't currently exist in the table, the operation consumes one WCU/WRU. 

To determine the number of requests that resulted in condition check failures, you can monitor the `ConditionalCheckFailed` metric in CloudWatch.

## Estimate LWT costs for tables with Time to Live (TTL)
<a name="lightweight_transactions_ttl"></a>

LWTs can require additional read capacity units (RCUs) or read request units (RRUs) for tables configured with TTL that don't use client-side timestamps. When a condition check using the `IF EXISTS` or `IF NOT EXISTS` keywords results in `FALSE`, the following capacity units are consumed:
+ RCUs/RRUs – If the row exists, the RCUs/RRUs consumed are based on the size of the existing row.
+ RCUs/RRUs – If the row doesn't exist, a single RCU/RRU is consumed.

If the evaluated condition results in a successful write operation, WCUs/WRUs are consumed based on the size of the new row.

# Estimate capacity consumption for static columns in Amazon Keyspaces
<a name="static-columns"></a>

In an Amazon Keyspaces table with clustering columns, you can use the `STATIC` keyword to create a static column. The value stored in a static column is shared between all rows in a logical partition. When you update the value of this column, Amazon Keyspaces applies the change automatically to all rows in the partition. 

This section describes how to calculate the encoded size of data when you're writing to static columns. This process is handled separately from the process that writes data to the nonstatic columns of a row. In addition to size quotas for static data, read and write operations on static columns also affect metering and throughput capacity for tables independently. For functional differences with Apache Cassandra when using static columns and paginated range read results, see [Pagination](functional-differences.md#functional-differences.paging).

**Topics**
+ [Calculate the static column size per logical partition in Amazon Keyspaces](static-columns-estimate.md)
+ [Estimate capacity throughput requirements for read/write operations on static data in Amazon Keyspaces](static-columns-metering.md)

# Calculate the static column size per logical partition in Amazon Keyspaces
<a name="static-columns-estimate"></a>

This section provides details about how to estimate the encoded size of static columns in Amazon Keyspaces. The encoded size is used when you're calculating your bill and quota use. You should also use the encoded size when you calculate provisioned throughput capacity requirements for tables. To calculate the encoded size of static columns in Amazon Keyspaces, you can use the following guidelines.
+ Partition keys can contain up to 2048 bytes of data. Each key column in the partition key requires up to 3 bytes of metadata. These metadata bytes count towards your static data size quota of 1 MB per partition. When calculating the size of your static data, you should assume that each partition key column uses the full 3 bytes of metadata.
+ Use the raw size of the static column data values based on the data type. For more information about data types, see [Data types](cql.elements.md#cql.data-types).
+ Add 104 bytes to the size of the static data for metadata.
+ Clustering columns and regular, nonprimary key columns do not count towards the size of static data. To learn how to estimate the size of nonstatic data within rows, see [Estimate row size in Amazon Keyspaces](calculating-row-size.md).

The total encoded size of a static column is based on the following formula:

```
partition key columns + static columns + metadata = total encoded size of static data
```

Consider the following example of a table where all columns are of type integer. The table has two partition key columns, two clustering columns, one regular column, and one static column.

```
CREATE TABLE mykeyspace.mytable(pk_col1 int, pk_col2 int, ck_col1 int, ck_col2 int, reg_col1 int, static_col1 int static, primary key((pk_col1, pk_col2),ck_col1, ck_col2));
```

In this example, we calculate the size of static data of the following statement:

```
INSERT INTO mykeyspace.mytable (pk_col1, pk_col2, static_col1) values(1,2,6);
```

To estimate the total bytes required by this write operation, you can use the following steps.

1. Calculate the size of a partition key column by adding the bytes for the data type stored in the column and the metadata bytes. Repeat this for all partition key columns.

   1. Calculate the size of the first column of the partition key (pk_col1):

      ```
      4 bytes for the integer data type + 3 bytes for partition key metadata = 7 bytes
      ```

   1. Calculate the size of the second column of the partition key (pk_col2): 

      ```
      4 bytes for the integer data type + 3 bytes for partition key metadata = 7 bytes
      ```

   1. Add both columns to get the total estimated size of the partition key columns: 

      ```
      7 bytes + 7 bytes = 14 bytes for the partition key columns
      ```

1. Add the size of the static columns. In this example, we only have one static column that stores an integer (which requires 4 bytes).

1. Finally, to get the total encoded size of the static column data, add up the bytes for the primary key columns and static columns, and add the additional 104 bytes for metadata:

   ```
   14 bytes for the partition key columns + 4 bytes for the static column + 104 bytes for metadata = 122 bytes.
   ```

You can also update static and nonstatic data with the same statement. To estimate the total size of the write operation, you must first calculate the size of the nonstatic data update. Then calculate the size of the row update as shown in the example at [Estimate row size in Amazon Keyspaces](calculating-row-size.md), and add the results. 

In this case, you can write a total of 2 MB—1 MB is the maximum row size quota, and 1 MB is the quota for the maximum static data size per logical partition.

To calculate the total size of an update of static and nonstatic data in the same statement, you can use the following formula:

```
(partition key columns + static columns + metadata = total encoded size of static data) + (partition key columns + clustering columns + regular columns + row metadata = total encoded size of row)
= total encoded size of data written
```

Consider the following example of a table where all columns are of type integer. The table has two partition key columns, two clustering columns, one regular column, and one static column.

```
CREATE TABLE mykeyspace.mytable(pk_col1 int, pk_col2 int, ck_col1 int, ck_col2 int, reg_col1 int, static_col1 int static, primary key((pk_col1, pk_col2),ck_col1, ck_col2));
```

In this example, we calculate the size of data when we write a row to the table, as shown in the following statement:

```
INSERT INTO mykeyspace.mytable (pk_col1, pk_col2, ck_col1, ck_col2, reg_col1, static_col1) values(2,3,4,5,6,7);
```

To estimate the total bytes required by this write operation, you can use the following steps.

1. Calculate the total encoded size of static data as shown earlier. In this example, it's 122 bytes.

1. Add the size of the total encoded size of the row based on the update of nonstatic data, following the steps at [Estimate row size in Amazon Keyspaces](calculating-row-size.md). In this example, the total size of the row update is 134 bytes.

   ```
   122 bytes for static data + 134 bytes for nonstatic data = 256 bytes.
   ```
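The combined calculation can be verified with a short script. It reuses the 122-byte static result and the 134-byte nonstatic row result from the steps above; the 4-byte integer size follows this section's metering assumptions.

```python
# Static data: two partition key columns (4-byte INT + 3 bytes metadata each),
# one static INT column (4 bytes), plus 104 bytes of static metadata
static_size = 2 * (4 + 3) + 4 + 104   # 122 bytes

# Nonstatic row update, calculated as in the row-size topic
row_size = 134

print(static_size + row_size)  # 256
```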

# Estimate capacity throughput requirements for read/write operations on static data in Amazon Keyspaces
<a name="static-columns-metering"></a>

Static data is associated with logical partitions in Cassandra, not individual rows. Logical partitions in Amazon Keyspaces can be virtually unbounded in size by spanning multiple physical storage partitions. As a result, Amazon Keyspaces meters write operations on static and nonstatic data separately. Furthermore, writes that include both static and nonstatic data require additional underlying operations to provide data consistency. 

If you perform a mixed write operation of both static and nonstatic data, this results in two separate write operations—one for nonstatic and one for static data. This applies to both on-demand and provisioned read/write capacity modes.

The following example provides details about how to estimate the required read capacity units (RCUs) and write capacity units (WCUs) when you're calculating provisioned throughput capacity requirements for tables in Amazon Keyspaces that have static columns. You can estimate how much capacity your table needs to process writes that include both static and nonstatic data by using the following formula:

```
2 x WCUs required for nonstatic data + 2 x WCUs required for static data
```

For example, if your application writes 27 KB of data per second and each write includes 25.5 KB of nonstatic data and 1.5 KB of static data, then your table requires 56 WCUs (2 x 26 WCUs + 2 x 2 WCUs).
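
The WCU arithmetic above can be sketched in Python. This is a minimal illustration of the formula, not an official metering implementation; sizes are rounded up to the next 1 KB increment.

```python
import math

def mixed_write_wcus(nonstatic_kb: float, static_kb: float) -> int:
    """Estimate WCUs for writes that include both static and nonstatic data."""
    wcus_nonstatic = math.ceil(nonstatic_kb)  # 1 WCU per 1 KB of nonstatic data
    wcus_static = math.ceil(static_kb)        # 1 WCU per 1 KB of static data
    # Mixed writes are metered as two separate write operations.
    return 2 * wcus_nonstatic + 2 * wcus_static

print(mixed_write_wcus(25.5, 1.5))  # 56, matching the example above
```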

Amazon Keyspaces meters the reads of static and nonstatic data the same as reads of multiple rows. As a result, the price of reading static and nonstatic data in the same operation is based on the aggregate size of the data processed to perform the read.

To learn how to monitor serverless resources with Amazon CloudWatch, see [Monitoring Amazon Keyspaces with Amazon CloudWatch](monitoring-cloudwatch.md).

# Estimate and provision capacity for a multi-Region table in Amazon Keyspaces
<a name="tables-multi-region-capacity"></a>

You can configure the throughput capacity of a multi-Region table in one of two ways:
+ On-demand capacity mode, measured in write request units (WRUs)
+ Provisioned capacity mode with auto scaling, measured in write capacity units (WCUs)

You can use provisioned capacity mode with auto scaling or on-demand capacity mode to help ensure that a multi-Region table has sufficient capacity to perform replicated writes to all AWS Regions.

**Note**  
Changing the capacity mode of the table in one of the Regions changes the capacity mode for all replicas.

By default, Amazon Keyspaces uses on-demand mode for multi-Region tables. With on-demand mode, you don't need to specify how much read and write throughput you expect your application to perform. Amazon Keyspaces instantly accommodates your workloads as they ramp up or down to any previously reached traffic level. If a workload’s traffic level hits a new peak, Amazon Keyspaces adapts rapidly to accommodate the workload.

If you choose provisioned capacity mode for a table, you have to configure the number of read capacity units (RCUs) and write capacity units (WCUs) per second that your application requires. 

To plan a multi-Region table's throughput capacity needs, you should first estimate the number of WCUs per second needed for each Region. Then you add the writes from all Regions that your table is replicated in, and use the sum to provision capacity for each Region. This is required because every write that is performed in one Region must also be repeated in each replica Region. 

If the table doesn't have enough capacity to handle the writes from all Regions, capacity exceptions occur. In addition, replication wait times between Regions rise.

For example, if you have a multi-Region table where you expect 5 writes per second in US East (N. Virginia), 10 writes per second in US East (Ohio), and 5 writes per second in Europe (Ireland), you should expect the table to consume 20 WCUs in each Region: US East (N. Virginia), US East (Ohio), and Europe (Ireland). That means that in this example, you need to provision 20 WCUs for each of the table's replicas. You can monitor your table's capacity consumption using Amazon CloudWatch. For more information, see [Monitoring Amazon Keyspaces with Amazon CloudWatch](monitoring-cloudwatch.md). 

Each write is billed as 1 WCU, so you would see a total of 60 WCUs billed in this example. For more information about pricing, see [Amazon Keyspaces (for Apache Cassandra) pricing](https://aws.amazon.com/keyspaces/pricing). 
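
The steps above can be sketched as a short Python calculation. This is illustrative only; the Region names are placeholders.

```python
def multi_region_wcus(writes_per_second_by_region: dict) -> dict:
    """Estimate per-replica and total billed WCUs for a multi-Region table.
    Every write performed in one Region is repeated in each replica Region,
    so each replica must be provisioned for the sum of all per-Region
    write rates."""
    total = sum(writes_per_second_by_region.values())
    return {
        "per_replica_wcus": total,
        "total_billed_wcus": total * len(writes_per_second_by_region),
    }

estimate = multi_region_wcus({"us-east-1": 5, "us-east-2": 10, "eu-west-1": 5})
print(estimate)  # {'per_replica_wcus': 20, 'total_billed_wcus': 60}
```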

For more information about provisioned capacity with Amazon Keyspaces auto scaling, see [Manage throughput capacity automatically with Amazon Keyspaces auto scaling](autoscaling.md). 

**Note**  
If a table is running in provisioned capacity mode with auto scaling, the provisioned write capacity is allowed to float within those auto scaling settings for each Region. 

# Estimate read and write capacity consumption with Amazon CloudWatch in Amazon Keyspaces
<a name="estimate_consumption_cw"></a>

To estimate and monitor read and write capacity consumption, you can use a CloudWatch dashboard. For more information about available metrics for Amazon Keyspaces, see [Amazon Keyspaces metrics and dimensions](metrics-dimensions.md). 

To monitor read and write capacity units consumed by a specific statement with CloudWatch, you can follow these steps.

1. Create a new table with sample data.

1. Configure an Amazon Keyspaces CloudWatch dashboard for the table. To get started, you can use a dashboard template available on [GitHub](https://github.com/aws-samples/amazon-keyspaces-cloudwatch-cloudformation-templates).

1. Run a CQL statement, for example a full table scan using the `ALLOW FILTERING` option, and check the read capacity units consumed by the scan in the dashboard.

# Configure read/write capacity modes in Amazon Keyspaces
<a name="ReadWriteCapacityMode"></a>

Amazon Keyspaces has two read/write capacity modes for processing reads and writes on your tables: 
+  On-demand (default) 
+  Provisioned 

 The read/write capacity mode that you choose controls how you are charged for read and write throughput and how table throughput capacity is managed. 

**Topics**
+ [Configure on-demand capacity mode](ReadWriteCapacityMode.OnDemand.md)
+ [Configure provisioned capacity mode](ReadWriteCapacityMode.Provisioned.md)
+ [View the capacity mode of a table in Amazon Keyspaces](ReadWriteCapacityMode.ProvisionedThroughput.ManagingCapacity.md)
+ [Change the capacity mode of a table in Amazon Keyspaces](ReadWriteCapacityMode.SwitchReadWriteCapacityMode.md)
+ [Configure pre-warming for tables in Amazon Keyspaces](warm-throughput.md)

# Configure on-demand capacity mode
<a name="ReadWriteCapacityMode.OnDemand"></a>

Amazon Keyspaces (for Apache Cassandra) *on-demand* capacity mode is a flexible billing option capable of serving thousands of requests per second without capacity planning. This option offers pay-per-request pricing for read and write requests so that you pay only for what you use. 

 When you choose on-demand mode, Amazon Keyspaces can scale the throughput capacity for your table up to any previously reached traffic level instantly, and then back down when application traffic decreases. If a workload’s traffic level hits a new peak, the service adapts rapidly to increase throughput capacity for your table. You can enable on-demand capacity mode for both new and existing tables.

On-demand mode is a good option if any of the following is true: 
+ You create new tables with unknown workloads. 
+ You have unpredictable application traffic. 
+ You prefer the ease of paying for only what you use. 

To get started with on-demand mode, you can create a new table or update an existing table to use on-demand capacity mode using the console or with a few lines of Cassandra Query Language (CQL) code. For more information, see [Tables](cql.ddl.table.md).

**Topics**
+ [Read request units and write request units](#ReadWriteCapacityMode.requests)
+ [Peak traffic and scaling properties](#ReadWriteCapacityMode.PeakTraffic)
+ [Initial throughput for on-demand capacity mode](#ReadWriteCapacityMode.InitialThroughput)

## Read request units and write request units
<a name="ReadWriteCapacityMode.requests"></a>

 With on-demand capacity mode tables, you don't need to specify how much read and write throughput you expect your application to use in advance. Amazon Keyspaces charges you for the reads and writes that you perform on your tables in terms of read request units (RRUs) and write request units (WRUs). 
+ One *RRU* represents one `LOCAL_QUORUM` read request, or two `LOCAL_ONE` read requests, for a row up to 4 KB in size. If you need to read a row that is larger than 4 KB, the read operation uses additional RRUs. The total number of RRUs required depends on the row size, and whether you want to use `LOCAL_QUORUM` or `LOCAL_ONE` read consistency. For example, reading an 8 KB row requires 2 RRUs using `LOCAL_QUORUM` read consistency, and 1 RRU if you choose `LOCAL_ONE` read consistency. 
+ One *WRU* represents one write for a row up to 1 KB in size. All writes use `LOCAL_QUORUM` consistency, and there is no additional charge for using lightweight transactions (LWTs). If you need to write a row that is larger than 1 KB, the write operation uses additional WRUs. The total number of WRUs required depends on the row size. For example, if your row size is 2 KB, you require 2 WRUs to perform one write request. 

For information about supported consistency levels, see [Supported Apache Cassandra read and write consistency levels and associated costs](consistency.md).
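
The request-unit rules above can be summarized in a short Python sketch. This is an estimate only; actual metering is based on the encoded row size.

```python
import math

def read_request_units(row_kb: float, consistency: str = "LOCAL_QUORUM") -> float:
    """RRUs for one read: metered in 4 KB increments;
    LOCAL_ONE costs half of LOCAL_QUORUM."""
    rrus = math.ceil(row_kb / 4)
    return rrus / 2 if consistency == "LOCAL_ONE" else rrus

def write_request_units(row_kb: float) -> int:
    """WRUs for one write: metered in 1 KB increments, always LOCAL_QUORUM."""
    return math.ceil(row_kb)

print(read_request_units(8))               # 2 RRUs with LOCAL_QUORUM
print(read_request_units(8, "LOCAL_ONE"))  # 1 RRU with LOCAL_ONE
print(write_request_units(2))              # 2 WRUs
```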

## Peak traffic and scaling properties
<a name="ReadWriteCapacityMode.PeakTraffic"></a>

Amazon Keyspaces tables that use on-demand capacity mode automatically adapt to your application’s traffic volume. On-demand capacity mode instantly accommodates up to double the previous peak traffic on a table. For example, your application's traffic pattern might vary between 5,000 and 10,000 `LOCAL_QUORUM` reads per second, where 10,000 reads per second is the previous traffic peak. 

With this pattern, on-demand capacity mode instantly accommodates sustained traffic of up to 20,000 reads per second. If your application sustains traffic of 20,000 reads per second, that peak becomes your new previous peak, enabling subsequent traffic to reach up to 40,000 reads per second.

 If you need more than double your previous peak on a table, Amazon Keyspaces automatically allocates more capacity as your traffic volume increases. This helps ensure that your table has enough throughput capacity to process the additional requests. However, you might observe insufficient throughput capacity errors if you exceed double your previous peak within 30 minutes. 

For example, suppose that your application's traffic pattern varies between 5,000 and 10,000 `LOCAL_QUORUM` reads per second, where 20,000 reads per second is the previously reached traffic peak. In this case, the service recommends that you space your traffic growth over at least 30 minutes before driving up to 40,000 reads per second. 
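
The doubling behavior described above can be modeled with a small sketch. This is an illustrative model of the rule, not the service's actual scaling algorithm.

```python
class OnDemandPeakModel:
    """Tracks a table's previous traffic peak under on-demand capacity mode."""

    def __init__(self, previous_peak: int):
        self.previous_peak = previous_peak

    def instant_capacity(self) -> int:
        # Up to double the previous peak is accommodated instantly.
        return 2 * self.previous_peak

    def record_sustained_traffic(self, requests_per_second: int) -> None:
        # A sustained higher traffic level becomes the new previous peak.
        self.previous_peak = max(self.previous_peak, requests_per_second)

table = OnDemandPeakModel(previous_peak=10_000)
print(table.instant_capacity())          # 20000 reads per second, instantly
table.record_sustained_traffic(20_000)
print(table.instant_capacity())          # 40000 after the new peak is sustained
```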

To learn how to estimate read and write capacity consumption of a table, see [Estimate capacity consumption of read and write throughput in Amazon Keyspaces](capacity-examples.md).

To learn more about default quotas for your account and how to increase them, see [Quotas for Amazon Keyspaces (for Apache Cassandra)](quotas.md).

## Initial throughput for on-demand capacity mode
<a name="ReadWriteCapacityMode.InitialThroughput"></a>

If you create a new table with on-demand capacity mode enabled or switch an existing table to on-demand capacity mode for the first time, the table has the following previous peak settings, even though it hasn't served traffic previously using on-demand capacity mode:
+ **Newly created table with on-demand capacity mode:** The previous peak is 2,000 WRUs and 6,000 RRUs. You can drive up to double the previous peak immediately. Doing this enables newly created on-demand tables to serve up to 4,000 WRUs and 12,000 RRUs. 
+  **Existing table switched to on-demand capacity mode:** The previous peak is half the previous WCUs and RCUs provisioned for the table or the settings for a newly created table with on-demand capacity mode, whichever is higher. 
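
The initial-peak rules above can be expressed as a short sketch. This is illustrative only; using integer halving for the "half the provisioned capacity" rule is an assumption of this sketch.

```python
# Default previous peak for a newly created on-demand table.
NEW_TABLE_PEAK = {"wru": 2_000, "rru": 6_000}

def switched_table_peak(provisioned_wcus: int, provisioned_rcus: int) -> dict:
    """Previous peak after switching a provisioned table to on-demand mode:
    half the provisioned capacity or the new-table defaults,
    whichever is higher."""
    return {
        "wru": max(provisioned_wcus // 2, NEW_TABLE_PEAK["wru"]),
        "rru": max(provisioned_rcus // 2, NEW_TABLE_PEAK["rru"]),
    }

print(switched_table_peak(8_000, 4_000))  # {'wru': 4000, 'rru': 6000}
```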

# Configure provisioned capacity mode
<a name="ReadWriteCapacityMode.Provisioned"></a>

 If you choose *provisioned throughput* capacity mode, you specify the number of reads and writes per second that are required for your application. This helps you manage your Amazon Keyspaces usage to stay at or below a defined request rate to maintain predictability. To learn more about automatic scaling for provisioned throughput, see [Manage throughput capacity automatically with Amazon Keyspaces auto scaling](autoscaling.md). 

Provisioned throughput capacity mode is a good option if any of the following is true: 
+ You have predictable application traffic. 
+ You run applications whose traffic is consistent or ramps up gradually. 
+ You can forecast capacity requirements.

## Read capacity units and write capacity units
<a name="ReadWriteCapacityMode.Provisioned.Units"></a>

 For provisioned throughput capacity mode tables, you specify throughput capacity in terms of read capacity units (RCUs) and write capacity units (WCUs): 
+ One *RCU* represents one `LOCAL_QUORUM` read per second, or two `LOCAL_ONE` reads per second, for a row up to 4 KB in size. If you need to read a row that is larger than 4 KB, the read operation uses additional RCUs. 

  The total number of RCUs required depends on the row size, and whether you want `LOCAL_QUORUM` or `LOCAL_ONE` reads. For example, if your row size is 8 KB, you require 2 RCUs to sustain one `LOCAL_QUORUM` read per second, and 1 RCU if you choose `LOCAL_ONE` reads. 
+ One *WCU* represents one write per second for a row up to 1 KB in size. All writes use `LOCAL_QUORUM` consistency, and there is no additional charge for using lightweight transactions (LWTs). If you need to write a row that is larger than 1 KB, the write operation uses additional WCUs. 

  The total number of WCUs required depends on the row size. For example, if your row size is 2 KB, you require 2 WCUs to sustain one write request per second. For more information about how to estimate read and write capacity consumption of a table, see [Estimate capacity consumption of read and write throughput in Amazon Keyspaces](capacity-examples.md).

If your application reads or writes larger rows (up to the Amazon Keyspaces maximum row size of 1 MB), it consumes more capacity units. To learn more about how to estimate the row size, see [Estimate row size in Amazon Keyspaces](calculating-row-size.md). For example, suppose that you create a provisioned table with 6 RCUs and 6 WCUs. With these settings, your application could do the following:
+ Perform `LOCAL_QUORUM` reads of up to 24 KB per second (4 KB × 6 RCUs).
+ Perform `LOCAL_ONE` reads of up to 48 KB per second (twice as much read throughput).
+ Write up to 6 KB per second (1 KB × 6 WCUs).
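
The worked example above maps directly to a few lines of arithmetic. This is a sketch of the sizing rules, not a metering implementation.

```python
def max_throughput_kb_per_second(rcus: int, wcus: int) -> dict:
    """Maximum sustained throughput for a provisioned table."""
    return {
        "local_quorum_reads_kb": rcus * 4,   # 4 KB per RCU per second
        "local_one_reads_kb": rcus * 4 * 2,  # LOCAL_ONE doubles read throughput
        "writes_kb": wcus * 1,               # 1 KB per WCU per second
    }

print(max_throughput_kb_per_second(rcus=6, wcus=6))
# {'local_quorum_reads_kb': 24, 'local_one_reads_kb': 48, 'writes_kb': 6}
```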

 *Provisioned throughput* is the maximum amount of throughput capacity an application can consume from a table. If your application exceeds your provisioned throughput capacity, you might observe insufficient capacity errors. 

For example, a read request that doesn’t have enough throughput capacity fails with a `Read_Timeout` exception and is posted to the `ReadThrottleEvents` metric. A write request that doesn’t have enough throughput capacity fails with a `Write_Timeout` exception and is posted to the `WriteThrottleEvents` metric. 

You can use Amazon CloudWatch to monitor your provisioned and actual throughput metrics and insufficient capacity events. For more information about these metrics, see [Amazon Keyspaces metrics and dimensions](metrics-dimensions.md). 

**Note**  
Repeated errors due to insufficient capacity can lead to client-side, driver-specific exceptions. For example, the DataStax Java driver fails with a `NoHostAvailableException`. 

To change the throughput capacity settings for tables, you can use the AWS Management Console or the `ALTER TABLE` statement in CQL. For more information, see [ALTER TABLE](cql.ddl.table.md#cql.ddl.table.alter).

To learn more about default quotas for your account and how to increase them, see [Quotas for Amazon Keyspaces (for Apache Cassandra)](quotas.md).

# View the capacity mode of a table in Amazon Keyspaces
<a name="ReadWriteCapacityMode.ProvisionedThroughput.ManagingCapacity"></a>

You can query the system table in the Amazon Keyspaces system keyspace to review capacity mode information about a table. You can also see whether a table is using on-demand or provisioned throughput capacity mode. If the table is configured with provisioned throughput capacity mode, you can see the throughput capacity provisioned for the table. 

You can also use the AWS CLI to view the capacity mode of a table.

To change the provisioned throughput of a table, see [Change the capacity mode of a table in Amazon Keyspaces](ReadWriteCapacityMode.SwitchReadWriteCapacityMode.md).

------
#### [ Cassandra Query Language (CQL) ]

**Example**

```
SELECT * from system_schema_mcs.tables where keyspace_name = 'mykeyspace' and table_name = 'mytable';
```

A table configured with on-demand capacity mode returns the following.

```
{
   "capacity_mode":{
      "last_update_to_pay_per_request_timestamp":"1579551547603",
      "throughput_mode":"PAY_PER_REQUEST"
   }
}
```

A table configured with provisioned throughput capacity mode returns the following.

```
{
   "capacity_mode":{
      "last_update_to_pay_per_request_timestamp":"1579048006000",
      "read_capacity_units":"5000",
      "throughput_mode":"PROVISIONED",
      "write_capacity_units":"6000"
   }
}
```

The `last_update_to_pay_per_request_timestamp` value is measured in milliseconds.

------
#### [ CLI ]

**View a table's throughput capacity mode using the AWS CLI**

```
aws keyspaces get-table --keyspace-name myKeyspace --table-name myTable
```

The output of the command can look similar to this for a table in provisioned capacity mode.

```
"capacitySpecification": {
        "throughputMode": "PROVISIONED",
        "readCapacityUnits": 4000,
        "writeCapacityUnits": 2000
    }
```

The output for a table in on-demand mode looks like this.

```
"capacitySpecification": {
        "throughputMode": "PAY_PER_REQUEST",
        "lastUpdateToPayPerRequestTimestamp": "2024-10-03T10:48:19.092000+00:00"
    }
```

------

# Change the capacity mode of a table in Amazon Keyspaces
<a name="ReadWriteCapacityMode.SwitchReadWriteCapacityMode"></a>

When you switch a table from provisioned capacity mode to on-demand capacity mode, Amazon Keyspaces makes several changes to the structure of your table and partitions. This process can take several minutes. During the switching period, your table delivers throughput that is consistent with the previously provisioned WCU and RCU amounts. 

When you switch from on-demand capacity mode back to provisioned capacity mode, your table delivers throughput that is consistent with the previous peak reached when the table was set to on-demand capacity mode.

The following waiting periods apply when you switch capacity modes:
+ You can switch a newly created table in on-demand mode to provisioned capacity mode at any time. However, you can only switch it back to on-demand mode 24 hours after the table’s creation timestamp. 
+ You can switch an existing table in on-demand mode to provisioned capacity mode at any time. However, you can switch capacity modes from provisioned to on-demand only once in a 24-hour period.

------
#### [ Cassandra Query Language (CQL) ]

**Change a table's throughput capacity mode using CQL**

1. To change a table's capacity mode to `PROVISIONED`, you have to configure the read capacity and write capacity units based on your workload's expected peak values. The following statement is an example of this. You can also run this statement to adjust the read capacity or the write capacity units of the table.

   ```
   ALTER TABLE catalog.book_awards WITH CUSTOM_PROPERTIES={'capacity_mode':{'throughput_mode': 'PROVISIONED', 'read_capacity_units': 6000, 'write_capacity_units': 3000}};
   ```

   To configure provisioned capacity mode with auto-scaling, see [Configure automatic scaling on an existing table](autoscaling.configureTable.md).

1. To change the capacity mode of a table to on-demand mode, set the throughput mode to `PAY_PER_REQUEST`. The following statement is an example of this.

   ```
   ALTER TABLE catalog.book_awards WITH CUSTOM_PROPERTIES={'capacity_mode':{'throughput_mode': 'PAY_PER_REQUEST'}};
   ```

1. You can use the following statement to confirm the table's capacity mode.

   ```
   SELECT * from system_schema_mcs.tables where keyspace_name = 'catalog' and table_name = 'book_awards';
   ```

   A table configured with on-demand capacity mode returns the following.

   ```
   {
      "capacity_mode":{
         "last_update_to_pay_per_request_timestamp":"1727952499092",
         "throughput_mode":"PAY_PER_REQUEST"
      }
   }
   ```

   The `last_update_to_pay_per_request_timestamp` value is measured in milliseconds.

------
#### [ CLI ]

**Change a table's throughput capacity mode using the AWS CLI**

1. To change the table's capacity mode to `PROVISIONED`, you have to configure the read capacity and write capacity units based on the expected peak values of your workload. The following command is an example of this. You can also run this command to adjust the read capacity or the write capacity units of the table.

   ```
   aws keyspaces update-table --keyspace-name catalog --table-name book_awards \
       --capacity-specification throughputMode=PROVISIONED,readCapacityUnits=6000,writeCapacityUnits=3000
   ```

   To configure provisioned capacity mode with auto-scaling, see [Configure automatic scaling on an existing table](autoscaling.configureTable.md).

1. To change the capacity mode of a table to on-demand mode, you set the throughput mode to `PAY_PER_REQUEST`. The following statement is an example of this.

   ```
   aws keyspaces update-table --keyspace-name catalog --table-name book_awards \
       --capacity-specification throughputMode=PAY_PER_REQUEST
   ```

1. You can use the following command to review the capacity mode that's configured for a table.

   ```
   aws keyspaces get-table --keyspace-name catalog --table-name book_awards
   ```

   The output for a table in on-demand mode looks like this.

   ```
   "capacitySpecification": {
           "throughputMode": "PAY_PER_REQUEST",
           "lastUpdateToPayPerRequestTimestamp": "2024-10-03T10:48:19.092000+00:00"
       }
   ```

------

# Configure pre-warming for tables in Amazon Keyspaces
<a name="warm-throughput"></a>

Amazon Keyspaces automatically scales storage partitions based on on-demand or provisioned throughput, but for new tables or sudden throughput peaks, it can take longer to allocate the required storage partitions. To ensure that a new or existing table has enough capacity to support anticipated peak throughput, you can manually set specific *warm throughput* values to *pre-warm* your table. 

*Warm throughput* refers to the number of read and write operations your Amazon Keyspaces table can instantaneously support. These values are available by default for all new and existing tables. If you are using on-demand mode, or if you update your provisioned throughput, Amazon Keyspaces ensures that your application is able to issue requests up to those values instantly.

Amazon Keyspaces automatically adjusts warm throughput values as your usage increases. To prepare for upcoming peak events, for example a migration from another database that requires loading terabytes of data in a short period of time, you can manually increase your table's warm throughput values. This is useful for planned peak events where request rates might increase by 10x, 100x, or more. First, assess whether the current warm throughput is sufficient to handle the expected traffic. Then, if you need to pre-warm the table for the planned peak workload, you can increase the warm throughput value manually without changing your throughput settings or [capacity mode](ReadWriteCapacityMode.md). 

You can pre-warm tables for read operations, write operations, or both. You can increase this value for new and existing single-Region and multi-Region tables, and the warm throughput settings apply automatically to all replicas of a multi-Region table. There is no limit to the number of Amazon Keyspaces tables you can pre-warm at any time. The time to complete pre-warming depends on the values you set and the size of the table. You can submit simultaneous pre-warm requests, and these requests don't interfere with any table operations. You can pre-warm your table up to the table quota limit for your account in that Region. Use the [Service Quotas console](https://console.aws.amazon.com/servicequotas) to check your current quotas and increase them if needed. 

The warm throughput values that Amazon Keyspaces adjusts based on your on-demand usage or provisioned capacity are available by default for all tables without additional charges. However, if you manually increase the default warm throughput values to pre-warm tables for peak traffic events, additional charges apply. For more information, see [Amazon Keyspaces pricing](https://aws.amazon.com/keyspaces/pricing/).

Here are some different scenarios and best practices you might consider when pre-warming Amazon Keyspaces tables.

## Warm throughput and uneven access patterns
<a name="warm-throughput-scenarios-uneven"></a>

A table might have a warm throughput of 30,000 read units per second and 10,000 write units per second, but you could still experience capacity exceeded events on reads or writes before hitting those values. This is likely due to a hot partition. While Amazon Keyspaces can keep scaling to support virtually unlimited throughput, each individual partition is limited to 1,000 write units per second and 3,000 read units per second. If your application drives too much traffic to a small portion of the table’s partitions, capacity exceeded events can occur even before you reach the table's warm throughput values. We recommend following [Amazon Keyspaces best practices](bp-partition-key-design.md) to ensure seamless scalability and avoid hot partitions.
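
Given the per-partition limits quoted above, you can roughly estimate the minimum number of storage partitions a workload needs. This sketch assumes perfectly even access across partitions, which is a simplifying assumption; hot partitions reduce the achievable throughput.

```python
import math

READS_PER_PARTITION = 3_000   # read units per second per partition
WRITES_PER_PARTITION = 1_000  # write units per second per partition

def min_partitions(read_units_per_sec: int, write_units_per_sec: int) -> int:
    """Lower bound on partitions needed to sustain the given throughput,
    assuming traffic is spread evenly across partition keys."""
    return max(
        math.ceil(read_units_per_sec / READS_PER_PARTITION),
        math.ceil(write_units_per_sec / WRITES_PER_PARTITION),
        1,
    )

print(min_partitions(30_000, 10_000))  # 10
```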

## Warm throughput for a provisioned table
<a name="warm-throughput-scenarios-provisioned"></a>

Consider a provisioned table that has a warm throughput of 30,000 read units per second and 10,000 write units per second but currently has a provisioned throughput of 4,000 RCUs and 8,000 WCUs. You can instantly scale the table's provisioned throughput up to 30,000 RCUs or 10,000 WCUs by updating your provisioned throughput settings. As you increase the provisioned throughput beyond these values, the warm throughput adjusts automatically to the new higher values, because you have established a new peak throughput. For example, if you set the provisioned throughput to 50,000 RCU, the warm throughput increases to 50,000 read units per second.

```
"ProvisionedThroughput": 
    {
        "ReadCapacityUnits": 4000,
        "WriteCapacityUnits": 8000 
    },
"WarmThroughput": 
    { 
        "ReadUnitsPerSecond": 30000,
        "WriteUnitsPerSecond": 10000
    }
```

## Warm throughput for an on-demand table
<a name="warm-throughput-scenarios-ondemand"></a>

A new on-demand table starts with a warm throughput of 12,000 read units per second and 4,000 write units per second. Your table can instantly accommodate sustained traffic up to these levels. When your requests exceed 12,000 read units per second or 4,000 write units per second, the warm throughput adjusts automatically to the higher values.

```
"WarmThroughput": 
    { 
        "ReadUnitsPerSecond": 12000,
        "WriteUnitsPerSecond": 4000
    }
```

## Best practices for pre-warming Amazon Keyspaces tables
<a name="prewarming-best-practices"></a>

Follow these best practices when implementing pre-warming for your Amazon Keyspaces tables:

Accurately estimate the required capacity  
Because pre-warming incurs a one-time cost, carefully calculate the throughput needed based on expected workload to avoid over-provisioning.

Consider the table's schema  
Tables with larger rows may require more partitions for the same throughput. Factor in your average row size when estimating pre-warming requirements.

Monitor table performance  
After pre-warming, use CloudWatch metrics to verify that your table is handling the load as expected. For more information, see [Monitor the performance of a pre-warmed table using Amazon CloudWatch](monitor-prewarming-cloudwatch.md).

Manage quotas  
If your application requires higher throughput than the default quotas allow (40,000 RCUs/WCUs or 2,000 partitions), request quota increases well in advance of your high-traffic event. To request a quota increase, use the [Service Quotas console](https://console.aws.amazon.com/servicequotas).

Optimize costs  
For temporary high-traffic events, consider using pre-warming instead of switching to provisioned mode with high capacity, as it may be more cost-effective for short duration events. For more information about pricing, see [Amazon Keyspaces pricing](https://aws.amazon.com/keyspaces/pricing/).

**Note**  
Monitor your application's performance metrics during the test phase to validate that your pre-warming configuration adequately supports your workload requirements.

**Topics**
+ [Warm throughput and uneven access patterns](#warm-throughput-scenarios-uneven)
+ [Warm throughput for a provisioned table](#warm-throughput-scenarios-provisioned)
+ [Warm throughput for an on-demand table](#warm-throughput-scenarios-ondemand)
+ [Best practices for pre-warming Amazon Keyspaces tables](#prewarming-best-practices)
+ [Create a new Amazon Keyspaces table with higher warm throughput](create-table-warm-throughput.md)
+ [Increase your existing Amazon Keyspaces table's warm throughput](update-warm-throughput.md)
+ [View warm throughput of an Amazon Keyspaces table](view-warm-throughput.md)
+ [Monitor the performance of a pre-warmed table using Amazon CloudWatch](monitor-prewarming-cloudwatch.md)

# Create a new Amazon Keyspaces table with higher warm throughput
<a name="create-table-warm-throughput"></a>

You can adjust the warm throughput values when you create your Amazon Keyspaces table by using the console, CQL, or the AWS CLI.

------
#### [ Console ]

**How to create a new table with warm-throughput settings**

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at [https://console.aws.amazon.com/keyspaces/home](https://console.aws.amazon.com/keyspaces/home).

1. In the navigation pane, choose **Tables**, and then choose **Create table**.

1. On the **Create table** page in the **Table details** section, select a keyspace and provide a name for the new table.

1. In the **Columns** section, create the schema for your table.

1. In the **Primary key** section, define the primary key of the table and select optional clustering columns.

1. In the **Table settings** section, choose **Customize settings**.

1. Continue to **Read/write capacity settings**.

1. For **Capacity mode**, you can choose either **On-demand** or **Provisioned**.

1. In the **Pre-warming for tables** section, you can increase the values for **Read units per second** and **Write units per second** as needed to prepare your table to handle planned peak events.

   The warm throughput values that Amazon Keyspaces adjusts based on your on-demand usage or provisioned capacity are available by default for all tables without additional charges. Note that if you manually increase the default warm throughput values to pre-warm the table for peak traffic events, additional charges apply. 

1. Configure other optional table features as needed. Then choose **Create table**.

------
#### [ Cassandra Query Language (CQL) ]
+ Create a table with warm throughput using one of the following methods:
  + For provisioned mode, create a table and specify the expected peak capacity for reads and writes using the following CQL syntax:

    ```
    CREATE TABLE catalog.book_awards (
       year int,
       award text,
       rank int,
       category text,
       book_title text,
       author text,
       publisher text,
       PRIMARY KEY ((year, award), category, rank))
    WITH CUSTOM_PROPERTIES = {  
        'capacity_mode': {
           'throughput_mode': 'PROVISIONED',
           'read_capacity_units': 20000,
           'write_capacity_units': 10000
         },
        'warm_throughput': {  
            'read_units_per_second': 40000,  
            'write_units_per_second': 20000  
         }
    };
    ```
  + For on-demand mode, create a table and specify the expected peak capacity for reads and writes using the following CQL syntax:

    ```
    CREATE TABLE catalog.book_awards (
       year int,
       award text,
       rank int,
       category text,
       book_title text,
       author text,
       publisher text,
       PRIMARY KEY ((year, award), category, rank))
    WITH CUSTOM_PROPERTIES = {  
        'capacity_mode': {
           'throughput_mode': 'PAY_PER_REQUEST'
         },
        'warm_throughput': {  
            'read_units_per_second': 40000,  
            'write_units_per_second': 20000  
         }
    };
    ```

  To confirm the capacity settings of the table, see [View warm throughput of an Amazon Keyspaces table](view-warm-throughput.md).

------
#### [ CLI ]

1. Create a table with warm throughput using the AWS CLI with one of the following methods.
   + Create a new table in provisioned mode and specify the expected peak read and write capacity values for the new table, as shown in the following example.

     ```
     aws keyspaces create-table \
     --keyspace-name 'catalog' \
     --table-name 'book_awards' \
     --schema-definition 'allColumns=[{name=year,type=int},{name=award,type=text},{name=rank,type=int},{name=category,type=text},{name=book_title,type=text},{name=author,type=text},{name=publisher,type=text}],partitionKeys=[{name=year},{name=award}],clusteringKeys=[{name=category,orderBy=ASC},{name=rank,orderBy=ASC}]' \
     --capacity-specification throughputMode=PROVISIONED,readCapacityUnits=20000,writeCapacityUnits=10000 \
     --warm-throughput-specification readUnitsPerSecond=40000,writeUnitsPerSecond=20000
     ```
   + Create a new table in on-demand mode and specify the expected peak read and write capacity values for the new table, as shown in the following example.

     ```
     aws keyspaces create-table \
     --keyspace-name 'catalog' \
     --table-name 'book_awards' \
     --schema-definition 'allColumns=[{name=year,type=int},{name=award,type=text},{name=rank,type=int},{name=category,type=text},{name=book_title,type=text},{name=author,type=text},{name=publisher,type=text}],partitionKeys=[{name=year},{name=award}],clusteringKeys=[{name=category,orderBy=ASC},{name=rank,orderBy=ASC}]' \
     --warm-throughput-specification readUnitsPerSecond=40000,writeUnitsPerSecond=20000
     ```

1. The command returns the ARN of the table, as shown in the following example.

   ```
   {
       "resourceArn": "arn:aws::cassandra:us-east-1:111122223333:/keyspace/catalog/table/book_awards>"
   }
   ```

   To confirm the capacity settings of the table, see [View warm throughput of an Amazon Keyspaces table](view-warm-throughput.md).

------
#### [ Java ]

**Create a new table using the SDK for Java.**
+ Create a new table in provisioned mode and specify the expected peak read and write capacity values for the new table, as shown in the following code example.

  ```
  import java.util.Arrays;
  import java.util.List;
  
  import software.amazon.awssdk.services.keyspaces.KeyspacesClient;
  import software.amazon.awssdk.services.keyspaces.model.*;
  
  public class PreWarmingExample {
      public static void main(String[] args) {
          KeyspacesClient keyspacesClient = KeyspacesClient.builder().build();
  
          // Define schema
          List<ColumnDefinition> columns = Arrays.asList(
              ColumnDefinition.builder().name("year").type("int").build(),
              ColumnDefinition.builder().name("award").type("text").build(),
              ColumnDefinition.builder().name("rank").type("int").build(),
              ColumnDefinition.builder().name("category").type("text").build(),
              ColumnDefinition.builder().name("book_title").type("text").build(),
              ColumnDefinition.builder().name("author").type("text").build(),
              ColumnDefinition.builder().name("publisher").type("text").build()
          );
          
          List<PartitionKey> partitionKeys = Arrays.asList(
              PartitionKey.builder().name("year").build(),
              PartitionKey.builder().name("award").build()
          );
          
          List<ClusteringKey> clusteringKeys = Arrays.asList(
              ClusteringKey.builder().name("category").orderBy("ASC").build(),
              ClusteringKey.builder().name("rank").orderBy("ASC").build()
          );
          
          SchemaDefinition schema = SchemaDefinition.builder()
              .allColumns(columns)
              .partitionKeys(partitionKeys)
              .clusteringKeys(clusteringKeys)
              .build();
  
          // Define capacity specification
          CapacitySpecification capacitySpec = CapacitySpecification.builder()
              .throughputMode(ThroughputMode.PROVISIONED)
              .readCapacityUnits(20000L)
              .writeCapacityUnits(10000L)
              .build();
              
          // Define warm throughput specification
          WarmThroughputSpecification warmThroughput = WarmThroughputSpecification.builder()
              .readUnitsPerSecond(40000L)
              .writeUnitsPerSecond(20000L)
              .build();
  
          // Create table with PreWarming
          CreateTableRequest request = CreateTableRequest.builder()
              .keyspaceName("catalog")
              .tableName("book_awards")
              .schemaDefinition(schema)
              .capacitySpecification(capacitySpec)
              .warmThroughputSpecification(warmThroughput)
              .build();
              
          CreateTableResponse response = keyspacesClient.createTable(request);
          System.out.println("Table created with ARN: " + response.resourceArn());
      }
  }
  ```

------

# Increase your existing Amazon Keyspaces table's warm throughput
<a name="update-warm-throughput"></a>

You can increase your Amazon Keyspaces table's current warm throughput values using the console, CQL, the AWS CLI, or the AWS SDK.

------
#### [ Console ]

**How to increase the pre-warming settings of a table using the console**

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at [https://console.aws.amazon.com/keyspaces/home](https://console.aws.amazon.com/keyspaces/home).

1. In the navigation pane, choose **Tables**, and then choose the table that you want to update.

1. On the **Capacity** tab of the table, continue to **Pre-warming for tables**.

1. In the **Pre-warming for tables** section, choose **Edit**.

1. On the **Edit pre-warming for tables** page, you can update the values for **Read units per second** and for **Write units per second**.

1. Choose **Save changes**. Your table is updated with the specified pre-warming settings. 

------
#### [ Cassandra Query Language (CQL) ]

**Increase the warm throughput settings of a table using CQL**
+ Use the `ALTER TABLE` statement to increase the warm throughput of a table, as shown in the following example.

  ```
  ALTER TABLE catalog.book_awards 
  WITH CUSTOM_PROPERTIES = {
      'warm_throughput': {  
          'read_units_per_second': 60000,  
          'write_units_per_second': 30000  
      }
  };
  ```

  To confirm the updated capacity settings of the table, see [View warm throughput of an Amazon Keyspaces table](view-warm-throughput.md).

------
#### [ CLI ]

**Increase the pre-warming settings of a table using the AWS CLI**
+ To increase the warm throughput of a table, use the `update-table` command, as shown in the following example.

  ```
  aws keyspaces update-table \
  --keyspace-name 'catalog' \
  --table-name 'book_awards' \
  --warm-throughput-specification readUnitsPerSecond=60000,writeUnitsPerSecond=30000
  ```

  To confirm the updated capacity settings of the table, see [View warm throughput of an Amazon Keyspaces table](view-warm-throughput.md).

------
#### [ Java ]

**Update the pre-warming settings of a table using the SDK for Java.**
+ Update the warm throughput settings for a table, as shown in the following code example.

  ```
  import software.amazon.awssdk.services.keyspaces.KeyspacesClient;
  import software.amazon.awssdk.services.keyspaces.model.*;
  
  public class UpdatePreWarmingExample {
      public static void main(String[] args) {
          KeyspacesClient keyspacesClient = KeyspacesClient.builder().build();
  
          // Define new warm throughput specification
          WarmThroughputSpecification warmThroughput = WarmThroughputSpecification.builder()
              .readUnitsPerSecond(60000L)
              .writeUnitsPerSecond(30000L)
              .build();
  
          // Update table with new PreWarming settings
          UpdateTableRequest request = UpdateTableRequest.builder()
              .keyspaceName("catalog")
              .tableName("book_awards")
              .warmThroughputSpecification(warmThroughput)
              .build();
              
          UpdateTableResponse response = keyspacesClient.updateTable(request);
          System.out.println("Table update requested: " + response.resourceArn());
      }
  }
  ```

------

# View warm throughput of an Amazon Keyspaces table
<a name="view-warm-throughput"></a>

You can view your Amazon Keyspaces table's current warm throughput values using the console, CQL, the AWS CLI, or the AWS SDK.

------
#### [ Console ]

**How to view your table's pre-warming settings using the console**

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at [https://console.aws.amazon.com/keyspaces/home](https://console.aws.amazon.com/keyspaces/home).

1. In the navigation pane, choose **Tables**, and then choose the table that you want to review.

1. On the **Capacity** tab of the table, continue to **Pre-warming for tables**. 

------
#### [ Cassandra Query Language (CQL) ]

**View the warm-throughput settings of a table using CQL**
+ To view the warm-throughput settings of a table, you can use the following CQL statement.

  ```
  SELECT custom_properties
  FROM system_schema_mcs.tables 
  WHERE keyspace_name='catalog' and table_name='book_awards';
  
  // Output:
  ...
  custom_properties
  ----------------------------------------------------------------------------------
  {
      'warm_throughput': 
      {
          'read_units_per_second': '40000', 
          'write_units_per_second': '20000', 
          'status': 'AVAILABLE'
      }
  }
  ...
  ```

------
#### [ CLI ]

**View the warm-throughput settings of a table using the AWS CLI**
+ You can view the warm-throughput settings of a table using the `get-table` command as shown in the following example.

  ```
  aws keyspaces get-table \
  --keyspace-name 'catalog' \
  --table-name 'book_awards'
  ```

  The following shows example output of the `get-table` command for a single-Region table in provisioned mode.

  ```
  {
      "keyspaceName": "catalog",
      "tableName": "book_awards",
      ... Existing Fields ...,
      "capacitySpecificationSummary": {
          "throughputMode": "PROVISIONED",
          "readCapacityUnits": 20000,
          "writeCapacityUnits": 10000
      },
      "warmThroughputSpecificationSummary": {
          "readUnitsPerSecond": 40000,
          "writeUnitsPerSecond": 20000,
          "status": "AVAILABLE"
      }
  }
  ```

  The following shows example output for a single-Region table in on-demand mode.

  ```
  {
      "keyspaceName": "catalog",
      "tableName": "book_awards_ondemand",
      ... Existing Fields ...,
      "capacitySpecification": {
          "throughputMode": "PAY_PER_REQUEST"
      },
      "warmThroughputSpecificationSummary": {
          "readUnitsPerSecond": 40000,
          "writeUnitsPerSecond": 20000,
          "status": "AVAILABLE"
      }
  }
  ```

------
#### [ Java ]

**Read the pre-warming settings of a table using the SDK for Java.**
+ Read the warm throughput values of a table using the `GetTable` operation, as shown in the following code example.

  ```
  import software.amazon.awssdk.services.keyspaces.KeyspacesClient;
  import software.amazon.awssdk.services.keyspaces.model.*;
  
  public class GetTableWithPreWarmingExample {
      public static void main(String[] args) {
          KeyspacesClient keyspacesClient = KeyspacesClient.builder().build();
  
          // Get table details including PreWarming specification
          GetTableRequest request = GetTableRequest.builder()
              .keyspaceName("catalog")
              .tableName("book_awards")
              .build();
              
          GetTableResponse response = keyspacesClient.getTable(request);
          
          // Access the warm throughput details
          if (response.warmThroughputSpecificationSummary() != null) {
              WarmThroughputSpecificationSummary warmThroughputSummary = response.warmThroughputSpecificationSummary();
              System.out.println("Warm throughput status: " + warmThroughputSummary.statusAsString());
              System.out.println("Read units: " + warmThroughputSummary.readUnitsPerSecond());
              System.out.println("Write units: " + warmThroughputSummary.writeUnitsPerSecond());
              
              // Check whether the warm throughput settings are active
              if ("AVAILABLE".equals(warmThroughputSummary.statusAsString())) {
                  System.out.println("Table is fully pre-warmed and ready for high throughput");
              } else if ("UPDATING".equals(warmThroughputSummary.statusAsString())) {
                  System.out.println("The table's warm throughput settings are currently being updated");
              }
          } else {
              System.out.println("No warm throughput settings were returned for the table");
          }
      }
  }
  ```

------

# Monitor the performance of a pre-warmed table using Amazon CloudWatch
<a name="monitor-prewarming-cloudwatch"></a>

Amazon Keyspaces pre-warming doesn't introduce new CloudWatch metrics, but you can monitor the performance of pre-warmed tables using existing Amazon Keyspaces metrics:

SuccessfulRequestLatency  
Monitor this metric to verify that the pre-warmed table is handling requests with expected latency.

WriteThrottleEvents and ReadThrottleEvents  
These metrics should remain low for a properly pre-warmed table. If you see insufficient capacity errors despite pre-warming, you might need to adjust your warm-throughput values.

ConsumedReadCapacityUnits and ConsumedWriteCapacityUnits  
These metrics show the actual consumption of capacity, which can help validate if your pre-warming configuration is appropriate.

ProvisionedReadCapacityUnits and ProvisionedWriteCapacityUnits  
For provisioned tables, these metrics show the currently allocated capacity.

These metrics can be viewed in the CloudWatch console or queried using the CloudWatch API. For more information, see [Monitoring Amazon Keyspaces with Amazon CloudWatch](monitoring-cloudwatch.md).

# Manage throughput capacity automatically with Amazon Keyspaces auto scaling
<a name="autoscaling"></a>

Many database workloads are cyclical in nature or are difficult to predict in advance. For example, consider a social networking app where most of the users are active during daytime hours. The database must be able to handle the daytime activity, but there's no need for the same levels of throughput at night. 

Another example might be a new mobile gaming app that is experiencing rapid adoption. If the game becomes very popular, it could exceed the available database resources, which would result in slow performance and unhappy customers. These kinds of workloads often require manual intervention to scale database resources up or down in response to varying usage levels.

Amazon Keyspaces (for Apache Cassandra) helps you provision throughput capacity efficiently for variable workloads by adjusting throughput capacity automatically in response to actual application traffic. Amazon Keyspaces uses the Application Auto Scaling service to increase and decrease a table's read and write capacity on your behalf. For more information about Application Auto Scaling, see the [Application Auto Scaling User Guide](https://docs.aws.amazon.com/autoscaling/application/userguide/). 

**Note**  
To get started with Amazon Keyspaces automatic scaling quickly, see [Configure and update Amazon Keyspaces automatic scaling policies](autoscaling.configure.md).

## How Amazon Keyspaces automatic scaling works
<a name="autoscaling.HowItWorks"></a>

The following diagram provides a high-level overview of how Amazon Keyspaces automatic scaling manages throughput capacity for a table.

![\[A diagram showing the different services involved when a user makes a change to an Amazon Keyspaces table. The services are Amazon CloudWatch, Amazon SNS, and Application Auto Scaling, which issues the ALTER TABLE statement to change the capacity based on the user's read or write usage.\]](https://docs.aws.amazon.com/keyspaces/latest/devguide/images/keyspaces_auto-scaling.png)




To enable automatic scaling for a table, you create a *scaling policy*. The scaling policy specifies whether you want to scale read capacity or write capacity (or both), and the minimum and maximum provisioned capacity unit settings for the table.

The scaling policy also defines a *target utilization*. Target utilization is the ratio of consumed capacity units to provisioned capacity units at a point in time, expressed as a percentage. Automatic scaling uses a *target tracking* algorithm to adjust the provisioned throughput of the table upward or downward in response to actual workloads. It does this so that the actual capacity utilization remains at or near your target utilization.

 You can set the automatic scaling target utilization values between 20 and 90 percent for your read and write capacity. The default target utilization rate is 70 percent. You can set the target utilization to be a lower percentage if your traffic changes quickly and you want capacity to begin scaling up sooner. You can also set the target utilization rate to a higher rate if your application traffic changes more slowly and you want to reduce the cost of throughput. 
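To see how the target utilization setting translates into headroom, note that a table scales up when sustained consumption exceeds the provisioned capacity multiplied by the target utilization. The following sketch is illustrative only, not part of any AWS SDK; it shows that threshold for two hypothetical targets.

```java
public class TargetUtilizationHeadroom {
    // Consumed capacity level (units/sec) above which sustained traffic
    // triggers a scale-up for a given target utilization.
    static double scaleUpThreshold(long provisionedUnits, double targetUtilization) {
        return provisionedUnits * targetUtilization;
    }

    public static void main(String[] args) {
        // 1,000 provisioned units with a 70% target: scale-up starts above ~700 units/sec
        System.out.println(scaleUpThreshold(1000, 0.70));
        // A lower 20% target leaves more headroom, so scale-up starts earlier (~200 units/sec)
        System.out.println(scaleUpThreshold(1000, 0.20));
    }
}
```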

For more information about scaling policies, see [Target tracking scaling policies for Application Auto Scaling](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking.html) in the *Application Auto Scaling User Guide*.

When you create a scaling policy, Amazon Keyspaces creates two pairs of Amazon CloudWatch alarms on your behalf. Each pair represents your upper and lower boundaries for provisioned and consumed throughput settings. These CloudWatch alarms are triggered when the table's actual utilization deviates from your target utilization for a sustained period of time. To learn more about Amazon CloudWatch, see the [Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/).

When one of the CloudWatch alarms is triggered, Amazon Simple Notification Service (Amazon SNS) sends you a notification (if you have enabled it). The CloudWatch alarm then invokes Application Auto Scaling to evaluate your scaling policy. This in turn issues an Alter Table request to Amazon Keyspaces to adjust the table's provisioned capacity upward or downward as appropriate. To learn more about Amazon SNS notifications, see [Setting up Amazon SNS notifications](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/US_SetupSNS.html).

Amazon Keyspaces processes the Alter Table request by increasing (or decreasing) the table's provisioned throughput capacity so that it approaches your target utilization.

**Note**  
Amazon Keyspaces auto scaling modifies provisioned throughput settings only when the actual workload stays elevated (or depressed) for a sustained period of several minutes. The target tracking algorithm seeks to keep the target utilization at or near your chosen value over the long term. Sudden, short-duration spikes of activity are accommodated by the table's built-in burst capacity. 

## How auto scaling works for multi-Region tables
<a name="autoscaling.multi-region"></a>

To ensure that there's always enough read and write capacity for all table replicas in all AWS Regions of a multi-Region table in provisioned capacity mode, we recommend that you configure Amazon Keyspaces auto scaling.

When you use a multi-Region table in provisioned mode with auto scaling, you can't disable auto scaling for a single table replica. But you can adjust the table's read auto scaling settings for different Regions. For example, you can specify different read capacity and read auto scaling settings for each Region that the table is replicated in. 

The read auto scaling settings that you configure for a table replica in a specified Region overwrite the general auto scaling settings of the table. The write capacity, however, has to remain synchronized across all table replicas to ensure that there's enough capacity to replicate writes in all Regions.

Amazon Keyspaces auto scaling independently updates the provisioned capacity of the table in each AWS Region based on the usage in that Region. As a result, the provisioned capacity in each Region for a multi-Region table might be different when auto scaling is active.

You can configure the auto scaling settings of a multi-Region table and its replicas using the Amazon Keyspaces console, API, AWS CLI, or CQL. For more information on how to create and update auto scaling settings for multi-Region tables, see [Update the provisioned capacity and auto scaling settings for a multi-Region table in Amazon Keyspaces](tables-mrr-autoscaling.md).

**Note**  
If you use auto scaling for multi-Region tables, you must always use Amazon Keyspaces API operations to configure auto scaling settings. If you use Application Auto Scaling API operations directly to configure auto scaling settings, you don't have the ability to specify the AWS Regions of the multi-Region table. This can result in unsupported configurations.

## Usage notes
<a name="autoscaling.UsageNotes"></a>

Before you begin using Amazon Keyspaces automatic scaling, you should be aware of the following:
+ Amazon Keyspaces automatic scaling is not available in the Middle East (UAE) Region.
+ Amazon Keyspaces automatic scaling can increase read capacity or write capacity as often as necessary, in accordance with your scaling policy. All Amazon Keyspaces quotas remain in effect, as described in [Quotas for Amazon Keyspaces (for Apache Cassandra)](quotas.md). 
+ Amazon Keyspaces automatic scaling doesn't prevent you from manually modifying provisioned throughput settings. These manual adjustments don't affect any existing CloudWatch alarms that are attached to the scaling policy.
+ If you use the console to create a table with provisioned throughput capacity, Amazon Keyspaces automatic scaling is enabled by default. You can modify your automatic scaling settings at any time. For more information, see [Turn off Amazon Keyspaces auto scaling for a table](autoscaling.turnoff.md).
+ If you're using CloudFormation to create scaling policies, you should manage the scaling policies from CloudFormation so that the stack is in sync with the stack template. If you change scaling policies from Amazon Keyspaces, they will get overwritten with the original values from the CloudFormation stack template when the stack is reset.
+ If you use CloudTrail to monitor Amazon Keyspaces automatic scaling, you might see alerts for calls made by Application Auto Scaling as part of its configuration validation process. You can filter out these alerts by using the `invokedBy` field, which contains `application-autoscaling.amazonaws.com` for these validation checks.

# Configure and update Amazon Keyspaces automatic scaling policies
<a name="autoscaling.configure"></a>

You can use the console, CQL, or the AWS Command Line Interface (AWS CLI) to configure Amazon Keyspaces automatic scaling for new and existing tables. You can also modify automatic scaling settings or disable automatic scaling.

 For more advanced features like setting scale-in and scale-out cooldown times, we recommend that you use CQL or the AWS CLI to manage Amazon Keyspaces scaling policies.

**Topics**
+ [Configure permissions for Amazon Keyspaces automatic scaling](autoscaling.permissions.md)
+ [Create a new table with automatic scaling](autoscaling.createTable.md)
+ [Configure automatic scaling on an existing table](autoscaling.configureTable.md)
+ [View your table's Amazon Keyspaces auto scaling configuration](autoscaling.viewPolicy.md)
+ [Turn off Amazon Keyspaces auto scaling for a table](autoscaling.turnoff.md)
+ [View auto scaling activity for an Amazon Keyspaces table in Amazon CloudWatch](autoscaling.activity.md)

# Configure permissions for Amazon Keyspaces automatic scaling
<a name="autoscaling.permissions"></a>

To get started, confirm that the principal has the appropriate permissions to create and manage automatic scaling settings. In AWS Identity and Access Management (IAM), the AWS managed policy `AmazonKeyspacesFullAccess` is required to manage Amazon Keyspaces scaling policies. 

**Important**  
 `application-autoscaling:*` permissions are required to disable automatic scaling on a table. You must turn off auto scaling for a table before you can delete it. 

To set up an IAM user or role for Amazon Keyspaces console access and Amazon Keyspaces automatic scaling, add the following policy.

**To attach the `AmazonKeyspacesFullAccess` policy**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. On the IAM console dashboard, choose **Users**, and then choose your IAM user or role from the list.

1. On the **Summary** page, choose **Add permissions**.

1. Choose **Attach existing policies directly**.

1. From the list of policies, choose **AmazonKeyspacesFullAccess**, and then choose **Next: Review**.

1. Choose **Add permissions**.

# Create a new table with automatic scaling
<a name="autoscaling.createTable"></a>

When you create a new Amazon Keyspaces table, you can automatically enable auto scaling for the table's write or read capacity. This allows Amazon Keyspaces to contact Application Auto Scaling on your behalf to register the table as a scalable target and adjust the provisioned write or read capacity. 

For more information on how to create a multi-Region table and configure different auto scaling settings for table replicas, see [Create a multi-Region table in provisioned mode with auto scaling in Amazon Keyspaces](tables-mrr-create-provisioned.md).

**Note**  
Amazon Keyspaces automatic scaling requires the presence of a service-linked role (`AWSServiceRoleForApplicationAutoScaling_CassandraTable`) that performs automatic scaling actions on your behalf. This role is created automatically for you. For more information, see [Using service-linked roles for Amazon Keyspaces](using-service-linked-roles.md).

------
#### [ Console ]

**Create a new table with automatic scaling enabled using the console**

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at [https://console.aws.amazon.com/keyspaces/home](https://console.aws.amazon.com/keyspaces/home).

1. In the navigation pane, choose **Tables**, and then choose **Create table**.

1. On the **Create table** page in the **Table details** section, select a keyspace and provide a name for the new table.

1. In the **Columns** section, create the schema for your table.

1. In the **Primary key** section, define the primary key of the table and select optional clustering columns.

1. In the **Table settings** section, choose **Customize settings**.

1. Continue to **Read/write capacity settings**.

1. For **Capacity mode**, choose **Provisioned**.

1. In the **Read capacity** section, confirm that **Scale automatically** is selected.

   In this step, you select the minimum and maximum read capacity units for the table, as well as the target utilization.
   + **Minimum capacity units** – Enter the value for the minimum level of throughput that the table should always be ready to support. The value must be between 1 and the maximum throughput per second quota for your account (40,000 by default).
   + **Maximum capacity units** – Enter the maximum amount of throughput you want to provision for the table. The value must be between 1 and the maximum throughput per second quota for your account (40,000 by default).
   + **Target utilization** – Enter a target utilization rate between 20% and 90%. When traffic exceeds the defined target utilization rate, capacity is automatically scaled up. When traffic falls below the defined target, it is automatically scaled down again.
**Note**  
To learn more about default quotas for your account and how to increase them, see [Quotas for Amazon Keyspaces (for Apache Cassandra)](quotas.md).

1. In the **Write capacity** section, choose the same settings as defined in the previous step for read capacity, or configure capacity values manually.

1. Choose **Create table**. Your table is created with the specified automatic scaling parameters.

------
#### [ Cassandra Query Language (CQL) ]

**Create a new table with Amazon Keyspaces automatic scaling using CQL**

To configure auto scaling settings for a table programmatically, you use the `AUTOSCALING_SETTINGS` statement that contains the parameters for Amazon Keyspaces auto scaling. The parameters define the conditions that direct Amazon Keyspaces to adjust your table's provisioned throughput, and what additional optional actions to take. In this example, you define the auto scaling settings for *mytable*.

The policy contains the following elements:
+ `AUTOSCALING_SETTINGS` – Specifies if Amazon Keyspaces is allowed to adjust throughput capacity on your behalf. The following values are required:
  + `provisioned_write_capacity_autoscaling_update`:
    + `minimum_units`
    + `maximum_units`
  + `provisioned_read_capacity_autoscaling_update`:
    + `minimum_units`
    + `maximum_units`
  + `scaling_policy` – Amazon Keyspaces supports the target tracking policy. To define the target tracking policy, you configure the following parameters.
    + `target_value` – Amazon Keyspaces auto scaling ensures that the ratio of consumed capacity to provisioned capacity stays at or near this value. You define `target_value` as a percentage.
    + `disableScaleIn` – (Optional) A `boolean` that specifies whether scale-in is disabled or enabled for the table. This parameter is disabled by default. To turn on scale-in, set the value to `FALSE`, which means that capacity is automatically scaled down for the table on your behalf. 
    + `scale_out_cooldown` – A scale-out activity increases the provisioned throughput of your table. To add a cooldown period for scale-out activities, specify a value, in seconds, for `scale_out_cooldown`. If you don't specify a value, the default value is 0. For more information about target tracking and cooldown periods, see [Target Tracking Scaling Policies](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking.html) in the *Application Auto Scaling User Guide*. 
    + `scale_in_cooldown` – A scale-in activity decreases the provisioned throughput of your table. To add a cooldown period for scale-in activities, specify a value, in seconds, for `scale_in_cooldown`. If you don't specify a value, the default value is 0. For more information about target tracking and cooldown periods, see [Target Tracking Scaling Policies](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking.html) in the *Application Auto Scaling User Guide*.

**Note**  
To further understand how `target_value` works, suppose that you have a table with a provisioned throughput setting of 200 write capacity units. You decide to create a scaling policy for this table, with a `target_value` of 70 percent.  
Now suppose that you begin driving write traffic to the table so that the actual write throughput is 150 capacity units. The consumed-to-provisioned ratio is now (150 / 200), or 75 percent. This ratio exceeds your target, so auto scaling increases the provisioned write capacity to 215 so that the ratio is (150 / 215), or 69.77 percent—as close to your `target_value` as possible, but not exceeding it.
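
The arithmetic in this note can be sketched in a few lines. Note that `provisioned_for_target` is a hypothetical helper for illustration, not an Amazon Keyspaces API:

```python
import math

def provisioned_for_target(consumed_units, target_value_pct):
    # Smallest provisioned capacity that keeps the
    # consumed-to-provisioned ratio at or below the target percentage.
    return math.ceil(consumed_units / (target_value_pct / 100))

print(provisioned_for_target(150, 70))  # 215, a ratio of 150/215 = 69.77%
```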

For *mytable*, you set `target_value` for both read and write capacity to 50 percent. Amazon Keyspaces auto scaling adjusts the table's provisioned throughput within the range of 5–10 capacity units so that the consumed-to-provisioned ratio remains at or near 50 percent. For read capacity, you set the values for `scale_out_cooldown` and `scale_in_cooldown` to 60 seconds.

You can use the following statement to create a new Amazon Keyspaces table with auto scaling enabled. 

```
CREATE TABLE mykeyspace.mytable(pk int, ck int, PRIMARY KEY (pk, ck))
WITH CUSTOM_PROPERTIES = {  
    'capacity_mode': {  
        'throughput_mode': 'PROVISIONED',  
        'read_capacity_units': 1,  
        'write_capacity_units': 1  
    }
} AND AUTOSCALING_SETTINGS = {
    'provisioned_write_capacity_autoscaling_update': {
        'maximum_units': 10,  
        'minimum_units': 5,  
        'scaling_policy': {
            'target_tracking_scaling_policy_configuration': {
                'target_value': 50
            }  
        }  
    },  
    'provisioned_read_capacity_autoscaling_update': {  
        'maximum_units': 10,  
        'minimum_units': 5,  
        'scaling_policy': {  
            'target_tracking_scaling_policy_configuration': {  
                'target_value': 50,
                'scale_in_cooldown': 60,  
                'scale_out_cooldown': 60
            }  
        }  
    }
};
```

------
#### [ CLI ]

**Create a new table with Amazon Keyspaces automatic scaling using the AWS CLI**

To configure auto scaling settings for a table programmatically, you use the `--auto-scaling-specification` parameter, which takes the `autoScalingSpecification` structure for Amazon Keyspaces auto scaling. The structure's parameters define the conditions that direct Amazon Keyspaces to adjust your table's provisioned throughput, and which optional actions to take. In this example, you define the auto scaling settings for *mytable*.

The policy contains the following elements:
+ `autoScalingSpecification` – Specifies whether Amazon Keyspaces is allowed to adjust throughput capacity on your behalf. You can enable auto scaling for read and for write capacity separately. You must specify the following parameters for `autoScalingSpecification`:
  + `writeCapacityAutoScaling` – The maximum and minimum write capacity units.
  + `readCapacityAutoScaling` – The maximum and minimum read capacity units.
  + `scalingPolicy` – Amazon Keyspaces supports the target tracking policy. To define the target tracking policy, you configure the following parameters.
    + `targetValue` – Amazon Keyspaces auto scaling ensures that the ratio of consumed capacity to provisioned capacity stays at or near this value. You define `targetValue` as a percentage.
    + `disableScaleIn` – (Optional) A `boolean` that specifies whether scale-in is disabled for the table. Scale-in is enabled by default (the value is `false`), which means that Amazon Keyspaces automatically scales down the table's provisioned capacity on your behalf. To turn off scale-in, set the value to `true`. 
    + `scaleOutCooldown` – A scale-out activity increases the provisioned throughput of your table. To add a cooldown period for scale-out activities, specify a value, in seconds, for `scaleOutCooldown`. The default value is 0. For more information about target tracking and cooldown periods, see [ Target Tracking Scaling Policies](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking.html) in the *Application Auto Scaling User Guide*. 
    + `scaleInCooldown` – A scale-in activity decreases the provisioned throughput of your table. To add a cooldown period for scale-in activities, specify a value, in seconds, for `scaleInCooldown`. The default value is 0. For more information about target tracking and cooldown periods, see [ Target Tracking Scaling Policies](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking.html) in the *Application Auto Scaling User Guide*.

**Note**  
To further understand how `targetValue` works, suppose that you have a table with a provisioned throughput setting of 200 write capacity units. You decide to create a scaling policy for this table, with a `targetValue` of 70 percent.  
Now suppose that you begin driving write traffic to the table so that the actual write throughput is 150 capacity units. The consumed-to-provisioned ratio is now (150 / 200), or 75 percent. This ratio exceeds your target, so auto scaling increases the provisioned write capacity to 215 so that the ratio is (150 / 215), or 69.77 percent, which is as close to your `targetValue` as possible without exceeding it.

For *mytable*, you set `targetValue` for both read and write capacity to 50 percent. Amazon Keyspaces auto scaling adjusts the table's provisioned throughput within the range of 5–10 capacity units so that the consumed-to-provisioned ratio remains at or near 50 percent. For read capacity, you set the values for `scaleOutCooldown` and `scaleInCooldown` to 60 seconds.

When creating tables with complex auto scaling settings, it's helpful to load the auto scaling settings from a JSON file. For the following example, you can download the example JSON file from [auto-scaling.zip](samples/auto-scaling.zip) and extract `auto-scaling.json`, taking note of the path to the file. In this example, the JSON file is located in the current directory. For different file path options, see [ How to load parameters from a file](https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-parameters-file.html#cli-usage-parameters-file-how).

```
aws keyspaces create-table --keyspace-name mykeyspace --table-name mytable \
    --schema-definition 'allColumns=[{name=pk,type=int},{name=ck,type=int}],partitionKeys=[{name=pk}],clusteringKeys=[{name=ck,orderBy=ASC}]' \
    --capacity-specification throughputMode=PROVISIONED,readCapacityUnits=1,writeCapacityUnits=1 \
    --auto-scaling-specification file://auto-scaling.json
```
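
For reference, an `auto-scaling.json` file matching the settings described above might look like the following. This is a sketch based on the `autoScalingSpecification` structure; treat the downloadable sample file as the authoritative version.

```json
{
    "writeCapacityAutoScaling": {
        "autoScalingDisabled": false,
        "minimumUnits": 5,
        "maximumUnits": 10,
        "scalingPolicy": {
            "targetTrackingScalingPolicyConfiguration": {
                "targetValue": 50
            }
        }
    },
    "readCapacityAutoScaling": {
        "autoScalingDisabled": false,
        "minimumUnits": 5,
        "maximumUnits": 10,
        "scalingPolicy": {
            "targetTrackingScalingPolicyConfiguration": {
                "targetValue": 50,
                "scaleInCooldown": 60,
                "scaleOutCooldown": 60
            }
        }
    }
}
```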

------

# Configure automatic scaling on an existing table
<a name="autoscaling.configureTable"></a>

You can update an existing Amazon Keyspaces table to turn on auto scaling for the table's write or read capacity. If you're updating a table that is currently in on-demand capacity mode, then you first have to change the table's capacity mode to provisioned capacity mode.

For more information on how to update auto scaling settings for a multi-Region table, see [Update the provisioned capacity and auto scaling settings for a multi-Region table in Amazon Keyspaces](tables-mrr-autoscaling.md).

Amazon Keyspaces automatic scaling requires the presence of a service-linked role (`AWSServiceRoleForApplicationAutoScaling_CassandraTable`) that performs automatic scaling actions on your behalf. This role is created automatically for you. For more information, see [Using service-linked roles for Amazon Keyspaces](using-service-linked-roles.md).

------
#### [ Console ]

**Configure Amazon Keyspaces automatic scaling for an existing table**

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at [https://console.aws.amazon.com/keyspaces/home](https://console.aws.amazon.com/keyspaces/home).

1. Choose the table that you want to work with, and go to the **Capacity** tab.

1. In the **Capacity settings** section, choose **Edit**.

1. Under **Capacity mode**, make sure that the table is using **Provisioned** capacity mode.

1. Select **Scale automatically**, and then configure the read and write capacity settings as described in step 6 of [Create a new table with automatic scaling](autoscaling.createTable.md).

1. When the automatic scaling settings are defined, choose **Save**.

------
#### [ Cassandra Query Language (CQL) ]

**Configure an existing table with Amazon Keyspaces automatic scaling using CQL**

You can use the `ALTER TABLE` statement for an existing Amazon Keyspaces table to configure auto scaling for the table's write or read capacity. If you're updating a table that is currently in on-demand capacity mode, you have to set `capacity_mode` to provisioned. If your table is already in provisioned capacity mode, this field can be omitted. 

In the following example, the statement updates the table *mytable*, which is in on-demand capacity mode. The statement changes the capacity mode of the table to provisioned mode with auto scaling enabled. 

The write capacity is configured within the range of 5–10 capacity units with a target value of 50%. The read capacity is also configured within the range of 5–10 capacity units with a target value of 50%. For read capacity, you set the values for `scale_out_cooldown` and `scale_in_cooldown` to 60 seconds.

```
ALTER TABLE mykeyspace.mytable
WITH CUSTOM_PROPERTIES = {  
    'capacity_mode': {  
        'throughput_mode': 'PROVISIONED',  
        'read_capacity_units': 1,  
        'write_capacity_units': 1  
    }
} AND AUTOSCALING_SETTINGS = {
    'provisioned_write_capacity_autoscaling_update': {
        'maximum_units': 10,  
        'minimum_units': 5,  
        'scaling_policy': {
            'target_tracking_scaling_policy_configuration': {
                'target_value': 50
            }  
        }  
    },
    'provisioned_read_capacity_autoscaling_update': {  
        'maximum_units': 10,  
        'minimum_units': 5,  
        'scaling_policy': {  
            'target_tracking_scaling_policy_configuration': {  
                'target_value': 50,
                'scale_in_cooldown': 60,  
                'scale_out_cooldown': 60
            }  
        }  
    }
};
```

------
#### [ CLI ]

**Configure an existing table with Amazon Keyspaces automatic scaling using the AWS CLI**

For an existing Amazon Keyspaces table, you can turn on auto scaling for the table's write or read capacity using the `UpdateTable` operation. 

You can use the following command to turn on Amazon Keyspaces auto scaling for an existing table. The auto scaling settings for the table are loaded from a JSON file. For the following example, you can download the example JSON file from [auto-scaling.zip](samples/auto-scaling.zip) and extract `auto-scaling.json`, taking note of the path to the file. In this example, the JSON file is located in the current directory. For different file path options, see [ How to load parameters from a file](https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-parameters-file.html#cli-usage-parameters-file-how).

For more information about the auto scaling settings used in the following example, see [Create a new table with automatic scaling](autoscaling.createTable.md).

```
aws keyspaces update-table --keyspace-name mykeyspace --table-name mytable \
    --capacity-specification throughputMode=PROVISIONED,readCapacityUnits=1,writeCapacityUnits=1 \
    --auto-scaling-specification file://auto-scaling.json
```

------

# View your table's Amazon Keyspaces auto scaling configuration
<a name="autoscaling.viewPolicy"></a>

You can use the console, CQL, or the AWS CLI to view and update the Amazon Keyspaces automatic scaling settings of a table.

------
#### [ Console ]


**View automatic scaling settings using the console**

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at [https://console.aws.amazon.com/keyspaces/home](https://console.aws.amazon.com/keyspaces/home).

1. Choose the table you want to view and go to the **Capacity** tab.

1. In the **Capacity settings** section, choose **Edit**. You can now modify the settings in the **Read capacity** or **Write capacity** sections. For more information about these settings, see [Create a new table with automatic scaling](autoscaling.createTable.md).

------
#### [ Cassandra Query Language (CQL) ]

**View your table's Amazon Keyspaces automatic scaling policy using CQL**

To view details of the auto scaling configuration of a table, use the following command.

```
SELECT * FROM system_schema_mcs.autoscaling WHERE keyspace_name = 'mykeyspace' AND table_name = 'mytable';
```

The output for this command looks like this.

```
 keyspace_name | table_name | provisioned_read_capacity_autoscaling_update                                                                                                                                                                      | provisioned_write_capacity_autoscaling_update
---------------+------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 mykeyspace    | mytable    | {'minimum_units': 5, 'maximum_units': 10, 'scaling_policy': {'target_tracking_scaling_policy_configuration': {'scale_out_cooldown': 60, 'disable_scale_in': false, 'target_value': 50, 'scale_in_cooldown': 60}}} | {'minimum_units': 5, 'maximum_units': 10, 'scaling_policy': {'target_tracking_scaling_policy_configuration': {'scale_out_cooldown': 0, 'disable_scale_in': false, 'target_value': 50, 'scale_in_cooldown': 0}}}
```

------
#### [ CLI ]

**View your table's Amazon Keyspaces automatic scaling policy using the AWS CLI**

To view the auto scaling configuration of a table, you can use the `get-table-auto-scaling-settings` operation. The following CLI command is an example of this.

```
aws keyspaces get-table-auto-scaling-settings --keyspace-name mykeyspace --table-name mytable
```

The output for this command looks like this.

```
{
    "keyspaceName": "mykeyspace",
    "tableName": "mytable",
    "resourceArn": "arn:aws:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/table/mytable",
    "autoScalingSpecification": {
        "writeCapacityAutoScaling": {
            "autoScalingDisabled": false,
            "minimumUnits": 5,
            "maximumUnits": 10,
            "scalingPolicy": {
                "targetTrackingScalingPolicyConfiguration": {
                    "disableScaleIn": false,
                    "scaleInCooldown": 0,
                    "scaleOutCooldown": 0,
                    "targetValue": 50.0
                }
            }
        },
        "readCapacityAutoScaling": {
            "autoScalingDisabled": false,
            "minimumUnits": 5,
            "maximumUnits": 10,
            "scalingPolicy": {
                "targetTrackingScalingPolicyConfiguration": {
                    "disableScaleIn": false,
                    "scaleInCooldown": 60,
                    "scaleOutCooldown": 60,
                    "targetValue": 50.0
                }
            }
        }
    }
}
```
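
If you capture this output in a script, a few lines of Python can pull out the fields you care about. The JSON below is an abbreviated copy of the example output above:

```python
import json

# Abbreviated example output of `aws keyspaces get-table-auto-scaling-settings`.
output = '''
{
    "keyspaceName": "mykeyspace",
    "tableName": "mytable",
    "autoScalingSpecification": {
        "readCapacityAutoScaling": {
            "autoScalingDisabled": false,
            "minimumUnits": 5,
            "maximumUnits": 10,
            "scalingPolicy": {
                "targetTrackingScalingPolicyConfiguration": {
                    "disableScaleIn": false,
                    "scaleInCooldown": 60,
                    "scaleOutCooldown": 60,
                    "targetValue": 50.0
                }
            }
        }
    }
}
'''

settings = json.loads(output)
read = settings["autoScalingSpecification"]["readCapacityAutoScaling"]
policy = read["scalingPolicy"]["targetTrackingScalingPolicyConfiguration"]
print(read["minimumUnits"], read["maximumUnits"], policy["targetValue"])
# prints: 5 10 50.0
```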

------

# Turn off Amazon Keyspaces auto scaling for a table
<a name="autoscaling.turnoff"></a>

You can turn off Amazon Keyspaces auto scaling for your table at any time. If you no longer need to scale your table's read or write capacity, you should consider turning off auto scaling so that Amazon Keyspaces doesn't continue modifying your table’s read or write capacity settings. You can update the table using the console, CQL, or the AWS CLI.

Turning off auto scaling also deletes the CloudWatch alarms that were created on your behalf.

To delete the service-linked role used by Application Auto Scaling to access your Amazon Keyspaces table, follow the steps in [Deleting a service-linked role for Amazon Keyspaces](using-service-linked-roles-app-auto-scaling.md#delete-service-linked-role-app-auto-scaling). 

**Note**  
To delete the service-linked role that Application Auto Scaling uses, you must disable automatic scaling on all tables in the account across all AWS Regions.

------
#### [ Console ]

**Turn off Amazon Keyspaces automatic scaling for your table using the console**

1. Sign in to the AWS Management Console, and open the Amazon Keyspaces console at [https://console.aws.amazon.com/keyspaces/home](https://console.aws.amazon.com/keyspaces/home).

1. Choose the table you want to update and go to the **Capacity** tab. 

1. In the **Capacity settings** section, choose **Edit**. 

1. To disable Amazon Keyspaces automatic scaling, clear the **Scale automatically** check box. Disabling automatic scaling deregisters the table as a scalable target with Application Auto Scaling. 

------
#### [ Cassandra Query Language (CQL) ]

**Turn off Amazon Keyspaces automatic scaling for your table using CQL**

The following statement turns off auto scaling for write capacity of the table *mytable*. 

```
ALTER TABLE mykeyspace.mytable
WITH AUTOSCALING_SETTINGS = {
    'provisioned_write_capacity_autoscaling_update': {
        'autoscaling_disabled': true
    }
};
```

------
#### [ CLI ]

**Turn off Amazon Keyspaces automatic scaling for your table using the AWS CLI**

The following command turns off auto scaling for the table's read capacity. It also deletes the CloudWatch alarms that were created on your behalf.

```
aws keyspaces update-table --keyspace-name mykeyspace --table-name mytable \
    --auto-scaling-specification 'readCapacityAutoScaling={autoScalingDisabled=true}'
```

------

# View auto scaling activity for an Amazon Keyspaces table in Amazon CloudWatch
<a name="autoscaling.activity"></a>

You can monitor how Amazon Keyspaces automatic scaling uses resources by using Amazon CloudWatch, which generates metrics about your usage and performance. Follow the steps in the [Application Auto Scaling User Guide](https://docs.aws.amazon.com/autoscaling/application/userguide/monitoring-cloudwatch.html) to create a CloudWatch dashboard.

# Use burst capacity effectively in Amazon Keyspaces
<a name="throughput-bursting"></a>

Amazon Keyspaces provides some flexibility in your per-partition throughput provisioning by providing *burst capacity*. Whenever you're not fully using a partition's throughput, Amazon Keyspaces reserves a portion of that unused capacity for later *bursts* of throughput to handle usage spikes.

Amazon Keyspaces currently retains up to 5 minutes (300 seconds) of unused read and write capacity. During an occasional burst of read or write activity, these extra capacity units can be consumed quickly—even faster than the per-second provisioned throughput capacity that you've defined for your table.
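
The 300-second accrual behaves roughly like a token bucket. The following sketch is an illustrative model of that behavior, not the service's actual implementation:

```python
def simulate_burst(provisioned_units, demand_per_second):
    """Illustrative model: unused per-second capacity accrues into a
    bucket capped at 300 seconds' worth of provisioned throughput."""
    bucket_cap = 300 * provisioned_units
    burst = 0.0
    throttled = 0.0
    for demand in demand_per_second:
        if demand <= provisioned_units:
            # Bank the unused capacity, up to the 5-minute cap.
            burst = min(bucket_cap, burst + (provisioned_units - demand))
        else:
            # Spend burst capacity to cover the deficit.
            deficit = demand - provisioned_units
            spent = min(deficit, burst)
            burst -= spent
            throttled += deficit - spent
    return throttled

# 5 idle minutes bank 3,000 units, enough to absorb a 100-second spike
# of 20 reads/sec against 10 provisioned read capacity units.
print(simulate_burst(10, [0] * 300 + [20] * 100))  # 0.0 (nothing throttled)
```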

Amazon Keyspaces can also consume burst capacity for background maintenance and other tasks without prior notice.

Note that these burst capacity details might change in the future.