

# CQL language reference for Amazon Keyspaces (for Apache Cassandra)
<a name="cql"></a>

After you connect to an Amazon Keyspaces endpoint, you use Cassandra Query Language (CQL) to work with your database. CQL is similar in many ways to Structured Query Language (SQL).
+ **CQL elements** – This section covers the fundamental elements of CQL supported in Amazon Keyspaces, including identifiers, constants, terms, and data types. It explains concepts like string types, numeric types, collection types, and more. 
+ **Data Definition Language (DDL)** – DDL statements are used to manage data structures like keyspaces and tables in Amazon Keyspaces. This section covers statements for creating, altering, and dropping keyspaces and tables, as well as restoring tables from a point-in-time backup. 
+ **Data Manipulation Language (DML)** – DML statements are used to manage data within tables. This section covers statements for selecting, inserting, updating, and deleting data. It also explains advanced querying capabilities like using the `IN` operator, ordering results, and pagination. 
+ **Built-in functions** – Amazon Keyspaces supports a variety of built-in scalar functions that you can use in CQL statements. This section provides an overview of these functions, including examples of their usage. 

Throughout this topic, you'll find detailed syntax, examples, and best practices for using CQL effectively in Amazon Keyspaces.

**Topics**
+ [Cassandra Query Language (CQL) elements in Amazon Keyspaces](cql.elements.md)
+ [DDL statements (data definition language) in Amazon Keyspaces](cql.ddl.md)
+ [DML statements (data manipulation language) in Amazon Keyspaces](cql.dml.md)
+ [Built-in functions in Amazon Keyspaces](cql.functions.md)

# Cassandra Query Language (CQL) elements in Amazon Keyspaces
<a name="cql.elements"></a>

Learn about the Cassandra Query Language (CQL) elements that are supported by Amazon Keyspaces, including identifiers, constants, terms, and data types.

**Topics**
+ [Identifiers](#cql.elements.identifier)
+ [Constants](#cql.elements.constants)
+ [Terms](#cql.elements.term)
+ [Data types](#cql.data-types)
+ [JSON encoding of Amazon Keyspaces data types](#cql.data-types.JSON)

## Identifiers
<a name="cql.elements.identifier"></a>

Identifiers (or names) are used to identify tables, columns, and other objects. An identifier can be quoted or unquoted. The following rules apply.

```
identifier          ::=  unquoted_identifier | quoted_identifier
unquoted_identifier ::=  re('[a-zA-Z][a-zA-Z0-9_]*')
quoted_identifier   ::=  '"' (any character where " can appear if doubled)+ '"'
```
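To illustrate the difference, unquoted identifiers are case-insensitive and are stored in lowercase, while quoting preserves case and permits characters outside the unquoted pattern. The following sketch uses hypothetical table and column names.

```
CREATE TABLE my_keyspace.example_identifiers (
    user_id INT PRIMARY KEY,    -- unquoted identifier, stored as lowercase
    "UserName" TEXT             -- quoted identifier, case is preserved
);
```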

## Constants
<a name="cql.elements.constants"></a>

The following constants are defined.

```
constant ::=  string | integer | float | boolean | uuid | blob | NULL
string   ::=  '\'' (any character where ' can appear if doubled)+ '\''
              | '$$' (any character other than '$$') '$$'
integer  ::=  re('-?[0-9]+')
float    ::=  re('-?[0-9]+(\.[0-9]*)?([eE][+-]?[0-9]+)?') | NAN | INFINITY
boolean  ::=  TRUE | FALSE
uuid     ::=  hex{8}-hex{4}-hex{4}-hex{4}-hex{12}
hex      ::=  re("[0-9a-fA-F]")
blob     ::=  '0' ('x' | 'X') hex+
```
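Each of these constant forms can appear as a literal value in a statement. The following hypothetical `INSERT` shows several of them in one place (the table and columns are assumed to exist with matching types).

```
INSERT INTO my_keyspace.example_constants (id, name, price, in_stock, payload)
VALUES (
    f47ac10b-58cc-4372-a567-0e02b2c3d479,   -- uuid
    'O''Brien',                             -- string, with ' doubled
    19.99,                                  -- float
    true,                                   -- boolean
    0xCAFE                                  -- blob
);
```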

## Terms
<a name="cql.elements.term"></a>

A term denotes a value that can appear in a statement. Terms are defined by the following.

```
term                 ::=  constant | literal | function_call | arithmetic_operation | type_hint | bind_marker
literal              ::=  collection_literal | tuple_literal
function_call        ::=  identifier '(' [ term (',' term)* ] ')'
arithmetic_operation ::=  '-' term | term ('+' | '-' | '*' | '/' | '%') term
```
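For example, a function call is itself a term and can appear wherever a value is expected. The following hypothetical statement uses the built-in `uuid()`, `now()`, and `toTimestamp()` functions as terms.

```
INSERT INTO my_keyspace.example_events (event_id, created_at)
VALUES (uuid(), toTimestamp(now()));
```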

## Data types
<a name="cql.data-types"></a>

Amazon Keyspaces supports the following data types:

### String types
<a name="cql.data-types.string"></a>



| Data type | Description | 
| --- | --- | 
|  `ascii`  | Represents an ASCII character string. | 
|  `text`  | Represents a UTF-8 encoded string. | 
|  `varchar`  |  Represents a UTF-8 encoded string (`varchar` is an alias for `text`).  | 

### Numeric types
<a name="cql.data-types.numeric"></a>



| Data type | Description | 
| --- | --- | 
|  `bigint`  | Represents a 64-bit signed long. | 
|  `counter`  | Represents a 64-bit signed integer counter. For more information, see [Counters](#cql.data-types.numeric.counters). | 
|  `decimal`  | Represents a variable-precision decimal. | 
|  `double`  | Represents a 64-bit IEEE 754 floating point. | 
|  `float`  | Represents a 32-bit IEEE 754 floating point. | 
|  `int`  |  Represents a 32-bit signed int.  | 
|  `varint`  |  Represents an integer value within the +/-10^38 range.  | 

#### Counters
<a name="cql.data-types.numeric.counters"></a>

A `counter` column contains a 64-bit signed integer. The counter value is incremented or decremented using the [UPDATE](cql.dml.update.md) statement, and it cannot be set directly. This makes `counter` columns useful for tracking counts. For example, you can use counters to track the number of entries in a log file or the number of times a post has been viewed on a social network. The following restrictions apply to `counter` columns:
+ A column of type `counter` cannot be part of the `primary key` of a table.
+ In a table that contains one or more columns of type `counter`, all columns in that table must be of type `counter`.

In cases where a counter update fails (for example, because of timeouts or loss of connection with Amazon Keyspaces), the client doesn't know whether the counter value was updated. If the update is retried, the update to the counter value might get applied a second time.
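To illustrate the restrictions above, the following sketch creates a table whose only non-key column is a counter and increments it with `UPDATE` (the names are illustrative).

```
CREATE TABLE my_keyspace.page_views (
    page_id TEXT PRIMARY KEY,
    views COUNTER
);

UPDATE my_keyspace.page_views
SET views = views + 1
WHERE page_id = 'home';
```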

### Blob type
<a name="cql.data-types.blob"></a>



| Data type | Description | 
| --- | --- | 
|  `blob`  | Represents arbitrary bytes. | 

### Boolean type
<a name="cql.data-types.boolean"></a>



| Data type | Description | 
| --- | --- | 
|  `boolean`  | Represents true or false. | 

### Time-related types
<a name="cql.data-types.time"></a>



| Data type | Description | 
| --- | --- | 
|  `date`  | A string in the format `yyyy-mm-dd`. | 
|  `timestamp`  | 64-bit signed integer representing the date and time since epoch (January 1 1970 at 00:00:00 GMT) in milliseconds. | 
|  `timeuuid`  | Represents a [version 1 UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier#Version_1_(date-time_and_MAC_address)). | 

### Collection types
<a name="cql.data-types.collection"></a>



| Data type | Description | 
| --- | --- | 
|  `list`  | Represents an ordered collection of literal elements. | 
|  `map`  | Represents an unordered collection of key-value pairs. | 
|  `set`  | Represents an unordered collection of one or more literal elements. | 

You declare a collection column by using the collection type followed by another data type (for example, `TEXT` or `INT`) in angle brackets. You can create a column with a `SET` of `TEXT`, or you can create a `MAP` of `TEXT` and `INT` key-value pairs, as shown in the following example.

```
SET <TEXT>
MAP <TEXT, INT>
```

A *non-frozen* collection allows you to make updates to each individual collection element. Client-side timestamps and Time to Live (TTL) settings are stored for individual elements.

When you use the `FROZEN` keyword on a collection type, the values of the collection are serialized into a single immutable value, and Amazon Keyspaces treats them like a `BLOB`. This is a *frozen* collection. An `INSERT` or `UPDATE` statement overwrites the entire frozen collection. You can't make updates to individual elements inside a frozen collection.

Client-side timestamps and Time to Live (TTL) settings apply to the entire frozen collection, not to individual elements. `Frozen` collection columns can be part of the `PRIMARY KEY` of a table.

You can nest frozen collections. For example, you can define a `MAP` within a `SET` if the `MAP` is using the `FROZEN` keyword, as shown in the following example. 

```
SET <FROZEN <MAP <TEXT, INT>>>
```

Amazon Keyspaces supports nesting of up to 8 levels of frozen collections by default. For more information, see [Amazon Keyspaces service quotas](quotas.md#table). For more information about functional differences with Apache Cassandra, see [`FROZEN` collections](functional-differences.md#functional-differences.frozen-collections). For more information about CQL syntax, see [CREATE TABLE](cql.ddl.table.md#cql.ddl.table.create) and [ALTER TABLE](cql.ddl.table.md#cql.ddl.table.alter).
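The following sketch brings these rules together: two non-frozen collections whose elements can be updated individually, and a frozen collection that is always written as a whole (the table and column names are illustrative).

```
CREATE TABLE my_keyspace.user_profiles (
    user_id TEXT PRIMARY KEY,
    emails SET<TEXT>,               -- non-frozen, elements can be updated individually
    scores MAP<TEXT, INT>,          -- non-frozen map
    favorites FROZEN<LIST<TEXT>>    -- frozen, overwritten as a single value
);

UPDATE my_keyspace.user_profiles
SET emails = emails + {'new@example.com'}
WHERE user_id = 'user1';
```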

### Tuple type
<a name="cql.data-types.tuple"></a>

The `tuple` data type represents a bounded group of literal elements. You can use a tuple as an alternative to a `user defined type`. You don't need to use the `FROZEN` keyword for tuples. This is because a tuple is always frozen and you can't update elements individually. 
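For example, a tuple column is declared inline with its element types and written as a parenthesized group of values (the names below are illustrative).

```
CREATE TABLE my_keyspace.locations (
    location_id INT PRIMARY KEY,
    coordinates TUPLE<DOUBLE, DOUBLE>   -- always frozen, no FROZEN keyword needed
);

INSERT INTO my_keyspace.locations (location_id, coordinates)
VALUES (1, (47.6062, -122.3321));
```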

### Other types
<a name="cql.data-types.other"></a>



| Data type | Description | 
| --- | --- | 
|  `inet`  | A string representing an IP address, in either IPv4 or IPv6 format. | 

### Static
<a name="cql.data-types.static"></a>

In an Amazon Keyspaces table with clustering columns, you can use the `STATIC` keyword to create a static column of any type. 

The following statement is an example of this.

```
my_column INT STATIC
```
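In context, a static column declaration might look like the following sketch, where `customer_tier` holds a single value shared by all rows of a `customer_id` partition (the names are illustrative).

```
CREATE TABLE my_keyspace.customer_orders (
    customer_id TEXT,
    order_id TEXT,
    customer_tier TEXT STATIC,   -- one value per partition, shared by all rows
    amount DECIMAL,
    PRIMARY KEY (customer_id, order_id)
);
```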

For more information about working with static columns, see [Estimate capacity consumption for static columns in Amazon Keyspaces](static-columns.md).

### User-defined types (UDTs)
<a name="cql.data-types.user-defined"></a>

Amazon Keyspaces supports user-defined types (UDTs). You can use any valid Amazon Keyspaces data type to create a UDT, including collections and other existing UDTs. You create UDTs in a keyspace and can use them to define columns in any table in the keyspace.

For more information about CQL syntax, see [User-defined types (UDTs)](cql.ddl.type.md). For more information about working with UDTs, see [User-defined types (UDTs) in Amazon Keyspaces](udts.md).

To review how many UDTs are supported per keyspace, supported levels of nesting, and other default values and quotas related to UDTs, see [Quotas and default values for user-defined types (UDTs) in Amazon Keyspaces](quotas.md#quotas-udts).
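As a brief sketch, a UDT is created once in a keyspace and can then be used as a column type in tables of that keyspace; the type and table names below are illustrative, and the `FROZEN` keyword is shown as one common usage.

```
CREATE TYPE my_keyspace.address (
    street TEXT,
    city TEXT,
    zip_code TEXT
);

CREATE TABLE my_keyspace.customers (
    customer_id UUID PRIMARY KEY,
    shipping_address FROZEN<address>
);
```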

## JSON encoding of Amazon Keyspaces data types
<a name="cql.data-types.JSON"></a>

Amazon Keyspaces offers the same JSON data type mappings as Apache Cassandra. The following table describes the data types Amazon Keyspaces accepts in `INSERT JSON` statements and the data types Amazon Keyspaces uses when returning data with the `SELECT JSON` statement.

For single-field data types such as `float`, `int`, `UUID`, and `date`, you also can insert data as a `string`. For compound data types and collections, such as `tuple`, `map`, and `list`, you can also insert data as JSON or as an encoded `JSON string`.



| JSON data type | Data types accepted in `INSERT JSON` statements | Data types returned in `SELECT JSON` statements | Notes | 
| --- | --- | --- | --- | 
|  `ascii`  | string | string | Uses JSON character escape `\u`. | 
|  `bigint`  | integer, string | integer | String must be a valid 64-bit integer. | 
|  `blob`  | string | string | String should begin with `0x` followed by an even number of hex digits. | 
|  `boolean`  | boolean, string | boolean | String must be either `true` or `false`. | 
|  `date`  | string | string | Date in format `YYYY-MM-DD`, timezone UTC. | 
|  `decimal`  | integer, float, string | float | Can exceed 32-bit or 64-bit IEEE-754 floating point precision in client-side decoder. | 
|  `double`  | integer, float, string | float | String must be a valid integer or float. | 
|  `float`  | integer, float, string | float | String must be a valid integer or float. | 
|  `inet`  | string | string | IPv4 or IPv6 address. | 
|  `int`  | integer, string | integer | String must be a valid 32-bit integer. | 
|  `list`  | list, string | list | Uses the native JSON list representation. | 
|  `map`  | map, string | map | Uses the native JSON map representation. | 
|  `smallint`  | integer, string | integer | String must be a valid 16-bit integer. | 
|  `set`  | list, string | list | Uses the native JSON list representation. | 
|  `text`  | string | string | Uses JSON character escape `\u`. | 
|  `time`  | string | string | Time of day in format `HH:MM:SS[.fffffffff]`. | 
|  `timestamp`  | integer, string | string | A timestamp. String constants allow you to store timestamps as dates. Date stamps with format `YYYY-MM-DD HH:MM:SS.SSS` are returned. | 
|  `timeuuid`  | string | string | Type 1 UUID. See [constants](#cql.elements.constants) for the UUID format. | 
|  `tinyint`  | integer, string | integer | String must be a valid 8-bit integer. | 
|  `tuple`  | list, string | list | Uses the native JSON list representation. | 
|  `UDT`  | map, string | map | Uses the native JSON map representation with field names as keys. | 
|  `uuid`  | string | string | See [constants](#cql.elements.constants) for the UUID format. | 
|  `varchar`  | string | string | Uses JSON character escape `\u`. | 
|  `varint`  | integer, string | integer | Variable length; might overflow 32-bit or 64-bit integers in client-side decoder. | 
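For example, these mappings let you insert a row as a JSON document and read it back as JSON. Per the table above, the numeric `price` value may be passed either as a JSON number or as a string (the table and column names here are illustrative).

```
INSERT INTO my_keyspace.products JSON
    '{"product_id": 1, "name": "widget", "price": "19.99"}';

SELECT JSON * FROM my_keyspace.products WHERE product_id = 1;
```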

# DDL statements (data definition language) in Amazon Keyspaces
<a name="cql.ddl"></a>

*Data definition language* (DDL) is the set of Cassandra Query Language (CQL) statements that you use to manage data structures in Amazon Keyspaces (for Apache Cassandra), such as keyspaces and tables. You use DDL to create these data structures, modify them after they are created, and remove them when they're no longer in use. Amazon Keyspaces performs DDL operations asynchronously. For more information about how to confirm that an asynchronous operation has completed, see [Asynchronous creation and deletion of keyspaces and tables](functional-differences.md#functional-differences.table-keyspace-management).

 The following DDL statements are supported: 
+  [CREATE KEYSPACE](cql.ddl.keyspace.md#cql.ddl.keyspace.create) 
+  [ALTER KEYSPACE](cql.ddl.keyspace.md#cql.ddl.keyspace.alter) 
+  [DROP KEYSPACE](cql.ddl.keyspace.md#cql.ddl.keyspace.drop) 
+  [USE](cql.ddl.keyspace.md#cql.ddl.keyspace.use) 
+  [CREATE TABLE](cql.ddl.table.md#cql.ddl.table.create) 
+  [ALTER TABLE](cql.ddl.table.md#cql.ddl.table.alter) 
+  [RESTORE TABLE](cql.ddl.table.md#cql.ddl.table.restore) 
+  [DROP TABLE](cql.ddl.table.md#cql.ddl.table.drop) 
+  [CREATE TYPE](cql.ddl.type.md#cql.ddl.type.create) 
+  [DROP TYPE](cql.ddl.type.md#cql.ddl.type.drop) 

**Topics**
+ [Keyspaces](cql.ddl.keyspace.md)
+ [Tables](cql.ddl.table.md)
+ [User-defined types (UDTs)](cql.ddl.type.md)

# Keyspaces
<a name="cql.ddl.keyspace"></a>

A *keyspace* groups related tables that are relevant for one or more applications. In terms of a relational database management system (RDBMS), keyspaces are roughly similar to databases, tablespaces, or similar constructs.

**Note**  
In Apache Cassandra, keyspaces determine how data is replicated among multiple storage nodes. However, Amazon Keyspaces is a fully managed service: The details of its storage layer are managed on your behalf. For this reason, keyspaces in Amazon Keyspaces are logical constructs only, and aren't related to the underlying physical storage.

For information about quota limits and constraints for Amazon Keyspaces keyspaces, see [Quotas for Amazon Keyspaces (for Apache Cassandra)](quotas.md).

**Topics**
+ [CREATE KEYSPACE](#cql.ddl.keyspace.create)
+ [ALTER KEYSPACE](#cql.ddl.keyspace.alter)
+ [DROP KEYSPACE](#cql.ddl.keyspace.drop)
+ [USE](#cql.ddl.keyspace.use)

## CREATE KEYSPACE
<a name="cql.ddl.keyspace.create"></a>

Use the `CREATE KEYSPACE` statement to create a new keyspace.

**Syntax**

```
create_keyspace_statement ::= 
    CREATE KEYSPACE [ IF NOT EXISTS ] keyspace_name
    WITH options
```

Where:
+ `keyspace_name` is the name of the keyspace to be created.
+ *options* are one or more of the following:
  + `REPLICATION` – A map that indicates the replication strategy for the keyspace:
    + `SingleRegionStrategy` – For a single-Region keyspace. (Required)
    + `NetworkTopologyStrategy` – Specify at least two AWS Regions. The replication factor for each Region is three. (Optional)
  + `DURABLE_WRITES` – Writes to Amazon Keyspaces are always durable, so this option isn't required. However, if specified, the value must be `true`.
  + `TAGS` – A list of key-value pair tags to be attached to the resource when you create it. (Optional)

**Example**

Create a keyspace as follows.

```
CREATE KEYSPACE my_keyspace
    WITH REPLICATION = {'class': 'SingleRegionStrategy'}
    AND TAGS = {'key1':'val1', 'key2':'val2'};
```

To create a multi-Region keyspace, specify `NetworkTopologyStrategy` and include at least two AWS Regions. The replication factor for each Region is three.

```
CREATE KEYSPACE my_keyspace
    WITH REPLICATION = {'class':'NetworkTopologyStrategy', 'us-east-1':'3', 'ap-southeast-1':'3','eu-west-1':'3'};
```

## ALTER KEYSPACE
<a name="cql.ddl.keyspace.alter"></a>

You can use the `ALTER KEYSPACE` statement with the following *options*:
+ `REPLICATION` – Use this option to add a new AWS Region replica to a keyspace. You can add a new Region to a single-Region or to a multi-Region keyspace. 
+ `TAGS` – Use this option to add or remove tags from a keyspace.

**Syntax**

```
alter_keyspace_statement ::= 
    ALTER KEYSPACE keyspace_name
    WITH options
```

Where:
+ `keyspace_name` is the name of the keyspace to be altered.
+ *options* are one of the following:
  + `ADD | DROP TAGS` – A list of key-value pair tags to be added to or removed from the keyspace. 
  + `REPLICATION` – A map that indicates the replication strategy for the keyspace:
    + `class` – `NetworkTopologyStrategy` defines the keyspace as a multi-Region keyspace.
    + `region` – Specify one additional AWS Region for this keyspace. The replication factor for each Region is three.
  + `CLIENT_SIDE_TIMESTAMPS` – The default is `DISABLED`. You can only change the status to `ENABLED`.

**Examples**

Alter a keyspace as shown in the following example to add tags.

```
ALTER KEYSPACE my_keyspace ADD TAGS {'key1':'val1', 'key2':'val2'};
```

To add a third Region to a multi-Region keyspace, you can use the following statement.

```
ALTER KEYSPACE my_keyspace
WITH REPLICATION = {
    'class': 'NetworkTopologyStrategy',
    'us-east-1': '3',
    'us-west-2': '3',
    'us-west-1': '3'
} AND CLIENT_SIDE_TIMESTAMPS = {'status': 'ENABLED'};
```

## DROP KEYSPACE
<a name="cql.ddl.keyspace.drop"></a>

Use the `DROP KEYSPACE` statement to remove a keyspace—including all of its contents, such as tables.

**Syntax**

```
drop_keyspace_statement ::= 
    DROP KEYSPACE [ IF EXISTS ] keyspace_name
```

Where:
+ `keyspace_name` is the name of the keyspace to be dropped.

**Example**

```
DROP KEYSPACE my_keyspace;
```

## USE
<a name="cql.ddl.keyspace.use"></a>

Use the `USE` statement to define the current keyspace. This allows you to refer to objects bound to a specific keyspace, for example tables and types, without using the fully qualified name that includes the keyspace prefix. 

**Syntax**

```
use_statement ::= 
    USE keyspace_name
```

Where:
+ `keyspace_name` is the name of the keyspace to be used.

**Example**

```
USE my_keyspace;
```
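After the current keyspace is set, statements can refer to tables in it without the keyspace prefix, as in the following hypothetical query.

```
USE my_keyspace;

SELECT * FROM my_table;   -- resolves to my_keyspace.my_table
```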

# Tables
<a name="cql.ddl.table"></a>

*Tables* are the primary data structures in Amazon Keyspaces. Data in a table is organized into rows and columns. A subset of those columns is used to determine partitioning (and ultimately data placement) through the specification of a partition key.

Another set of columns can be defined as clustering columns, which means that they can participate as predicates in query execution. 

By default, new tables are created with *on-demand* throughput capacity. You can change the capacity mode for new and existing tables. For more information about read/write capacity throughput modes, see [Configure read/write capacity modes in Amazon Keyspaces](ReadWriteCapacityMode.md). 

For tables in provisioned mode, you can configure optional `AUTOSCALING_SETTINGS`. For more information about Amazon Keyspaces auto scaling and the available options, see [Configure automatic scaling on an existing table](autoscaling.configureTable.md).

For information about quota limits and constraints for Amazon Keyspaces tables, see [Quotas for Amazon Keyspaces (for Apache Cassandra)](quotas.md).

**Topics**
+ [CREATE TABLE](#cql.ddl.table.create)
+ [ALTER TABLE](#cql.ddl.table.alter)
+ [RESTORE TABLE](#cql.ddl.table.restore)
+ [DROP TABLE](#cql.ddl.table.drop)

## CREATE TABLE
<a name="cql.ddl.table.create"></a>

Use the `CREATE TABLE` statement to create a new table.

 **Syntax**

```
create_table_statement ::=  CREATE TABLE [ IF NOT EXISTS ] table_name
    '('
        column_definition 
        ( ',' column_definition )*
        [ ',' PRIMARY KEY '(' primary_key ')' ]
    ')' [ WITH table_options ]

column_definition      ::=  column_name cql_type [ FROZEN ][ STATIC ][ PRIMARY KEY ]

primary_key            ::=  partition_key [ ',' clustering_columns ]

partition_key          ::=  column_name
                              | '(' column_name ( ',' column_name )* ')'

clustering_columns     ::=  column_name ( ',' column_name )*

table_options          ::=  [table_options]
                              | CLUSTERING ORDER BY '(' clustering_order ')' [ AND table_options ]
                              | cdc
                              | CUSTOM_PROPERTIES
                              | AUTOSCALING_SETTINGS
                              | default_time_to_live
                              | TAGS

clustering_order       ::=  column_name (ASC | DESC) ( ',' column_name (ASC | DESC) )*
```

Where:
+ `table_name` is the name of the table to be created. The fully qualified name includes the keyspace prefix. Alternatively, you can set the current keyspace with the `USE` keyspace statement.
+ `column_definition` consists of the following:
  +  *`column_name`* – The name of the column.
  + `cql_type` – An Amazon Keyspaces data type (see [Data types](cql.elements.md#cql.data-types)).
  + `FROZEN` – Designates a column of a user-defined type or a collection type (for example, `LIST`, `SET`, or `MAP`) as frozen. A *frozen* collection is serialized into a single immutable value and treated like a `BLOB`. For more information, see [Collection types](cql.elements.md#cql.data-types.collection).
  + `STATIC` – Designates this column as static. Static columns store values that are shared by all rows in the same partition.
  + `PRIMARY KEY` – Designates this column as the table's primary key.
+ `primary_key` consists of the following:
  + `partition_key`
  + `clustering_columns`
+ `partition_key`:
  + The partition key can be a single column, or it can be a compound value composed of two or more columns. The partition key portion of the primary key is required and determines how Amazon Keyspaces stores your data. 
+ `clustering_columns`:
  + The optional clustering column portion of your primary key determines how the data is clustered and sorted within each partition. 
+ `table_options` consist of the following:
  + *`CLUSTERING ORDER BY`* – The default CLUSTERING ORDER on a table is composed of your clustering keys in the `ASC` (ascending) sort direction. Specify it to override the default sort behavior. 
  +  *`cdc`* – A boolean that specifies whether Amazon Keyspaces creates a change data capture (CDC) stream for the table. The default is `false`. To specify the `view type` when enabling a stream, set the `cdc_specification` with `CUSTOM_PROPERTIES`. 
  +  *`CUSTOM_PROPERTIES`* – A map of settings that are specific to Amazon Keyspaces.
    +  `capacity_mode`: Specifies the read/write throughput capacity mode for the table. The options are `throughput_mode:PAY_PER_REQUEST` and `throughput_mode:PROVISIONED`. The provisioned capacity mode requires `read_capacity_units` and `write_capacity_units` as inputs. The default is `throughput_mode:PAY_PER_REQUEST`.
    +  `cdc_specification`: Specifies the `view_type` of the CDC stream. The options are:
      + `NEW_AND_OLD_IMAGES` – both versions of the row, before and after the change. This is the default.
      + `NEW_IMAGE` – the version of the row after the change.
      + `OLD_IMAGE` – the version of the row before the change.
      + `KEYS_ONLY` – the partition and clustering keys of the row that was changed.

      For more information about CDC streams, see [Working with change data capture (CDC) streams in Amazon Keyspaces](cdc.md). For code examples, see [Enable a CDC stream when creating a new table in Amazon Keyspaces](keyspaces-enable-cdc-new-table.md).
    +  `client_side_timestamps`: Specifies if client-side timestamps are enabled or disabled for the table. The options are `{'status': 'enabled'}` and `{'status': 'disabled'}`. If it's not specified, the default is `status:disabled`. After client-side timestamps are enabled for a table, this setting cannot be disabled. 
    +  `encryption_specification`: Specifies the encryption options for encryption at rest. If it's not specified, the default is `encryption_type:AWS_OWNED_KMS_KEY`. The customer managed key encryption option requires the AWS KMS key in Amazon Resource Name (ARN) format as input: `kms_key_identifier:ARN`. 
    +  `point_in_time_recovery`: Specifies if point-in-time restore is enabled or disabled for the table. The options are `status:enabled` and `status:disabled`. If it's not specified, the default is `status:disabled`.
    + `replica_updates`: Specifies the settings of a multi-Region table that are specific to an AWS Region. For a multi-Region table, you can configure the table's read capacity differently per AWS Region. You can do this by configuring the following parameters. For more information and examples, see [Create a multi-Region table in provisioned mode with auto scaling in Amazon Keyspaces](tables-mrr-create-provisioned.md).
      + `region` – The AWS Region of the table replica with the following settings:
        + `read_capacity_units`
    +  `TTL`: Enables Time to Live custom settings for the table. To enable, use `status:enabled`. The default is `status:disabled`. After `TTL` is enabled, you can't disable it for the table.
  + `AUTOSCALING_SETTINGS` includes the following optional settings for tables in provisioned mode. For more information and examples, see [Create a new table with automatic scaling](autoscaling.createTable.md).
    + `provisioned_write_capacity_autoscaling_update`:
      + `autoscaling_disabled` – To enable auto scaling for write capacity, set the value to `false`. The default is `true`. (Optional)
      + `minimum_units` – The minimum level of write throughput that the table should always be ready to support. The value must be between 1 and the max throughput per second quota for your account (40,000 by default).
      + `maximum_units` – The maximum level of write throughput that the table should always be ready to support. The value must be between 1 and the max throughput per second quota for your account (40,000 by default).
      + `scaling_policy` – Amazon Keyspaces supports the target tracking policy. The auto scaling target is the provisioned write capacity of the table.
        + `target_tracking_scaling_policy_configuration` – To define the target tracking policy, you must define the target value. For more information about target tracking and cooldown periods, see [ Target Tracking Scaling Policies](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking.html) in the *Application Auto Scaling User Guide*. 
          + `target_value` – The target utilization rate of the table. Amazon Keyspaces auto scaling ensures that the ratio of consumed capacity to provisioned capacity stays at or near this value. You define `target_value` as a percentage. A double between 20 and 90. (Required)
          + `scale_in_cooldown` – A cooldown period in seconds between scaling activities that lets the table stabilize before another scale in activity starts. If no value is provided, the default is 0. (Optional)
          + `scale_out_cooldown` – A cooldown period in seconds between scaling activities that lets the table stabilize before another scale out activity starts. If no value is provided, the default is 0. (Optional)
          + `disable_scale_in`: A `boolean` that specifies whether `scale-in` is disabled for the table. The default is `false`, which means that capacity is automatically scaled down for the table on your behalf. To turn off `scale-in`, set the value to `true`. (Optional) 
    + `provisioned_read_capacity_autoscaling_update`:
      + `autoscaling_disabled` – To enable auto scaling for read capacity, set the value to `false`. The default is `true`. (Optional)
      + `minimum_units` – The minimum level of throughput that the table should always be ready to support. The value must be between 1 and the max throughput per second quota for your account (40,000 by default).
      + `maximum_units` – The maximum level of throughput that the table should always be ready to support. The value must be between 1 and the max throughput per second quota for your account (40,000 by default).
      + `scaling_policy` – Amazon Keyspaces supports the target tracking policy. The auto scaling target is the provisioned read capacity of the table.
        + `target_tracking_scaling_policy_configuration` – To define the target tracking policy, you must define the target value. For more information about target tracking and cooldown periods, see [ Target Tracking Scaling Policies](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking.html) in the *Application Auto Scaling User Guide*. 
          + `target_value` – The target utilization rate of the table. Amazon Keyspaces auto scaling ensures that the ratio of consumed capacity to provisioned capacity stays at or near this value. You define `target_value` as a percentage. A double between 20 and 90. (Required)
          + `scale_in_cooldown` – A cooldown period in seconds between scaling activities that lets the table stabilize before another scale in activity starts. If no value is provided, the default is 0. (Optional)
          + `scale_out_cooldown` – A cooldown period in seconds between scaling activities that lets the table stabilize before another scale out activity starts. If no value is provided, the default is 0. (Optional)
          + `disable_scale_in`: A `boolean` that specifies whether `scale-in` is disabled for the table. The default is `false`, which means that capacity is automatically scaled down for the table on your behalf. To turn off `scale-in`, set the value to `true`. (Optional) 
    + `replica_updates`: Specifies the AWS Region specific auto scaling settings of a multi-Region table. For a multi-Region table, you can configure the table's read capacity differently per AWS Region. You can do this by configuring the following parameters. For more information and examples, see [Update the provisioned capacity and auto scaling settings for a multi-Region table in Amazon Keyspaces](tables-mrr-autoscaling.md).
      + `region` – The AWS Region of the table replica with the following settings:
        + `provisioned_read_capacity_autoscaling_update`
          + `autoscaling_disabled` – To enable auto scaling for the table's read capacity, set the value to `false`. The default is `true`. (Optional) 
**Note**  
Auto scaling for a multi-Region table has to be either enabled or disabled for all replicas of the table.
          + `minimum_units` – The minimum level of read throughput that the table should always be ready to support. The value must be between 1 and the max throughput per second quota for your account (40,000 by default).
          + `maximum_units` – The maximum level of read throughput that the table should always be ready to support. The value must be between 1 and the max throughput per second quota for your account (40,000 by default).
          + `scaling_policy` – Amazon Keyspaces supports the target tracking policy. The auto scaling target is the provisioned read capacity of the table.
            + `target_tracking_scaling_policy_configuration` – To define the target tracking policy, you must define the target value. For more information about target tracking and cooldown periods, see [Target Tracking Scaling Policies](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking.html) in the *Application Auto Scaling User Guide*. 
              + `target_value` – The target utilization rate of the table. Amazon Keyspaces auto scaling ensures that the ratio of consumed read capacity to provisioned read capacity stays at or near this value. You define `target_value` as a percentage. A double between 20 and 90. (Required)
              + `scale_in_cooldown` – A cooldown period in seconds between scaling activities that lets the table stabilize before another scale in activity starts. If no value is provided, the default is 0. (Optional)
              + `scale_out_cooldown` – A cooldown period in seconds between scaling activities that lets the table stabilize before another scale out activity starts. If no value is provided, the default is 0. (Optional)
              + `disable_scale_in` – A `boolean` that specifies whether `scale-in` is disabled for the table. The default is `false`, which means that scale-in is enabled and read capacity is automatically scaled down for the table on your behalf. To disable scale-in, set the value to `true`. (Optional) 
  + `default_time_to_live` – The default Time to Live setting in seconds for the table.
  + `TAGS` – A list of key-value pair tags to be attached to the resource when it's created. 
  + `clustering_order` consists of the following:
    +  *`column_name`* – The name of the column. 
    +  *`ASC | DESC`* – Sets the ascending (`ASC`) or descending (`DESC`) order modifier. If it's not specified, the default order is `ASC`. 

**Example**

```
CREATE TABLE IF NOT EXISTS my_keyspace.my_table (
                                            id text,
                                            name text,
                                            region text,
                                            division text,
                                            project text,
                                            role text,
                                            pay_scale int,
                                            vacation_hrs float,
                                            manager_id text,
                                            PRIMARY KEY (id,division))
                                            WITH CUSTOM_PROPERTIES={
                                                'capacity_mode':{
                                                        'throughput_mode': 'PROVISIONED', 'read_capacity_units': 10, 'write_capacity_units': 20
                                                    },
                                                'point_in_time_recovery':{'status': 'enabled'},
                                                'encryption_specification':{
                                                        'encryption_type': 'CUSTOMER_MANAGED_KMS_KEY', 
                                                        'kms_key_identifier':'arn:aws:kms:eu-west-1:111122223333:key/11111111-1111-1111-1111-111111111111'
                                                    }
                                            }
                                            AND CLUSTERING ORDER BY (division ASC) 
                                            AND TAGS={'key1':'val1', 'key2':'val2'}
                                            AND default_time_to_live = 3024000;
```
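The auto scaling options described above can be combined with provisioned capacity mode at table creation. The following is a minimal sketch (the table name and capacity values are illustrative) that pairs a provisioned table with a target tracking policy. For verified examples, see [Configure automatic scaling on an existing table](autoscaling.configureTable.md).

```
CREATE TABLE my_keyspace.my_scaled_table (
    id text,
    name text,
    PRIMARY KEY (id))
    WITH CUSTOM_PROPERTIES={
        'capacity_mode':{
            'throughput_mode': 'PROVISIONED', 'read_capacity_units': 5, 'write_capacity_units': 5
        }
    }
    AND AUTOSCALING_SETTINGS={
        'provisioned_read_capacity_autoscaling_update':{
            'maximum_units': 100,
            'minimum_units': 5,
            'scaling_policy':{
                'target_tracking_scaling_policy_configuration':{
                    'target_value': 70
                }
            }
        }
    };
```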

In a table that uses clustering columns, non-clustering columns can be declared as static in the table definition. For more information about static columns, see [Estimate capacity consumption for static columns in Amazon Keyspaces](static-columns.md).

**Example**

```
CREATE TABLE my_keyspace.my_table (
                                            id int,
                                            name text,
                                            region text,
                                            division text,
                                            project text STATIC,
                                            PRIMARY KEY (id,division));
```

You can create a table with a column that uses a user-defined type (UDT). In the following example, the first statement creates a type, and the second statement creates a table with a column that uses that type.

**Example**

```
CREATE TYPE my_keyspace."udt""N@ME" (my_field int);
CREATE TABLE my_keyspace.my_table (my_col1 int PRIMARY KEY, my_col2 "udt""N@ME");
```

## ALTER TABLE
<a name="cql.ddl.table.alter"></a>

Use the `ALTER TABLE` statement to add new columns, add tags, or change the table's custom properties.

**Syntax**

```
alter_table_statement ::=  ALTER TABLE table_name    
 
        [ ADD ( column_definition | column_definition_list)  ] 
        [[ADD | DROP] TAGS {'key1':'val1', 'key2':'val2'}]            
        [ WITH table_options [ , ... ] ] ;
        
column_definition      ::=  column_name cql_type
```

Where:
+ `table_name` is the name of the table to be altered.
+ `column_definition` is the name of the column and data type to be added.
+ `column_definition_list` is a comma-separated list of columns placed inside parentheses.
+ `table_options` consist of the following:
  + `AUTOSCALING_SETTINGS` includes the optional auto scaling settings for provisioned tables. For syntax and detailed descriptions, see [CREATE TABLE](#cql.ddl.table.create). For examples, see [Configure automatic scaling on an existing table](autoscaling.configureTable.md).
  +  *`cdc`* – A `boolean` that specifies whether Amazon Keyspaces creates a change data capture (CDC) stream for the table. The default is `false`. To specify the view type when enabling a stream, set `cdc_specification` in `CUSTOM_PROPERTIES`. 
  +  *`CUSTOM_PROPERTIES`* – A map of settings specific to Amazon Keyspaces.
    +  `capacity_mode`: Specifies the read/write throughput capacity mode for the table. The options are `throughput_mode:PAY_PER_REQUEST` and `throughput_mode:PROVISIONED`. The provisioned capacity mode requires `read_capacity_units` and `write_capacity_units` as inputs. The default is `throughput_mode:PAY_PER_REQUEST`.
    +  `cdc_specification`: Specifies the `view_type` of the CDC stream. The options are:
      + `NEW_AND_OLD_IMAGES` – both versions of the row, before and after the change. This is the default.
      + `NEW_IMAGE` – the version of the row after the change.
      + `OLD_IMAGE` – the version of the row before the change.
      + `KEYS_ONLY` – the partition and clustering keys of the row that was changed.

      For more information about CDC streams, see [Working with change data capture (CDC) streams in Amazon Keyspaces](cdc.md). For code examples, see [Enable a CDC stream for an existing table in Amazon Keyspaces](keyspaces-enable-cdc-alter-table.md).
    +  `client_side_timestamps`: Specifies if client-side timestamps are enabled or disabled for the table. The options are `{'status': 'enabled'}` and `{'status': 'disabled'}`. If it's not specified, the default is `status:disabled`. After client-side timestamps are enabled for a table, this setting cannot be disabled. 
    +  `encryption_specification`: Specifies the encryption option for encryption at rest. The options are `encryption_type:AWS_OWNED_KMS_KEY` and `encryption_type:CUSTOMER_MANAGED_KMS_KEY`. The encryption option customer managed key requires the AWS KMS key in Amazon Resource Name (ARN) format as input: `kms_key_identifier:ARN`. 
    +  `point_in_time_recovery`: Specifies if point-in-time restore is enabled or disabled for the table. The options are `status:enabled` and `status:disabled`. The default is `status:disabled`.
    + `replica_updates`: Specifies the AWS Region specific settings of a multi-Region table. For a multi-Region table, you can configure the table's read capacity differently per AWS Region. You can do this by configuring the following parameters. For more information and examples, see [Update the provisioned capacity and auto scaling settings for a multi-Region table in Amazon Keyspaces](tables-mrr-autoscaling.md).
      + `region` – The AWS Region of the table replica with the following settings:
        + `read_capacity_units` 
    +  `ttl`: Enables Time to Live custom settings for the table. To enable, use `status:enabled`. The default is `status:disabled`. After `ttl` is enabled, you can't disable it for the table.
+ `default_time_to_live`: The default Time to Live setting in seconds for the table.
+ `TAGS` is a list of key-value pair tags to be attached to the resource.

**Note**  
With ALTER TABLE, you can only change a single custom property. You can't combine more than one ALTER TABLE command in the same statement.

**Examples**

The following statement shows how to add a column to an existing table.

```
ALTER TABLE mykeyspace.mytable ADD (ID int);
```

This statement shows how to add two collection columns to an existing table: 
+ A frozen collection column `col_frozen_list` that contains a nested frozen collection
+ A non-frozen collection column `col_map` that contains a nested frozen collection

```
ALTER TABLE my_table ADD (col_frozen_list FROZEN<LIST<FROZEN<SET<TEXT>>>>, col_map MAP<INT, FROZEN<SET<INT>>>);
```

The following example shows how to add a column that uses a user-defined type (UDT) to a table.

```
ALTER TABLE my_keyspace.my_table ADD (my_column my_udt);
```

To change a table's capacity mode and specify read and write capacity units, you can use the following statement.

```
ALTER TABLE mykeyspace.mytable WITH CUSTOM_PROPERTIES={'capacity_mode':{'throughput_mode': 'PROVISIONED', 'read_capacity_units': 10, 'write_capacity_units': 20}};
```

The following statement specifies a customer managed KMS key for the table.

```
ALTER TABLE mykeyspace.mytable WITH CUSTOM_PROPERTIES={     
              'encryption_specification':{ 
                      'encryption_type': 'CUSTOMER_MANAGED_KMS_KEY', 
                      'kms_key_identifier':'arn:aws:kms:eu-west-1:111122223333:key/11111111-1111-1111-1111-111111111111'     
                  } 
         };
```

To enable point-in-time restore for a table, you can use the following statement.

```
ALTER TABLE mykeyspace.mytable WITH CUSTOM_PROPERTIES={'point_in_time_recovery': {'status': 'enabled'}};
```

To set a default Time to Live value in seconds for a table, you can use the following statement.

```
ALTER TABLE my_table WITH default_time_to_live = 2592000;
```

This statement enables custom Time to Live settings for a table.

```
ALTER TABLE mytable WITH CUSTOM_PROPERTIES={'ttl':{'status': 'enabled'}};
```
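Statements along the following lines can enable a CDC stream or client-side timestamps on an existing table; each custom property must be changed in its own statement, and the stream's view type is one of the options listed above. For verified examples, see [Enable a CDC stream for an existing table in Amazon Keyspaces](keyspaces-enable-cdc-alter-table.md).

```
ALTER TABLE my_keyspace.my_table WITH cdc = true AND CUSTOM_PROPERTIES={'cdc_specification': {'view_type': 'NEW_AND_OLD_IMAGES'}};

ALTER TABLE my_keyspace.my_table WITH CUSTOM_PROPERTIES={'client_side_timestamps': {'status': 'enabled'}};
```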

## RESTORE TABLE
<a name="cql.ddl.table.restore"></a>

Use the `RESTORE TABLE` statement to restore a table to a point in time. This statement requires point-in-time recovery to be enabled on a table. For more information, see [Backup and restore data with point-in-time recovery for Amazon Keyspaces](PointInTimeRecovery.md).

**Syntax**

```
restore_table_statement ::=  
    RESTORE TABLE restored_table_name FROM TABLE source_table_name 
                    [ WITH table_options [ , ... ] ];
```

Where:
+ `restored_table_name` is the name of the restored table.
+ `source_table_name` is the name of the source table.
+ `table_options` consists of the following:
  + `restore_timestamp` is the restore point time in ISO 8601 format. If it's not specified, the current timestamp is used.
  +  *`CUSTOM_PROPERTIES`* – A map of settings specific to Amazon Keyspaces.
    +  `capacity_mode`: Specifies the read/write throughput capacity mode for the table. The options are `throughput_mode:PAY_PER_REQUEST` and `throughput_mode:PROVISIONED`. The provisioned capacity mode requires `read_capacity_units` and `write_capacity_units` as inputs. The default is the current setting from the source table.
    +  `encryption_specification`: Specifies the encryption option for encryption at rest. The options are `encryption_type:AWS_OWNED_KMS_KEY` and `encryption_type:CUSTOMER_MANAGED_KMS_KEY`. The encryption option customer managed key requires the AWS KMS key in Amazon Resource Name (ARN) format as input: `kms_key_identifier:ARN`. To restore a table encrypted with a customer managed key to a table encrypted with an AWS owned key, Amazon Keyspaces requires access to the AWS KMS key of the source table.
    +  `point_in_time_recovery`: Specifies if point-in-time restore is enabled or disabled for the table. The options are `status:enabled` and `status:disabled`. Unlike when you create new tables, the default status for restored tables is `status:enabled` because the setting is inherited from the source table. To disable PITR for restored tables, you must set `status:disabled` explicitly.
    + `replica_updates`: Specifies the AWS Region specific settings of a multi-Region table. For a multi-Region table, you can configure the table's read capacity differently per AWS Region. You can do this by configuring the following parameters. 
      + `region` – The AWS Region of the table replica with the following settings:
        + `read_capacity_units` 
  + `AUTOSCALING_SETTINGS` includes the optional auto scaling settings for provisioned tables. For detailed syntax and descriptions, see [CREATE TABLE](#cql.ddl.table.create).
  + `TAGS` is a list of key-value pair tags to be attached to the resource.

**Note**  
Deleted tables can only be restored to the time of deletion.

**Example**

```
RESTORE TABLE mykeyspace.mytable_restored from table mykeyspace.my_table 
WITH restore_timestamp = '2020-06-30T04:05:00+0000'
AND custom_properties = {'point_in_time_recovery':{'status':'disabled'}, 'capacity_mode':{'throughput_mode': 'PROVISIONED', 'read_capacity_units': 10, 'write_capacity_units': 20}}
AND TAGS={'key1':'val1', 'key2':'val2'};
```

## DROP TABLE
<a name="cql.ddl.table.drop"></a>

Use the `DROP TABLE` statement to remove a table from the keyspace.

**Syntax**

```
drop_table_statement ::=  
    DROP TABLE [ IF EXISTS ] table_name
```

Where:
+ `IF EXISTS` prevents `DROP TABLE` from failing if the table doesn't exist. (Optional)
+ `table_name` is the name of the table to be dropped.

**Example**

```
DROP TABLE my_keyspace.my_table;
```

# User-defined types (UDTs)
<a name="cql.ddl.type"></a>

*UDT* – A grouping of fields and data types that you can use to define a single column in Amazon Keyspaces. Valid data types for UDTs are all supported Cassandra data types, including collections and other UDTs that you've already created in the same keyspace. For more information about supported Cassandra data types, see [Cassandra data type support](cassandra-apis.md#cassandra-data-type).

```
user_defined_type::= udt_name
udt_name::= [ keyspace_name '.' ] identifier
```

**Topics**
+ [CREATE TYPE](#cql.ddl.type.create)
+ [DROP TYPE](#cql.ddl.type.drop)

## CREATE TYPE
<a name="cql.ddl.type.create"></a>

Use the `CREATE TYPE` statement to create a new type.

 **Syntax**

```
create_type_statement ::=  CREATE TYPE [ IF NOT EXISTS ] udt_name
    '('field_definition ( ',' field_definition)* ')'
            field_definition::= identifier cql_type
```

Where:
+ `IF NOT EXISTS` prevents `CREATE TYPE` from failing if the type already exists. (Optional)
+ `udt_name` is the fully-qualified name of the UDT in type format, for example `my_keyspace.my_type`. If you define the current keyspace with the `USE` statement, you don't need to specify the keyspace name.
+ `field_definition` consists of a name and a type.

The following table shows examples of allowed UDT names. The first column shows how to enter the name when you create the type, and the second column shows how Amazon Keyspaces formats the name internally. Amazon Keyspaces expects the formatted name for operations like `GetType`.


| Entered name | Formatted name | Note | 
| --- | --- | --- | 
|  MY\$1UDT  | my\$1udt | Without double-quotes, Amazon Keyspaces converts all upper-case characters to lower-case. | 
|  "MY\$1UDT"  | MY\$1UDT | With double-quotes, Amazon Keyspaces respects the upper-case characters, and removes the double-quotes from the formatted name. | 
|  "1234"  | 1234 | With double-quotes, the name can begin with a number, and Amazon Keyspaces removes the double-quotes from the formatted name. | 
|  "Special\$1Ch@r@cters<>\$1\$1"  | Special\$1Ch@r@cters<>\$1\$1 | With double-quotes, the name can contain special characters, and Amazon Keyspaces removes the double-quotes from the formatted name. | 
|  "nested""""""quotes"  | nested"""quotes | Amazon Keyspaces removes the outer double-quotes and the escape double-quotes from the formatted name. | 

**Examples**

```
CREATE TYPE my_keyspace.phone (
    country_code int,
    number text
);
```

You can nest UDTs if the nested UDT is frozen. For more information about default values and quotas for types, see [Amazon Keyspaces UDT quotas and default values](quotas.md#udt-table).

```
CREATE TYPE my_keyspace.user (
    first_name text,
    last_name text,
    phones FROZEN<phone>
);
```

For more code examples that show how to create UDTs, see [User-defined types (UDTs) in Amazon Keyspaces](udts.md).

## DROP TYPE
<a name="cql.ddl.type.drop"></a>

Use the `DROP TYPE` statement to delete a UDT. You can only delete a type that's not in use by another type or table.

 **Syntax**

```
drop_type_statement ::=  DROP TYPE [ IF EXISTS ] udt_name
```

Where:
+ `IF EXISTS` prevents `DROP TYPE` from failing if the type doesn't exist. (Optional)
+ `udt_name` is the fully-qualified name of the UDT in type format, for example `my_keyspace.my_type`. If you define the current keyspace with the `USE` statement, you don't need to specify the keyspace name.

**Example**

```
DROP TYPE udt_name;
```

# DML statements (data manipulation language) in Amazon Keyspaces
<a name="cql.dml"></a>

*Data manipulation language* (DML) is the set of Cassandra Query Language (CQL) statements that you use to manage data in Amazon Keyspaces (for Apache Cassandra) tables. You use DML statements to add, modify, or delete data in a table.

You also use DML statements to query data in a table. (Note that CQL doesn't support joins or subqueries.)

**Topics**
+ [SELECT](cql.dml.select.md)
+ [INSERT](cql.dml.insert.md)
+ [UPDATE](cql.dml.update.md)
+ [DELETE](cql.dml.delete.md)

# SELECT
<a name="cql.dml.select"></a>

Use a SELECT statement to query data. 

**Syntax**

```
select_statement ::=  SELECT  [ JSON ] ( select_clause | '*' )
                      FROM table_name
                      [ WHERE 'where_clause' ]
                      [ ORDER BY 'ordering_clause' ]
                      [ LIMIT (integer | bind_marker) ]
                      [ ALLOW FILTERING ]
select_clause    ::=  selector [ AS identifier ] ( ',' selector [ AS identifier ] )*
selector         ::=  column_name
                      | term
                      | CAST '(' selector AS cql_type ')'
                      | function_name '(' [ selector ( ',' selector )* ] ')'
where_clause     ::=  relation ( AND relation )*
relation         ::=  column_name operator term
                      | TOKEN '(' column_name ( ',' column_name )* ')' operator term
operator         ::=  '=' | '<' | '>' | '<=' | '>=' | IN | CONTAINS | CONTAINS KEY
ordering_clause  ::=  column_name [ ASC | DESC ] ( ',' column_name [ ASC | DESC ] )*
```

**Examples**

```
SELECT name, id, manager_id FROM "myGSGKeyspace".employees_tbl ;

SELECT JSON name, id, manager_id FROM "myGSGKeyspace".employees_tbl ;
```

For a table that maps JSON-encoded data types to Amazon Keyspaces data types, see [JSON encoding of Amazon Keyspaces data types](cql.elements.md#cql.data-types.JSON).

**Using the `IN` keyword**

The `IN` keyword specifies equality for one or more values. It can be applied to the partition key and the clustering column. Results are returned in the order the keys are presented in the `SELECT` statement.

**Examples**

```
SELECT * from mykeyspace.mytable WHERE primary.key1 IN (1,2) and clustering.key1 = 2;
SELECT * from mykeyspace.mytable WHERE primary.key1 IN (1,2) and clustering.key1 <= 2;
SELECT * from mykeyspace.mytable WHERE primary.key1 = 1 and clustering.key1 IN (1, 2);
SELECT * from mykeyspace.mytable WHERE primary.key1 <= 2 and clustering.key1 IN (1, 2) ALLOW FILTERING;
```

For more information about the `IN` keyword and how Amazon Keyspaces processes the statement, see [Use the `IN` operator with the `SELECT` statement in a query in Amazon Keyspaces](in.select.md).

**Ordering results**

The `ORDER BY` clause specifies the sort order of the returned results. It takes as arguments a list of column names along with the sort order for each column. You can only specify clustering columns in ordering clauses. Non-clustering columns are not allowed. The sort order options are `ASC` for ascending and `DESC` for descending sort order. If the sort order is omitted, the default ordering of the clustering column is used. For possible sort orders, see [Order results with `ORDER BY` in Amazon Keyspaces](ordering-results.md).

**Example**

```
SELECT name, id, division, manager_id FROM "myGSGKeyspace".employees_tbl WHERE id = '012-34-5678' ORDER BY division;
```

When you use `ORDER BY` with the `IN` keyword, results are ordered within each page. Re-ordering the full result set with pagination disabled is not supported.

**TOKEN**

You can apply the `TOKEN` function to the `PARTITION KEY` column in `SELECT` and `WHERE` clauses. With the `TOKEN` function, Amazon Keyspaces returns rows based on the mapped token value of the partition key rather than on its literal value.

`TOKEN` relations are not supported with the `IN` keyword.

**Examples**

```
SELECT TOKEN(id) from my_table; 

SELECT TOKEN(id) from my_table WHERE TOKEN(id) > 100 and TOKEN(id) < 10000;
```

**TTL function**

You can use the `TTL` function with the `SELECT` statement to retrieve the expiration time in seconds that is stored for a column. If no `TTL` value is set, the function returns `null`.

**Example**

```
SELECT TTL(my_column) from my_table;
```

The `TTL` function can’t be used on multi-cell columns such as collections.

**WRITETIME function**

You can use the `WRITETIME` function with the `SELECT` statement to retrieve the timestamp that is stored as metadata for the value of a column only if the table uses client-side timestamps. For more information, see [Client-side timestamps in Amazon Keyspaces](client-side-timestamps.md). 

```
SELECT WRITETIME(my_column) from my_table;
```

The `WRITETIME` function can’t be used on multi-cell columns such as collections.

**Note**  
For compatibility with established Cassandra driver behavior, tag-based authorization policies are not enforced when you perform operations on system tables by using Cassandra Query Language (CQL) API calls through Cassandra drivers and developer tools. For more information, see [Amazon Keyspaces resource access based on tags](security_iam_id-based-policy-examples.md#security_iam_id-based-policy-examples-tags).

# INSERT
<a name="cql.dml.insert"></a>

Use the `INSERT` statement to add a row to a table.

**Syntax**

```
insert_statement ::=  INSERT INTO table_name ( names_values | json_clause )
                      [ IF NOT EXISTS ]
                      [ USING update_parameter ( AND update_parameter )* ]
names_values     ::=  names VALUES tuple_literal
json_clause      ::=  JSON string [ DEFAULT ( NULL | UNSET ) ]                
names            ::=  '(' column_name ( ',' column_name )* ')'
```

**Example**

```
INSERT INTO "myGSGKeyspace".employees_tbl (id, name, project, region, division, role, pay_scale, vacation_hrs, manager_id)
VALUES ('012-34-5678','Russ','NightFlight','US','Engineering','IC',3,12.5, '234-56-7890') ;
```

**Update parameters**

`INSERT` supports the following values as `update_parameter`:
+ `TTL` – A time value in seconds. The maximum configurable value is 630,720,000 seconds, which is the equivalent of 20 years.
+ `TIMESTAMP` – A `bigint` value representing the number of microseconds since the standard base time known as the epoch: January 1, 1970 at 00:00:00 GMT. A timestamp in Amazon Keyspaces must fall within the range of 2 days in the past to 5 minutes in the future.

**Example**

```
INSERT INTO my_table (userid, time, subject, body, user)
        VALUES (B79CB3BA-745E-5D9A-8903-4A02327A7E09, 96a29100-5e25-11ec-90d7-b5d91eceda0a, 'Message', 'Hello','205.212.123.123')
        USING TTL 259200;
```
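A sketch of an insert that sets an explicit write timestamp with the `TIMESTAMP` parameter described above; the microsecond value is illustrative and must fall within the allowed range at execution time:

```
INSERT INTO my_table (userid, time, subject, body, user)
        VALUES (B79CB3BA-745E-5D9A-8903-4A02327A7E09, 96a29100-5e25-11ec-90d7-b5d91eceda0a, 'Message', 'Hello', '205.212.123.123')
        USING TIMESTAMP 1667468454000000;
```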

**JSON support**

For a table that maps JSON-encoded data types to Amazon Keyspaces data types, see [JSON encoding of Amazon Keyspaces data types](cql.elements.md#cql.data-types.JSON).

You can use the `JSON` keyword to insert a `JSON`-encoded map as a single row. For columns that exist in the table but are omitted in the JSON insert statement, use `DEFAULT UNSET` to preserve the existing values. Use `DEFAULT NULL` to write a NULL value into each row of omitted columns and overwrite the existing values (standard write charges apply). `DEFAULT NULL` is the default option.

**Example**

```
INSERT INTO "myGSGKeyspace".employees_tbl JSON '{"id":"012-34-5678",
                                                 "name": "Russ",
                                                 "project": "NightFlight",
                                                 "region": "US",
                                                 "division": "Engineering",
                                                 "role": "IC",
                                                 "pay_scale": 3,
                                                 "vacation_hrs": 12.5,
                                                 "manager_id": "234-56-7890"}';
```
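To write only the provided fields and preserve the existing values of omitted columns, append `DEFAULT UNSET`. A minimal sketch against the same table (the primary key columns `id` and `division` must be included; the `role` value is illustrative):

```
INSERT INTO "myGSGKeyspace".employees_tbl JSON '{"id":"012-34-5678",
                                                 "division": "Engineering",
                                                 "role": "Senior IC"}' DEFAULT UNSET;
```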

If the JSON data contains duplicate keys, Amazon Keyspaces stores the last value for the key (similar to Apache Cassandra). In the following example, where the duplicate key is `id`, the value `234-56-7890` is used.

**Example**

```
INSERT INTO "myGSGKeyspace".employees_tbl JSON '{"id":"012-34-5678",
                                                 "name": "Russ",
                                                 "project": "NightFlight",
                                                 "region": "US",
                                                 "division": "Engineering",
                                                 "role": "IC",
                                                 "pay_scale": 3,
                                                 "vacation_hrs": 12.5,
                                                 "id": "234-56-7890"}';
```

# UPDATE
<a name="cql.dml.update"></a>

Use the `UPDATE` statement to modify a row in a table.

**Syntax**

```
update_statement ::=  UPDATE table_name
                      [ USING update_parameter ( AND update_parameter )* ]
                      SET assignment ( ',' assignment )*
                      WHERE where_clause
                      [ IF ( EXISTS | condition ( AND condition )*) ]
update_parameter ::=  ( TTL | TIMESTAMP ) ( integer | bind_marker )
assignment       ::=  simple_selection '=' term
                     | column_name '=' column_name ( '+' | '-' ) term
                     | column_name '=' list_literal '+' column_name
simple_selection ::=  column_name
                     | column_name '[' term ']'
                     | column_name '.' field_name
condition        ::=  simple_selection operator term
```

**Example**

```
UPDATE "myGSGKeyspace".employees_tbl SET pay_scale = 5 WHERE id = '567-89-0123' AND division = 'Marketing' ;
```

To increment a `counter`, use the following syntax. For more information, see [Counters](cql.elements.md#cql.data-types.numeric.counters).

```
UPDATE ActiveUsers SET counter = counter + 1 WHERE user = A70FE1C0-5408-4AE3-BE34-8733E5A09F14 AND action = 'click';
```

**Update parameters**

`UPDATE` supports the following values as `update_parameter`:
+ `TTL` – A time value in seconds. The maximum configurable value is 630,720,000 seconds, which is the equivalent of 20 years.
+ `TIMESTAMP` – A `bigint` value representing the number of microseconds since the standard base time known as the epoch: January 1, 1970 at 00:00:00 GMT. A timestamp in Amazon Keyspaces must fall within the range of 2 days in the past to 5 minutes in the future.

**Example**

```
UPDATE my_table
        USING TIMESTAMP 1667468454000000
        SET body = 'Hello again'
        WHERE userid = B79CB3BA-745E-5D9A-8903-4A02327A7E09 AND time = 96a29100-5e25-11ec-90d7-b5d91eceda0a;
```

# DELETE
<a name="cql.dml.delete"></a>

Use the `DELETE` statement to remove a row from a table.

**Syntax**

```
delete_statement ::=  DELETE [ simple_selection ( ',' simple_selection )* ]
                      FROM table_name
                      [ USING update_parameter ( AND update_parameter )* ]
                      WHERE where_clause
                      [ IF ( EXISTS | condition ( AND condition )*) ]

simple_selection ::=  column_name
                     | column_name '[' term ']'
                     | column_name '.' field_name

condition        ::=  simple_selection operator term
```

Where:
+ `table_name` is the table that contains the row you want to delete.

**Example**

```
DELETE manager_id FROM "myGSGKeyspace".employees_tbl WHERE id='789-01-2345' AND division='Executive' ;
```

`DELETE` supports the following value as `update_parameter`:
+ `TIMESTAMP` – A `bigint` value representing the number of microseconds since the standard base time known as the epoch: January 1, 1970 at 00:00:00 GMT.
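A sketch of a delete that sets an explicit `TIMESTAMP` against the earlier `employees_tbl` example; the microsecond value is illustrative:

```
DELETE FROM "myGSGKeyspace".employees_tbl USING TIMESTAMP 1667468454000000 WHERE id='789-01-2345' AND division='Executive';
```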

# Built-in functions in Amazon Keyspaces
<a name="cql.functions"></a>

Amazon Keyspaces (for Apache Cassandra) supports a variety of built-in functions that you can use in Cassandra Query Language (CQL) statements.

**Topics**
+ [Scalar functions](#cql.functions.scalar)

## Scalar functions
<a name="cql.functions.scalar"></a>

A *scalar function* performs a calculation on a single value and returns the result as a single value. Amazon Keyspaces supports the following scalar functions.


****  

| Function | Description | 
| --- | --- | 
|  `blobAsType`  | Converts a blob into a value of the specified data type. | 
|  `cast`  | Converts one native data type into another native data type. | 
|  `currentDate`  | Returns the current date/time as a date. | 
|  `currentTime`  | Returns the current date/time as a time. | 
|  `currentTimestamp`  | Returns the current date/time as a timestamp. | 
|  `currentTimeUUID`  | Returns the current date/time as a timeuuid. | 
|  `fromJson`  | Converts the JSON string into the selected column's data type.  | 
|  `maxTimeuuid`  | Returns the largest possible timeuuid for a given timestamp or date string.  | 
|  `minTimeuuid`  | Returns the smallest possible timeuuid for a given timestamp or date string.  | 
|  `now`  | Returns a new unique timeuuid.  | 
|  `toDate`  | Converts either a timeuuid or a timestamp to a date type. | 
|  `toJson`  | Returns the column value of the selected column in JSON format.  | 
|  `token`  | Returns the hash value of the partition key.  | 
|  `toTimestamp`  | Converts either a timeuuid or a date to a timestamp. | 
|  `TTL`  | Returns the expiration time in seconds for a column. | 
|  `typeAsBlob`  | Converts the specified data type into a blob. | 
|  `toUnixTimestamp`  | Converts either a timeuuid or a timestamp into a `bigint`. | 
|  `uuid`  | Returns a random version 4 UUID.  | 
|  `writetime`  | Returns the timestamp of the value of the specified column. | 
|  `dateOf`  | (Deprecated) Extracts the timestamp of a timeuuid, and returns the value as a date. | 
|  `unixTimestampOf`  | (Deprecated) Extracts the timestamp of a timeuuid, and returns the value as a raw, 64-bit integer timestamp.  | 
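
As an illustration, several of these functions can be combined in a single `SELECT`. The following sketch, which reuses the `employees_tbl` example table from earlier sections, returns a column value as JSON alongside the current time derived from a generated `timeuuid`:

```
SELECT id, toJson(pay_scale), toTimestamp(now()) FROM "myGSGKeyspace".employees_tbl WHERE id = '012-34-5678';
```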